patent_id | description | length
---|---|---
11860510 | DESCRIPTION OF THE EMBODIMENTS An observation device including a finder optical system according to examples of the present invention is described below. The observation device of the present invention includes an image display element that displays an image and a finder optical system that is used for observation of the image displayed on an image display surface of the image display element. The finder optical system according to each of the examples is used for observation of the image displayed on the image display surface. The finder optical system according to each example includes a first lens with a positive refractive power, a second lens with a negative refractive power, and a third lens with a positive refractive power, arranged in this order from the image display surface side to the observation side. The finder optical system according to each example may include a fourth lens with a positive or negative refractive power on the observation side of the third lens. FIG. 1 is a lens cross-sectional view of an observation device of Example 1. FIG. 2 shows aberration diagrams of a finder optical system according to Example 1 in which the diopter thereof is −1.0 (the standard diopter). FIG. 3 is a lens cross-sectional view of an observation device of Example 2. FIG. 4 shows aberration diagrams of a finder optical system according to Example 2 in which the diopter thereof is −1.0. FIG. 5 is a lens cross-sectional view of an observation device of Example 3. FIG. 6 shows aberration diagrams of a finder optical system according to Example 3 in which the diopter thereof is −1.0. FIG. 7 is a lens cross-sectional view of an observation device of Example 4. FIG. 8 shows aberration diagrams of a finder optical system according to Example 4 in which the diopter thereof is −1.0. FIG. 9 is a lens cross-sectional view of an observation device of Example 5. FIG. 10 shows aberration diagrams of a finder optical system according to Example 5 in which the diopter thereof is −1.0.
FIG. 11 is a lens cross-sectional view of an observation device of Example 6. FIG. 12 shows aberration diagrams of a finder optical system according to Example 6 in which the diopter thereof is −1.0. FIG. 13 is a lens cross-sectional view of an observation device of Example 7. FIG. 14 shows aberration diagrams of a finder optical system according to Example 7 in which the diopter thereof is −1.0. FIG. 15 is a schematic view of a main portion of an image pickup apparatus including the observation device of the present invention. The finder optical system of each example is used for an observation device such as an electronic viewfinder of an image pickup apparatus such as a digital camera or a video camera. In the lens cross-sectional views, the left side is the image display surface side, while the right side is the observation side (the exit pupil side). In the lens cross-sectional views, L0 represents the finder optical system, Li represents an i-th lens, IP represents an image display surface of an image display element formed of a liquid crystal, an organic EL, or the like, EP represents an observation surface for observation (an eye point) (the exit pupil), and CGI represents a cover glass (a protection member). In FIGS. 3 and 13, Cl represents a reference position for calculation of the observation surface of the finder optical system L0. Among the aberration diagrams, in a spherical aberration diagram, a solid line d represents the d-line (with a wavelength of 587.6 nm), and a double dotted-dashed line F represents the F-line (with a wavelength of 486.1 nm). In an astigmatism diagram, ΔS (a solid line) represents a sagittal image plane of the d-line, and ΔM (a broken line) represents a meridional image plane of the d-line. Distortion is presented for the d-line. Lateral chromatic aberration is presented for the F-line. H represents a half of the diagonal length of the image display surface (the maximum image height).
In order to observe, at enlarged magnification, a small image display surface (display panel) of the image display element having a diagonal length of 20 mm or less with a viewing angle of 30 degrees or more, the finder optical system is required to have a strong positive refractive power (a power). To accomplish this, it is required to use lenses with a strong positive refractive power or with a strong negative refractive power. In this case, spherical aberration, field curvature, astigmatism, and chromatic aberration frequently occur in the finder optical system, and it is difficult to correct these aberrations. Due to the residual spherical aberration, field curvature, astigmatism, chromatic aberration, and so on, the optical performance during observation of the image display surface is deteriorated. In order to improve such aberrations, the finder optical system L0 according to each example has the following configuration. Specifically, the finder optical system L0 according to each example includes a first lens L1 with a positive refractive power, a second lens L2 with a negative refractive power, and a third lens L3 with a positive refractive power, arranged in this order from the image display surface (the object surface) IP side to the observation surface (the eye point) EP side. In each example, preferably, a fourth lens with a positive refractive power may be provided on the observation surface side of the third lens. The finder optical system L0 in the observation device of each example includes the first lens with a positive refractive power, the second lens with a negative refractive power, and the third lens with a positive refractive power arranged in this order from the image display surface IP side to the observation surface EP side. A focal length of the second lens L2 is represented by f2, and a focal length of the finder optical system L0 is represented by f.
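The trade-off described above can be checked numerically. The sketch below relates panel size and finder focal length to the apparent viewing angle, assuming the image is presented near the standard −1 diopter so that the apparent half angle ω satisfies tan ω ≈ H/f; this paraxial simplification is an assumption for illustration, not the patent's derivation.

```python
import math

def apparent_viewing_angle_deg(H_mm, f_mm):
    """Full apparent viewing angle 2*omega, with tan(omega) ~ H/f.

    H_mm: half of the panel diagonal (the maximum image height).
    f_mm: finder focal length. Simplified paraxial model (assumption).
    """
    return 2.0 * math.degrees(math.atan(H_mm / f_mm))

# A panel with a 20 mm diagonal gives H = 10 mm. For the H/f ratios the
# description later uses as bounds, the focal length and viewing angle are:
H = 10.0
for ratio in (0.31, 0.50):
    f = H / ratio
    print(f"H/f = {ratio:.2f} -> f = {f:.1f} mm, "
          f"2*omega = {apparent_viewing_angle_deg(H, f):.1f} deg")
```

Both ratios yield an apparent viewing angle above 30 degrees, consistent with the requirement stated above.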
A curvature radius of a lens surface of the second lens L2 on the image display surface IP side is represented by R21, a curvature radius of a lens surface of the second lens L2 on the observation surface EP side is represented by R22, and a half of a diagonal length of the image display surface IP is represented by H. In this case, the following conditional expressions are satisfied: −0.70<f2/f<−0.20 (1); 0.7<(R22+R21)/(R22−R21)<1.4 (2); and 0.31<H/f<0.50 (3). Next, the technical meanings of the above conditional expressions are described. The conditional expression (1) defines the focal lengths of the second lens L2 and the entire finder optical system L0. In order to satisfactorily correct the axial chromatic aberration in the finder optical system L0, the negative lens needs to have a strong negative power (refractive power) above a certain level. When the ratio exceeds the upper limit value in the conditional expression (1), it becomes difficult to correct the axial chromatic aberration, which is not preferable. On the other hand, when the power is weakened such that the ratio falls below the lower limit value in the conditional expression (1), it becomes difficult to correct the various aberrations in the finder optical system L0, and it becomes difficult to obtain good optical characteristics. The conditional expression (2) defines the lens shape of the second lens L2. When the ratio exceeds the upper limit value in the conditional expression (2), the difference between the curvature radii of the lens surfaces of the second lens L2 on the image display surface IP side and on the observation surface EP side becomes too small, and thus the chromatic aberration is not sufficiently corrected.
On the other hand, when the ratio falls below the lower limit value in the conditional expression (2), the curvature radius of the lens surface of the second lens L2 on the image display surface IP side becomes too small, and thus it becomes difficult to correct the various aberrations, which is not preferable. The conditional expression (3) shows a condition required to obtain a wide viewing angle using a small image display element. When the ratio exceeds the upper limit value or falls below the lower limit value in the conditional expression (3), it becomes difficult to obtain a wide viewing angle with the small image display element, which is not preferable. In each example, preferably, one or more of the following conditional expressions are satisfied. A curvature radius of a lens surface of the first lens L1 on the observation side is represented by R12. A focal length of the first lens L1 is represented by f1. A focal length of the third lens L3 is represented by f3. A curvature radius of a lens surface of the third lens L3 on the image display surface side is represented by R31. The second lens L2 includes at least one aspherical surface. A refractive index for the d-line of the material forming the second lens L2 is represented by nd, and an Abbe number based on the d-line of the material forming the second lens L2 is represented by νd. A length on the optical axis, at a diopter of −1, from the lens surface closest to the image display surface in the finder optical system L0 to the lens surface closest to the observation surface EP therein is represented by dL. In this case, preferably, one or more of the following conditional expressions are satisfied: −8.5<(R21+R12)/(R21−R12)<−2.0 (4); 0.30<f1/f<1.00 (5); 0.50<f3/f1<3.10 (6); −3.0<(R31−R22)/(R31+R22)<10.0 (7); 1.58<nd<1.95 (8); 15<νd<32 (9); and 0.90<dL/f<1.65 (10). Next, the technical meanings of the above conditional expressions are described.
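Conditional expressions of this form can be checked mechanically against a design's computed ratios. A sketch for expressions (1) to (3), using the values reported for Example 1 in Table 1 of this description; the helper function and dictionary layout are illustrative, not part of the patent:

```python
# Conditional expressions (1)-(3) from the description, as (lower, upper) bounds.
CONDITIONS = {
    "(1) f2/f":                (-0.70, -0.20),
    "(2) (R22+R21)/(R22-R21)": (0.7,   1.4),
    "(3) H/f":                 (0.31,  0.50),
}

# Ratios reported for Example 1 in Table 1 of this description.
EXAMPLE_1 = {
    "(1) f2/f": -0.580,
    "(2) (R22+R21)/(R22-R21)": 0.973,
    "(3) H/f": 0.365,
}

def satisfies_all(values, conditions=CONDITIONS):
    """Return True if every ratio lies strictly inside its (lower, upper) range."""
    return all(lo < values[name] < hi for name, (lo, hi) in conditions.items())

print(satisfies_all(EXAMPLE_1))  # Example 1 meets expressions (1)-(3)
```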
The conditional expression (4) defines the shape of an air lens formed between the first lens L1 and the second lens L2. In order to correct the lateral chromatic aberration satisfactorily, the ray entering the lens surface of the second lens L2 on the image display surface IP side needs to enter at a large incident angle. Thus, preferably, the lens surface of the first lens L1 on the observation surface EP side has a strong power above a certain level. When the ratio exceeds the upper limit value in the conditional expression (4), the power of the lens surface of the first lens L1 on the observation surface EP side becomes insufficient, and it becomes difficult to correct the lateral chromatic aberration. When the ratio falls below the lower limit value in the conditional expression (4), the power of the lens surface of the first lens L1 on the observation surface EP side becomes too strong, and it becomes difficult to correct the various aberrations. The conditional expression (5) defines the ratio between the focal length of the entire finder optical system L0 and the focal length of the first lens L1. When the ratio falls below the lower limit value in the conditional expression (5), it becomes difficult to obtain a wide viewing angle, which is not preferable. When the ratio exceeds the upper limit value in the conditional expression (5), the optical performance deteriorates, which is not preferable. The conditional expression (6) defines the ratio between the focal length of the first lens L1 and the focal length of the third lens L3. When the ratio falls below the lower limit value in the conditional expression (6), it becomes difficult to obtain a wide viewing angle, which is not preferable. When the ratio exceeds the upper limit value in the conditional expression (6), the chromatic aberration increases and the optical performance deteriorates, which is not preferable.
The conditional expression (7) defines the shape of an air lens formed between the second lens L2 and the third lens L3. When the ratio exceeds the upper limit value or falls below the lower limit value in the conditional expression (7), the lens surface of the second lens L2 on the observation surface EP side and the lens surface of the third lens L3 on the image display surface IP side become too far from each other. This makes it difficult to sufficiently secure the effective range of the third lens L3, and the eye relief becomes short. Thus, this is not preferable. The conditional expression (8) defines the refractive index of the material forming the second lens L2. When the value falls below the lower limit value in the conditional expression (8), the curvature of the lens surface becomes too strong, and molding thereof becomes difficult. When the value exceeds the upper limit value in the conditional expression (8), the Petzval sum of the finder optical system L0 increases, and the field curvature and the astigmatism increase. Thus, this is not preferable. The conditional expression (9) defines the Abbe number of the material forming the second lens L2. When the Abbe number νd of the material forming the second lens L2 becomes so small that the value falls below the lower limit value in the conditional expression (9), the chromatic aberration is corrected excessively, which is not preferable. When the value exceeds the upper limit value in the conditional expression (9), the correction of the chromatic aberration becomes insufficient, which is not preferable. The conditional expression (10) defines the ratio between the optical total length of the finder optical system L0 and the focal length of the entire finder optical system L0.
When the ratio exceeds the upper limit value in the conditional expression (10), the positions of the principal points of the finder optical system L0 become far from the image display surface IP, and it becomes difficult to obtain a wide viewing angle. Thus, this is not preferable. When the ratio falls below the lower limit value in the conditional expression (10), it becomes difficult to make the curvature of each lens sufficiently strong, and it becomes difficult to obtain a wide viewing angle. Thus, this is not preferable. A lens surface herein means a surface having a power. In addition, an optical element having no refractive power, such as a flat lens, may be inserted in front of, behind, or in the middle of the finder optical system. Preferably, the numerical ranges of the conditional expressions (1) to (10) are replaced with the numerical ranges of the conditional expressions (1a) to (10a): −0.64<f2/f<−0.30 (1a); 0.72<(R22+R21)/(R22−R21)<1.40 (2a); 0.31<H/f<0.45 (3a); −8.0<(R21+R12)/(R21−R12)<−2.0 (4a); 0.35<f1/f<1.00 (5a); 0.50<f3/f1<3.08 (6a); −3.0<(R31−R22)/(R31+R22)<9.0 (7a); 1.60<nd<1.94 (8a); 15<νd<25 (9a); and 1.10<dL/f<1.55 (10a). More preferably, the numerical ranges of the conditional expressions (1) to (10) are replaced with the numerical ranges of the conditional expressions (1b) to (10b): −0.64<f2/f<−0.31 (1b); <(R22+R21)/(R22−R21)<1.38 (2b); 0.31<H/f<0.40 (3b); −7.8<(R21+R12)/(R21−R12)<−2.2 (4b); 0.40<f1/f<0.95 (5b); 0.60<f3/f1<3.06 (6b); −2.0<(R31−R22)/(R31+R22)<8.0 (7b); 1.60<nd<1.93 (8b); 19<νd<24 (9b); and 1.10<dL/f<1.50 (10b). The finder optical system L0 of each example includes a mechanism that enables diopter adjustment. In Examples 1, 3, 4, 5, and 6, the diopter is adjusted by moving the first lens L1 to the third lens L3 integrally in the optical axis direction while the lens closest to the observation surface EP does not move.
Since a general electronic-viewfinder optical system adjusts the diopter by moving all the lenses in the optical axis direction, it is required to insert a protection glass on the observation surface EP side of the last lens for dust prevention. On the other hand, when the diopter is adjusted while the lens closest to the observation surface EP does not move, no protection glass is required, and the finder optical system L0 can be downsized. Thus, this is preferable. Additionally, with no protection glass inserted, ghosts due to reflections from the protection glass can be prevented, which is also preferable. Preferably, the finder optical system L0 of each example includes the first lens L1 with a positive refractive power, the second lens L2 with a negative refractive power, the third lens L3 with a positive refractive power, and the fourth lens L4 with a positive refractive power, arranged in this order from the image display surface IP side to the observation surface EP side. This configuration makes it possible to correct the various aberrations satisfactorily, which is preferable. Next, an embodiment of an image pickup apparatus using the observation device of each example is described with reference to FIG. 15. An object image formed by an image pickup optical system 101 is converted to an electric signal by an image pickup element 102, which is a photoelectric conversion element. A CCD sensor, a CMOS sensor, or the like may be used as the image pickup element 102. The output signal from the image pickup element 102 is processed by an image processing circuit 103, and an image is formed. The thus-formed image is recorded in a recording medium 104 such as a semiconductor memory, a magnetic tape, or an optical disc. The image formed by the image processing circuit 103 is displayed in a finder optical system unit 105.
The finder optical system unit 105 includes an image display element 1051 and a finder optical system 1052 of each example. The image display element 1051 is formed of a liquid crystal display element (LCD), an organic EL element, or the like. As described above, with the finder optical system of the present invention applied to an image pickup apparatus such as a digital camera or a video camera, it is possible to obtain an image pickup apparatus that has a small size with a wide viewing angle and high optical performance. Numerical data for each of the examples of the present invention is shown below. In the numerical data, in order from the image display surface IP to the observation surface EP, "ri" represents a paraxial curvature radius of an i-th surface. r1 and r2 are surfaces of the image display element, and r1 is the image display surface. r3 and r4 are surfaces of the protection member CGI. The last surface is the observation surface EP. di represents an on-axis surface distance between an i-th surface and an (i+1)-th surface, in order from the image display surface IP. Additionally, ndi represents a refractive index of the material between an i-th surface and an (i+1)-th surface with respect to the d-line (wavelength = 587.6 nm), and νdi represents the Abbe number of the material between an i-th surface and an (i+1)-th surface with respect to the d-line. The unit of length used in the numerical data is [mm] unless otherwise stated. However, since the finder optical system can obtain similar optical performance even under proportional enlargement or proportional reduction, the unit is not limited to [mm], and any other suitable unit can be used.
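The aspheric surface definition used in the numerical data can be evaluated directly. A sketch, using the coefficients listed for the 6th surface of Numerical data 1 (R = −9.016, K = −5.568E+00, A4 = −2.901E−04, A6 = 2.711E−06); the function name and the sampled height are illustrative choices, not part of the patent:

```python
import math

def asphere_sag(h, R, k, A4=0.0, A6=0.0, A8=0.0, A10=0.0):
    """Sagitta X of an even asphere: conic term plus polynomial terms.

    X = (h^2/R) / (1 + sqrt(1 - (1+k)(h/R)^2))
        + A4*h^4 + A6*h^6 + A8*h^8 + A10*h^10
    """
    conic = (h * h / R) / (1.0 + math.sqrt(1.0 - (1.0 + k) * (h / R) ** 2))
    return conic + A4 * h**4 + A6 * h**6 + A8 * h**8 + A10 * h**10

# 6th surface of Numerical data 1: R = -9.016, K = -5.568E+00,
# A4 = -2.901E-04, A6 = 2.711E-06 (A8 = A10 = 0).
sag = asphere_sag(3.0, R=-9.016, k=-5.568, A4=-2.901e-4, A6=2.711e-6)
print(f"sag at h = 3.0 mm: {sag:.4f} mm")
```

The sag is negative here because the paraxial curvature radius is negative, i.e. the surface curves toward the image display side.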
A surface with an index of "*" written in the column of the curvature radius in the numerical data has an aspheric surface shape defined by the following expression:

X = (h²/R) / (1 + √(1 − (1 + k)(h/R)²)) + A4·h⁴ + A6·h⁶ + A8·h⁸ + A10·h¹⁰

In the above expression, X is a distance in the optical axis direction from the vertex of the lens surface, h is a height in a direction perpendicular to the optical axis, R is a paraxial curvature radius at the vertex of the lens surface, k is a conic constant, and A4, A6, A8, and A10 are aspheric polynomial coefficients. In the aspheric surface coefficients, "E−i" represents an exponential expression with base 10, that is, "×10^−i." The calculation results of the above-described conditional expressions using the numerical data are shown in Table 1.

Numerical data 1 [Unit: mm]

(Surface data)
Surface no.  r                        d           nd      νd
1            (Image display surface)  0.70        1.5210  65.1
2            ∞                        4.00        —       —
3            ∞                        0.50        1.5210  65.1
4            ∞                        (variable)  —       —
5            28.759                   5.49        1.5348  55.7
6            −9.016 *                 2.25        —       —
7            −6.750 *                 1.90        1.6510  21.5
8            500.000                  0.30        —       —
9            500.000                  6.22        1.8820  37.2
10           −12.694 *                (variable)  —       —
11           −37.112                  2.68        1.4917  57.4
12           −24.664 *                23.00       —       —
13           ∞                        —           —       —

(Various data)
Diopter       −4     −1     0      +2
Focal length  17.77  17.64  17.61  17.53
d4            0.80   1.83   2.12   2.76
d10           2.26   1.23   0.94   0.30

(Aspheric surface data)
6th surface:  K = −5.568E+00, A4 = −2.901E−04, A6 = 2.711E−06, A8 = 0.000E+00, A10 = 0.000E+00
7th surface:  K = −3.409E+00, A4 = −6.069E−04, A6 = 5.492E−06, A8 = −2.365E−08, A10 = 1.639E−11
10th surface: K = −1.132E+00, A4 = 1.869E−05, A6 = −1.957E−07, A8 = 1.976E−09, A10 = −6.316E−12
12th surface: K = 0.000E+00, A4 = −3.603E−05, A6 = 0.000E+00, A8 = 0.000E+00, A10 = 0.000E+00

Numerical data 2 [Unit: mm]

(Surface data)
Surface no.  r                        d           nd      νd
1            (Image display surface)  0.70        1.5210  65.1
2            ∞                        4.00        —       —
3            ∞                        0.50        1.5210  65.1
4            ∞                        (variable)  —       —
5            30.304                   5.70        1.5348  55.7
6            −10.014                  2.53        —       —
7            −7.109 *                 1.90        1.6510  21.5
8            −87.372                  0.32        —       —
9            −74.632                  5.13        1.8820  37.2
10           −12.928 *                0.30        —       —
11           −27.934                  3.06        1.4917  57.4
12           −18.082 *                (variable)  —       —
13           ∞                        24.20       —       —
14           ∞                        —           —       —

(Various data)
Diopter       −4     −1     0      +2
Focal length  18.86  18.86  18.86  18.86
d4            1.77   2.99   3.33   4.05
d12           2.58   1.36   1.02   0.30

(Aspheric surface data)
6th surface:  K = −2.116E+00, A4 = −9.295E−06, A6 = 3.036E−07, A8 = 0.000E+00, A10 = 0.000E+00
7th surface:  K = −1.326E+00, A4 = −1.303E−04, A6 = 4.463E−07, A8 = −9.594E−09, A10 = 3.399E−12
10th surface: K = −2.888E+00, A4 = −1.052E−04, A6 = 7.056E−07, A8 = −3.496E−09, A10 = 1.109E−11
12th surface: K = 0.000E+00, A4 = −2.007E−05, A6 = 0.000E+00, A8 = 0.000E+00, A10 = 0.000E+00

Numerical data 3 [Unit: mm]

(Surface data)
Surface no.  r                        d           nd      νd
1            (Image display surface)  0.70        1.5210  65.1
2            ∞                        4.00        —       —
3            ∞                        0.50        1.5210  65.1
4            ∞                        (variable)  —       —
5            51.148                   5.70        1.8014  45.5
6            −14.266 *                3.40        —       —
7            −6.050 *                 1.90        1.6510  21.5
8            −39.526                  0.30        —       —
9            −62.258                  4.01        1.8820  37.2
10           −13.796 *                (variable)  —       —
11           −277.437                 3.39        1.5348  55.7
12           −25.343 *                23.00       —       —
13           ∞                        —           —       —

(Various data)
Diopter       −4     −1     0      +2
Focal length  19.73  19.26  19.13  18.90
d4            2.28   3.85   4.29   5.10
d10           3.11   1.54   1.11   0.30

(Aspheric surface data)
6th surface:  K = −6.200E−01, A4 = 6.389E−05, A6 = −1.916E−07, A8 = 0.000E+00, A10 = 0.000E+00
7th surface:  K = −1.481E+00, A4 = −5.934E−05, A6 = 1.082E−06, A8 = −1.767E−08, A10 = 9.673E−11
10th surface: K = −1.569E+00, A4 = 4.327E−05, A6 = −8.443E−08, A8 = −5.940E−10, A10 = 3.680E−12
12th surface: K = 0.000E+00, A4 = 8.877E−06, A6 = 0.000E+00, A8 = 0.000E+00, A10 = 0.000E+00

Numerical data 4 [Unit: mm]

(Surface data)
Surface no.  r                        d           nd      νd
1            (Image display surface)  0.70        1.5210  65.1
2            ∞                        4.00        —       —
3            ∞                        0.50        1.5210  65.1
4            ∞                        (variable)  —       —
5            17.677                   4.68        1.5348  55.7
6            −13.331 *                3.18        —       —
7            −5.474 *                 1.90        1.6510  21.5
8            51.687                   0.30        —       —
9            18.072                   6.60        1.8820  37.2
10           −12.423                  (variable)  —       —
11           19.845                   3.53        1.4917  57.4
12           17.895 *                 23.00       —       —
13           ∞                        —           —       —

(Various data)
Diopter       −4     −1     0      +2
Focal length  16.11  16.13  16.13  16.14
d4            0.80   1.64   1.88   2.41
d10           1.92   1.07   0.83   0.30

(Aspheric surface data)
6th surface:  K = −1.511E+01, A4 = −3.626E−04, A6 = 3.676E−07, A8 = 4.016E−08, A10 = 2.847E−10
7th surface:  K = −8.205E−01, A4 = 6.553E−05, A6 = 9.571E−06, A8 = −5.137E−08, A10 = 3.464E−10
10th surface: K = −9.854E−01, A4 = 1.498E−04, A6 = 2.644E−06, A8 = −3.043E−08, A10 = 1.147E−10
12th surface: K = 0.000E+00, A4 = −8.612E−05, A6 = 0.000E+00, A8 = 0.000E+00, A10 = 0.000E+00

Numerical data 5 [Unit: mm]

(Surface data)
Surface no.  r                        d           nd      νd
1            (Image display surface)  0.70        1.5210  65.1
2            ∞                        4.00        —       —
3            ∞                        0.50        1.5210  65.1
4            ∞                        (variable)  —       —
5            86.958                   5.70        1.5348  55.7
6            −10.312 *                2.66        —       —
7            −7.606 *                 2.04        1.6510  21.5
8            −223.611                 0.30        —       —
9            429.599                  6.60        1.8820  37.2
10           −15.040 *                (variable)  —       —
11           −56.734                  2.45        1.4917  57.4
12           −31.623 *                23.00       —       —
13           ∞                        —           —       —

(Various data)
Diopter       −4     −1     0      +2
Focal length  21.67  21.44  21.38  21.25
d4            2.58   4.20   4.62   5.58
d10           3.30   1.69   1.26   0.30

(Aspheric surface data)
6th surface:  K = −3.294E+00, A4 = −7.628E−05, A6 = 2.717E−07, A8 = 0.000E+00, A10 = 0.000E+00
7th surface:  K = −1.217E+00, A4 = 9.753E−05, A6 = −3.074E−06, A8 = 2.442E−08, A10 = −8.684E−11
10th surface: K = −2.827E+00, A4 = −4.426E−05, A6 = 2.231E−08, A8 = 5.293E−10, A10 = −1.571E−12
12th surface: K = 0.000E+00, A4 = −1.150E−05, A6 = 0.000E+00, A8 = 0.000E+00, A10 = 0.000E+00

Numerical data 6 [Unit: mm]

(Surface data)
Surface no.  r                        d           nd      νd
1            (Image display surface)  0.70        1.5210  65.1
2            ∞                        4.00        —       —
3            ∞                        0.50        1.5210  65.1
4            ∞                        (variable)  —       —
5            17.597                   5.40        1.6500  55.7
6            −6.513                   1.37        —       —
7            −5.000 *                 1.90        1.6510  21.5
8            38.280                   2.64        —       —
9            −50.658                  6.60        1.6889  31.2
10           −13.301                  (variable)  —       —
11           35.024                   2.82        1.4917  57.4
12           −112.158 *               23.00       —       —
13           ∞                        —           —       —

(Various data)
Diopter       −4     −1     0      +2
Focal length  20.20  19.72  19.60  19.34
d4            0.79   2.35   2.75   3.72
d10           3.23   1.67   1.26   0.30

(Aspheric surface data)
6th surface:  K = −5.788E+00, A4 = 1.890E−06, A6 = 2.288E−06, A8 = −9.202E−08, A10 = 1.060E−09
7th surface:  K = −4.699E+00, A4 = 9.666E−05, A6 = −9.420E−06, A8 = 1.368E−07, A10 = −4.181E−10
10th surface: K = −6.268E+00, A4 = −1.915E−04, A6 = 7.243E−07, A8 = 0.000E+00, A10 = 0.000E+00
12th surface: K = 1.158E+02, A4 = −2.404E−04, A6 = 2.595E−07, A8 = 0.000E+00, A10 = 0.000E+00

Numerical data 7 [Unit: mm]

(Surface data)
Surface no.  r                        d           nd      νd
1            (Image display surface)  0.70        1.5210  65.1
2            ∞                        4.00        —       —
3            ∞                        0.50        1.5210  65.1
4            ∞                        (variable)  —       —
5            17.463                   5.58        1.5348  55.7
6            −7.606 *                 2.56        —       —
7            −5.305 *                 3.87        1.9229  20.9
8            −124.374                 0.28        —       —
9            26.586                   6.67        1.8820  37.2
10           −13.092 *                (variable)  —       —
11           ∞                        23.00       —       —
12           ∞                        —           —       —

(Various data)
Diopter       −4     −1     0      +2
Focal length  18.87  18.87  18.87  18.87
d4            0.78   1.94   2.27   3.00
d10           3.15   1.99   1.66   0.93

(Aspheric surface data)
6th surface:  K = −1.131E+01, A4 = −8.630E−06, A6 = 1.001E−07, A8 = 0.000E+00, A10 = 0.000E+00
7th surface:  K = −6.460E+00, A4 = −1.245E−03, A6 = 2.759E−05, A8 = −2.863E−07, A10 = 1.188E−09
10th surface: K = −1.007E+01, A4 = −2.248E−04, A6 = 3.591E−06, A8 = −2.224E−08, A10 = 5.310E−11

TABLE 1

Conditional  (1)     (2)        (3)    (4)        (5)    (6)    (7)        (8)     (9)   (10)
expression   f2/f    (R22+R21)/ H/f    (R21+R12)/ f1/f   f3/f1  (R31−R22)/ nd      νd    dL/f
                     (R22−R21)         (R21−R12)                (R31+R22)
Example 1    −0.580  0.973      0.365  −6.957     0.768  1.044   0.000     1.6510  21.5  1.24
Example 2    −0.636  1.177      0.341  −5.895     0.785  1.153  −0.079     1.6510  21.5  1.18
Example 3    −0.587  1.361      0.336  −2.473     0.757  1.336   0.223     1.6510  21.5  1.26
Example 4    −0.465  0.808      0.316  −2.393     0.930  0.619  −0.482     1.6510  21.5  1.42
Example 5    −0.568  1.070      0.393  −6.621     0.823  0.943   3.171     1.6510  21.5  1.20
Example 6    −0.341  0.769      0.328  −7.609     0.409  3.045   7.185     1.6510  21.5  1.26
Example 7    −0.323  1.089      0.341  −5.611     0.569  1.005  −1.544     1.9229  20.9  1.21

Although the preferable embodiments of the present invention are described above, the present invention is not limited to such embodiments, and various modifications and variations can be made within the scope of the gist thereof. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. This application claims the benefit of Japanese Patent Application No. 2018-226995, filed Dec. 4, 2018, which is hereby incorporated by reference herein in its entirety. | 25,714 |
11860511 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS Embodiments of the present invention will be described in detail below with reference to the figures. Note that in the figures for describing these embodiments, in most cases the same reference characters are used for components that are the same, and redundant descriptions of such components will be omitted. Embodiment 1 <Basic Configuration of Image Pickup Device> Next, the basic configuration of an image pickup device according to Embodiment 1 of the present invention will be described with reference to FIG. 1. FIG. 1 is a block diagram illustrating the basic configuration of the image pickup device according to Embodiment 1 of the present invention. As illustrated in FIG. 1, the image pickup device includes a main camera 20 serving as a first camera sensor onto which an image of a subject is projected via an objective lens 10, a digital signal processor (DSP) 30 that processes the output signal from the main camera 20, an electronic view finder (EVF) 40 serving as a display unit that displays an image of a capturing area that represents the region around the subject that will be captured, an object extraction unit 50 that extracts the object to use as the subject, a database 60 that stores information for recognizing objects, a spatial frequency calculator 70 that calculates focusing conditions for bringing the subject into focus, a sub-camera 80 serving as a second camera sensor and constituted by a camera sensor or the like that captures the same subject as the main camera 20 but does so at a wider angle than the main camera 20, a subject tracking unit 90 that tracks the subject, a distance information calculator 100 that calculates information about the distance to the subject, and an autofocus (AF) controller 110 that uses a motor to increase and decrease the relative distance between the objective lens 10 and the main camera 20 in accordance with the distance information calculated by the distance information calculator 100 in order to keep
the subject in focus. Together, the DSP 30, the object extraction unit 50, the spatial frequency calculator 70, the subject tracking unit 90, the distance information calculator 100, and the AF controller 110 form a processing unit. <Operation of Image Pickup Device> Next, the operation of the image pickup device according to Embodiment 1 of the present invention will be described with reference to FIG. 1. First, an image of the subject that is projected via the objective lens 10 onto the main camera 20 is converted to an electronic signal and input to the DSP 30. Here, the input signal is converted to a YUV signal that includes brightness and color difference signals and is simultaneously presented to the user in the EVF 40 so that the user can pan the image pickup device to track the subject, particularly when the subject is moving. The output signal from the DSP 30 is also input to the object extraction unit 50. The object extraction unit 50 extracts a plurality of subjects contained in the image represented by the input signal. For a person's face, for example, the object extraction unit 50 checks for eyes, nose, and mouth, or the face outline for profile shots, in order to extract the person's face such that the same subject can continue to be recognized even if the point of view changes due to changes in the relationship between the orientations of the subject and the camera. For people's faces, various other characteristics such as skin color and hair color, for example, can also be taken into account when the subject is identified. Information needed to continue recognizing the same subject even when the orientation from which the subject is viewed changes, such as the constituent elements, shapes, and colors of subjects that are commonly captured by users, is stored in the database 60 in advance. For people, for example, the data stored in the database 60 includes data elements such as the shapes and colors that are characteristic of situations such as walking, running, or sitting.
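The luminance/color-difference conversion performed by the DSP 30 is not specified in detail here; a common choice is the BT.601 weighting, sketched below. The exact matrix the device uses is an assumption for illustration:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB sample (0-255 floats) to Y (brightness) and U, V
    (color-difference) components using BT.601 weights -- a plausible
    stand-in for the YUV signal the DSP 30 produces; the patent does not
    fix the exact conversion matrix."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # blue color-difference signal
    v = 0.877 * (r - y)   # red color-difference signal
    return y, u, v

# A neutral gray carries brightness only: both color differences are zero.
print(rgb_to_yuv(128, 128, 128))
```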
Moreover, for birds, for example, the data includes data elements such as the characteristic shapes and colors of small birds or birds of prey, and differences in flight characteristics, such as the tendency of long-necked birds such as herons or swans to extend or bend the neck. For animals such as dogs, cats, or other animals, for example, the data includes data elements such as the characteristic shapes and colors of each animal. For automobiles such as sedans, sports cars, wagons, trucks, and racing cars, for example, the data includes data elements such as the characteristic shapes and colors of each type of automobile. For trains such as electric trains, steam trains, and other types of trains, for example, the data includes data elements such as the characteristic shapes and colors of each type of train. The data stored in the database 60 is supplied to the object extraction unit 50. The data includes information that makes it possible to continue tracking subjects even if the orientations of those subjects change while those subjects are tracked by the image pickup device, particularly when those subjects are moving at high speed. Here, the color and shape data is used in an algorithm that can recognize subjects from their overall shapes and color mixtures even when the focus shifts away from the subject and the image becomes slightly blurry. In the captured image that is captured by the image pickup device, there may be several subjects that are initially present in the capturing area when the capturing process starts but later leave the capturing area during the capturing process, or several subjects that only enter the capturing area after the capturing process starts and then continue to remain in the capturing area.
However, these subjects can be tracked for a prescribed period of time, and then the subjects that always remain in the capturing area can be recognized as the actual subjects, that is, the objects that the user wants to capture, for example. Here, the subject that remains positioned in the approximate center of the main camera 20 for the longest period of time can be given a heavier recognition weight as the object that the user actually wants to capture among the several subjects that may be present. This process and determination are handled by the object extraction unit 50. The subject tracking unit 90 is also notified of the object that is recognized as the subject so that the same subject can also be tracked in the captured image that is captured by the sub-camera 80. Furthermore, the portion of the information needed to calculate the distance to the subject is sent to the distance information calculator 100, and the subject captured by the sub-camera 80 continues to be tracked even if that subject temporarily leaves the frame of the capturing area of the main camera 20, so that information that indicates the position of that subject can continue to be sent to the object extraction unit 50. Moreover, in order to focus on the subject that the object extraction unit 50 has recognized as the subject to capture that has been tracked by the user, the spatial frequency calculator 70 calculates the focusing conditions for bringing that subject into focus and sends those results to the distance information calculator 100 via the object extraction unit 50. The distance information calculator 100 then uses this information to send instructions to the AF controller 110 that can move the position of the objective lens 10, and the AF controller 110 uses a motor to increase or decrease the relative distance between the objective lens 10 and the main camera 20 in order to bring the subject into focus.
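The weighting idea described above, favoring the candidate that stays near the frame center the longest, can be sketched as follows. The scoring function and the sample tracks are illustrative assumptions, not the patent's actual algorithm:

```python
def select_subject(tracks, frame_w, frame_h):
    """Pick the most likely intended subject from candidate tracks.

    tracks: {name: list of (x, y) positions, one per frame}. Each position
    near the frame center adds more weight, so the candidate that remains
    centered longest scores highest. Illustrative scoring only.
    """
    cx, cy = frame_w / 2.0, frame_h / 2.0
    half_diag = (cx ** 2 + cy ** 2) ** 0.5

    def score(positions):
        total = 0.0
        for x, y in positions:
            dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            total += max(0.0, 1.0 - dist / half_diag)  # 1 at center, 0 at corner
        return total

    return max(tracks, key=lambda name: score(tracks[name]))

tracks = {
    "bird":   [(960, 540), (970, 550), (950, 530), (965, 545)],  # stays centered
    "branch": [(100, 100), (105, 102), (98, 99), (102, 101)],    # stays in a corner
}
print(select_subject(tracks, 1920, 1080))  # -> bird
```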
The spatial frequency calculator70then calculates a new spatial frequency on the basis of the newly obtained image, and this information is fed back into the AF controller110using the same method as above, thereby making it possible to continuously keep the subject in focus. Here, while the subject is tracked, the sensitivity of the sensor is increased and the aperture is closed to achieve a suitable depth of field in order to broaden the area of the image that is in focus without having to move the lens. Then, when the image pickup device actually captures an image, the shutter speed is increased and the sensitivity of the sensor is decreased as much as possible in order to achieve a more suitable S/N ratio. In this case, since the aperture is opened and the depth of field is reduced, the AF controller110is used to reliably bring the subject into focus before actually activating the shutter, and the captured image is then read out from the main camera20. In this manner, it is possible to obtain a static image with good image quality, little camera shake, and an appropriately in-focus subject, particularly for moving subjects. When the image is captured, the objective lens10does not necessarily need to be continuously moved in a complex manner in order to keep the subject perfectly in focus at all times while the subject is tracked. Instead, once the subject is brought into focus to some extent, the subject can be tracked in that state without moving the objective lens10. For example, when the user presses the shutter button in order to capture an image of the subject, a final, more accurate focusing operation may be performed to capture the image. In this case, the objective lens10does not need to be moved in a complicated manner, thereby making it possible to reduce depletion of the battery that powers the image pickup device.
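The feedback loop described above, in which a spatial-frequency score of each new image steers the lens position, is essentially contrast-detection autofocus and might be sketched as a simple hill climb. The sharpness metric, step logic, and function names below are illustrative assumptions, not the embodiment's actual implementation.

```python
def sharpness(image):
    """Spatial-frequency proxy: sum of squared horizontal differences.

    image is a 2-D list of pixel intensities (hypothetical input format);
    a sharper image has stronger local contrast and a higher score.
    """
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def focus_hill_climb(capture_at, position, step=1.0, max_iters=50):
    """capture_at(pos) -> image; move the lens while sharpness improves,
    reversing direction once when a step makes the image softer."""
    best = sharpness(capture_at(position))
    direction = 1.0
    reversed_once = False
    for _ in range(max_iters):
        trial = position + direction * step
        score = sharpness(capture_at(trial))
        if score > best:
            position, best = trial, score
        elif not reversed_once:
            direction, reversed_once = -direction, True
        else:
            break
    return position
```

Each captured frame feeds a new score back into the controller, which matches the repeated feedback described in the text.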
Furthermore, once an object has been recognized as the subject and is being tracked, an icon that indicates that the subject has been recognized can be superimposed as an overlay on the subject in the image feed from the main camera20that is displayed in the EVF40. This makes it possible for the user to confirm that the subject has been recognized correctly. If the wrong object is recognized as the subject, a button or the like (not illustrated in the figures) on the body of the image pickup device can be pressed to reset the tracking, or tracking can be continued after changing the orientation of the image pickup device, for example, in order to make it possible to re-acquire the subject that the user is attempting to capture as the object to be tracked. Alternatively, the subject to be tracked may be re-selected using a scheme in which the subject is tracked while the user is half-pressing the shutter button, an image is captured when the shutter button is fully pressed, and the subject is either not tracked or tracking is reset when the shutter button is not being pressed at all, for example. This scheme is effective in situations in which the user is attempting to capture images of subjects that move at high speeds and does not have time to temporarily remove his/her eyes from the EVF40to select the subject by tapping on another display unit (not illustrated in the figure), for example. Furthermore, the sub-camera80captures images at a wider angle than the main camera20. This makes it possible to continue tracking the subject in the image feed from the sub-camera80even when the subject leaves the capturing area captured by the main camera20. Moreover, displaying a panning guide in the EVF40that indicates the direction in which the subject has left the capturing area makes it possible for the user to see the direction in which the subject has left the capturing area.
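The shutter-button scheme described above (track while half-pressed, capture when fully pressed, reset when released) amounts to a small state machine, sketched here for illustration. The state names and button encoding are assumptions of this sketch.

```python
# States of the hypothetical shutter-button tracking scheme.
IDLE, TRACKING, CAPTURED = "idle", "tracking", "captured"

def shutter_state(button):
    """button: 'none', 'half', or 'full' (hypothetical encoding).

    Half-pressing tracks the subject, fully pressing captures an image,
    and releasing the button entirely resets (or suspends) tracking.
    """
    if button == "none":
        return IDLE          # tracking is reset / not performed
    if button == "half":
        return TRACKING      # track the subject while half-pressed
    if button == "full":
        return CAPTURED      # final focusing, then capture
    raise ValueError("unknown button state: %s" % button)
```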
<Configuration of Digital Camera (Example of Image Pickup Device)> Next, the configuration of a digital camera that is an example of the image pickup device according to Embodiment 1 of the present invention will be described with reference toFIG.2.FIG.2illustrates the configuration of the digital camera that is an example of the image pickup device according to Embodiment 1 of the present invention. As illustrated inFIG.2, the digital camera projects an image of a subject120onto the main camera20via objective lenses10arranged in a triplet configuration and then captures images of the subject120. The sub-camera80arranged in the upper portion of the digital camera also captures images of the subject120. A captured image feed of the subject is displayed in the EVF40to allow the user to confirm which objects are being captured. The image feed is also displayed on a liquid crystal display (LCD)130, where the subject can also be confirmed and various operations or the like can be performed. As illustrated inFIG.2, the digital camera selects a moving object such as a bird that is flying at high speed as the subject120(the object to be captured). This object is recognized and selected as the subject when, for example, the light in the image of the subject120passes through the triplet objective lens10, is received by the main camera20, and continues to be received for at least a prescribed period of time. Here, the user looks at the image feed of the subject that is being captured by the main camera20and that is displayed in the EVF40while changing the direction of the digital camera so as to track the subject120and keep the captured image of that subject on the main camera20. Then, at an appropriate time, the user presses the shutter button (not illustrated in the figure).
This causes the objective lenses10to move so as to bring the subject120into focus, and then the main camera20captures an image of the subject, displays that image on the LCD130for several seconds, and saves the captured image to external memory (described later). The digital camera also includes the sub-camera80. The optical system of the sub-camera80is arranged in the same direction as and is substantially parallel to the optical system constituted by the objective lenses10and the main camera20. The optical system of the sub-camera80also has a wide angle of view that makes it possible to capture images at a wider angle than the optical system of the main camera20. Once the subject120is recognized by the main camera20as the object to be captured, the sub-camera80also begins and continues to track that subject. The image captured by the sub-camera80is used to track the subject120. In addition, the focal point140of the subject120and the base length 1050, that is, the distance between the optical axes of the main camera20and the sub-camera80, can be used in a triangulation algorithm for measuring the distance between the subject120and the digital camera. More specifically, when the coordinates of the subject120as projected onto the sub-camera80are shifted away from the center axis of the optical system of the sub-camera80by an offset distance155, this distance can be used together with the dimensions of the optical system to estimate the position of the subject120, that is, the distance between the subject120and the camera. This information can therefore be used as information for focusing the main camera20as well. One method for recognizing the subject120as the object to be captured is the method described above in which the subject120is recognized after remaining present in the image feed from the main camera20for a prescribed period of time.
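The triangulation described above, using the base length between the two optical axes and the measured offset of the subject image on the sub-camera's sensor, reduces to similar triangles under a simple pinhole-camera model. The following sketch makes that assumption explicit; the function name and units are illustrative, not part of the embodiment.

```python
def subject_distance(baseline_mm, focal_length_mm, offset_mm):
    """Triangulate the subject distance from the sub-camera image.

    baseline_mm:     distance between the two optical axes (base length)
    focal_length_mm: focal length of the sub-camera optics (pinhole model)
    offset_mm:       how far the subject image sits from the sub-camera's
                     optical axis on the sensor (the offset distance)
    Similar triangles give  distance / baseline = focal_length / offset.
    """
    if offset_mm <= 0:
        raise ValueError("subject at infinity or offset not measurable")
    return baseline_mm * focal_length_mm / offset_mm
```

The result can then be passed on as focusing information for the main camera, as the text describes.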
In another method, the image feed captured by the main camera20and displayed in the EVF40may also be displayed in real time on the LCD130as a video, and the user may then use a finger or the like to tap the subject within that video on a touch panel (not illustrated in the figure) arranged on the LCD130in order to identify the subject120as the object to be captured. In this case, the user would typically temporarily remove his/her eyes from the EVF40, look at the LCD130and tap the subject, and then return to looking at the object to be captured in the EVF40. Here, the sub-camera80has an optical system with a wider angle than the optical system of the main camera20and continues to track the subject120even if the subject120has left the capturing area of the main camera20by the time the user removes his/her eyes from the LCD130and looks back into the EVF40. Therefore, the position of the subject120can be indicated within the EVF40, thereby making it possible to bring the subject120back into the EVF40and to continue tracking the subject120(a moving object). <Internal Configuration of Digital Camera> Next, the internal configuration of the digital camera that is an example of the image pickup device according to Embodiment 1 of the present invention will be described with reference toFIG.3.FIG.3is a block diagram illustrating an internal configuration of the digital camera that is an example of the image pickup device according to Embodiment 1 of the present invention. As illustrated inFIG.3, the digital camera is a computer system configured around a CPU150as the core. The CPU150and various peripheral circuits are connected together via a main bus300. A hardware switch (SW)160is a group of switches that includes the shutter button and various other switches for performing operations such as adjusting a zoom lens.
When one of these switches is pressed, a code corresponding to that switch, along with a code indicating that the switch was pressed, is sent to the CPU150, and the CPU150executes a process corresponding to that switch. Programs for executing these processes are stored in a flash memory220, and a working area for executing these processes is allocated in an SD-RAM230. Camera-specific key information170is a number that identifies the individual digital camera or information based on a user's name that was registered to the digital camera by the user. The name of the digital camera or of the user that captured an image is then embedded in the image data of captured images, and in some cases, this key information can be used to encrypt and protect the captured image data so that the captured images cannot be viewed by another user. An encoder/decoder180is a circuit for compressing and decompressing the captured image data. The encoder/decoder180performs a JPEG compression process, for example, on the captured images from the main camera20and can also store those captured images in an external memory270, such as an SD memory card, via an external memory interface260. The encoder/decoder180can also decompress the images (such as static images and video) stored in the external memory270in order to display those images on an LCD240. The main camera20is a camera for capturing images of the subject120. The main camera20is used both to track and to capture images of the moving subject120. The sub-camera80is a camera for tracking the subject and also includes a signal processing circuit. Note that the DSP30inFIG.1that processes the signals from the main camera20is not illustrated inFIG.3. However, the processes executed by the DSP30may be implemented within the main camera20, for example. The subject tracking unit90to which the output signal from the sub-camera80is input is not illustrated inFIG.3either.
However, the process executed by the subject tracking unit90may be implemented by the object extraction unit50, or the process executed by the subject tracking unit90may be implemented as a process executed by the CPU150, for example. A speaker190plays back audio when a video recorded by the digital camera is played back and also emits sounds when the hardware SW160is pressed or a touch panel250arranged on the LCD240(described later) is tapped and when information such as notifications or warnings needs to be presented to the user. When the digital camera is used to record video, an earphone200is used to monitor the audio data picked up by a microphone210while the video is recorded. Moreover, the earphone200can be used to play audio when previously recorded video is played back in order to check the recording results quietly without playing audio via the speaker190, thereby allowing the user to check both the image data and audio data contained in the recorded video. The LCD240can display images such as static images and videos that have been captured and saved. When the user inputs settings to the camera via the touch panel250, the LCD240can also display setting items at the coordinates within the touch panel250to be tapped, and the user can then tap those coordinates to perform various operations on the digital camera. The EVF40is used as the digital camera's finder when images are captured. The image displayed in the EVF40includes information such as the sensor sensitivity required to capture an image, the shutter speed, the aperture value, and the movement direction of the subject that is being tracked, and this information is displayed as an overlay on the real-time video feed from the main camera20. The external memory interface260is an interface into which the external memory270for saving the captured image data, such as a removable SD memory card, can be inserted/removed. 
As illustrated inFIG.1, the object extraction unit50has a function for extracting and identifying the subject that the user is attempting to capture from within the images projected on the main camera20and the sub-camera80. Characteristics such as the shape and color of the subject that should be identified as the object to be captured are stored in advance in the database60, and the user can also add new characteristics. The object extraction unit50first uses a G sensor290to determine whether the position and orientation of the digital camera are currently fixed or whether the user is currently changing the orientation of the digital camera to track a moving subject. If the camera is currently fixed, the object extraction unit50reads the characteristics for subjects120that exhibit relatively little movement, such as the faces of people or cats, from the database60in order to identify and determine the subject. Moreover, if the orientation of the digital camera is currently changing, the object extraction unit50determines that the subject120is moving and therefore reads characteristics such as the shapes of flying birds or the shapes of running animals, children, dogs, or cats as viewed from the side from the database60in order to identify and determine the subject120. For more unique subjects120that are not included in the database60, if the orientation of the digital camera is currently being changed to track the subject, an image feed is displayed on the LCD240, and the user can tap the touch panel250to identify the subject120and add the characteristics of that subject120to the database60. Moreover, when playing back video that was previously recorded, the user can tap the touch panel250while the video is playing or while the video is stopped with a PAUSE button in order to identify the subject120and add and store the characteristics of that subject in the database60.
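The motion-dependent read priority described above (static characteristics when the camera is fixed, moving-subject characteristics when the camera is panning) might be sketched as a reordering of database entries. The entry format and motion labels below are assumptions of this sketch, not the embodiment's actual database schema.

```python
def search_order(entries, camera_motion):
    """Reorder database entries by how well they match the camera motion.

    entries: list of dicts like {"name": ..., "motion": "static"/"flying"/...}
    camera_motion: e.g. "fixed", "pointing_up", "panning_up"
    (both encodings are hypothetical).
    """
    priority = {
        "fixed": "static",         # e.g. faces of people or cats
        "pointing_up": "flying",   # e.g. flying birds
        "panning_up": "climbing",  # e.g. a cat climbing a tree
    }.get(camera_motion)
    # Stable sort: entries matching the priority class come first,
    # and the original database order is preserved otherwise.
    return sorted(entries, key=lambda e: e.get("motion") != priority)
```

Searching matching characteristics first shortens the time needed to extract the subject, which is the benefit the text attributes to this prioritization.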
Here, although the information from the main camera20and the sub-camera80may be sent to the object extraction unit50via the main bus300, the large amount of data constituted by the captured images can potentially occupy the main bus300that is managed directly by the CPU150, and therefore the captured image data is sent via direct paths305. As a result, the captured image data is sent directly to the object extraction unit50without putting any load on the main bus300. This allows the CPU150to have enough bandwidth to handle the overall operation of the camera, including the various camera operations, thereby making it possible to capture images without experiencing problems such as delays when other digital camera operations are performed. The distance information calculator100and the AF controller110operate in the same manner as inFIG.1. Information from the object extraction unit50is sent to the distance information calculator100via a direct path305as well, thereby making it possible to further reduce the load on the main bus300as well as to execute tracking and autofocusing processes without affecting the normal operation of the digital camera. A wireless LAN280is a unit for establishing a network connection in order to perform wireless communications. This makes it possible to automatically save captured images on a network server to reduce the amount of space used in the external memory270, as well as to post the captured images to social networking services or the like, for example. The wireless LAN280also makes it possible to access manufacturer services or the like to update the digital camera operation programs saved in the flash memory220or to add or modify the information stored in the database60that represents the characteristics of various types of subjects.
The G sensor290detects movement of the digital camera and sends those detection results to the object extraction unit50, thereby making it possible to change the read priority used when the object extraction unit50reads the data that represents the characteristics of the subject from the database60. The G sensor290is an accelerometer in most cases and can detect linear movement of the digital camera in the front, rear, left, and right directions as well as angular acceleration when the digital camera is panned, and it can then convert those movements of the digital camera into data. Moreover, implementing not only the functionality of an accelerometer but also the functionality of a magnetic field sensor in the G sensor290makes it possible to do the following. When the digital camera is pointing upwards, for example, flying birds can be prioritized when the object extraction unit50searches the data that represents the characteristics of the subject and is read from the database60. Similarly, when the digital camera is moved in an upward direction, objects that tend to move in an upward direction, such as a cat climbing a tree, can be prioritized when the object extraction unit50searches the database60. This makes it possible to extract the characteristics of the subject more quickly, thereby making it possible to start tracking that subject more quickly. <Capturing Areas of Main Camera and Sub-Camera> Next, the capturing areas of the main camera and the sub-camera of the image pickup device according to Embodiment 1 of the present invention will be described with reference toFIGS.4(A) to4(C).FIGS.4(A) to4(C)are explanatory drawings for explaining the capturing areas of the main camera and the sub-camera of the image pickup device according to Embodiment 1 of the present invention and illustrate the positional relationships between the subject and the capturing areas of the main camera and the sub-camera.
A capturing area310of the sub-camera80has a wider angle than a capturing area320of the main camera20and therefore always captures a wider region. In other words, inFIGS.4(A) to4(C), the region that can be seen through the EVF40and that is actually captured is the capturing area320of the main camera20, and the capturing area310of the sub-camera80always captures a wider region than the capturing area320of the main camera20. InFIG.4(A), the subject120is within the capturing area320of the main camera20. Here, the user has selected a flying bird as the subject120and is tracking the subject120. The subject120is currently positioned substantially in the center of the field of view of the main camera20in a state that would be suitable for capturing an image, which makes it possible for the main camera20to track the subject120. FIG.4(B)illustrates a state in which the subject120is about to leave the capturing area320of the main camera20. Here, it is difficult to recognize and track the subject120using only the main camera20, and therefore the image feed from the sub-camera80is referenced as well in order to continue recognizing and tracking the subject120. If the lens of the main camera20is interchangeable or is a zoom lens, the optical system of the sub-camera80must be configured in a special way to ensure that the optical system of the sub-camera80always has a wider angle than the capturing area320of the main camera20. In the present embodiment, the following approaches can be used, for example. In a first approach, a lens that has an angle of view wider than the angle of view of the widest-angle lens among the lenses that can be used for the main camera20is used for the sub-camera80.
In this case, when the subject120is tracked while the widest-angle lens is used for the main camera20, the possibility of the subject leaving the frame of the main camera20is far smaller than when a telephoto lens is used. Therefore, using a wide-angle lens whose focal length at the maximum angle of view is substantially the same as or slightly greater than that of the lens used for the main camera20makes it possible to achieve sufficient functionality as the lens for the sub-camera80. However, when the user attempts to capture a moving subject while using the telephoto lens that has the longest focal length among the lenses that can be used for the main camera20, the subject120can only be recognized in a very small region of the capturing area of the sub-camera80. Therefore, a sensor that has a pixel count sufficient for recognizing the subject120even in this type of situation and a lens that has sufficient resolution are required for the sub-camera80. In a second approach, given the relationship between the focal lengths of the lenses for the main camera20and the sub-camera80described in the first approach, the focal length of the lens for the sub-camera80is determined under the assumption that the sub-camera80only needs to be used when the focal length of the lens used for the main camera20is greater than or equal to a certain value. For example, when the lens used for the main camera20has an optical focal length of greater than or equal to approximately f=100 mm when converted to the focal length of a lens for a 35 mm film camera that captures images on a so-called "full-size" screen of approximately 36 mm×24 mm in size, such as a lens used in a digital single-lens reflex camera or the like, the focal length of the lens for the sub-camera80is set not to the focal length of the widest-angle lens but to a certain longer focal length, for example, an optical focal length of approximately f=50 mm when converted to the focal length of a lens for a 35 mm film camera. However, even in this case, if the lens used for the main camera20is an extremely powerful telephoto lens, such as a lens with an optical focal length of greater than or equal to f=500 mm when converted to the focal length of a lens for a 35 mm film camera, for example, the capturing area of the main camera20will correspond to only approximately 1/10 of the width of the sensor of the sub-camera80, and the subject120still needs to be recognized and tracked in this small region. Therefore, as in the first approach, a sensor that has a pixel count sufficient for recognizing the subject120and a lens that has sufficient resolution need to be used for the sub-camera80. In a third approach, the lens used for the sub-camera80can have a variable focal length. More specifically, when the lens for the main camera20is changed, the focal length of the new lens for the main camera20may be read via an electronic interface between the lens and the camera body. Moreover, when the lens for the main camera20is a zoom lens and the focal length is changed while that zoom lens is used, the new optical focal length of the lens may be read in a similar manner. Furthermore, when a zoom lens is used as the lens for the sub-camera80, its focal length is changed so as to maintain a prescribed relationship between the angles of view of the two cameras, for example, to a value at which the angle of view of the sub-camera80is approximately twice that of the main camera20(that is, approximately half the optical focal length of the main camera20). Note that such a zoom lens does not necessarily need to be able to capture images at all focal lengths between its minimum focal length and its maximum focal length. For example, a lens that has a configuration that exhibits sufficient optical performance at three focal lengths of f=35 mm, 50 mm, and 75 mm when converted to the focal length of a lens for a 35 mm film camera may be used.
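The selection among such discrete sub-camera focal lengths might be sketched as follows, keeping the sub-camera's angle of view wider than the main camera's. The "roughly half the main focal length" target follows the f=100 mm → f=50 mm example given in the text; the selection rule and function names themselves are assumptions of this sketch.

```python
# Discrete sub-camera focal lengths (35 mm equivalents) from the text.
SUB_CAMERA_STEPS = (35.0, 50.0, 75.0)

def sub_camera_focal_length(main_focal_mm, steps=SUB_CAMERA_STEPS):
    """Choose the sub-camera focal length for a given main-camera lens.

    Target: roughly half the main lens focal length, so the sub-camera's
    angle of view stays about twice as wide; the nearest available
    discrete step is selected (illustrative rule).
    """
    target = main_focal_mm / 2.0
    return min(steps, key=lambda f: abs(f - target))
```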
Then, when the imaging lens for the main camera20is changed, the appropriate focal length from among those three focal lengths can be selected and used for the lens for the sub-camera80. Another possible approach is to change the lens for the sub-camera80when the imaging lens for the main camera20is changed. As illustrated inFIG.4(C), these approaches make it possible for the sub-camera80to continue tracking the subject120even when the angle of view of the capturing area320of the main camera20changes. Moreover, as illustrated inFIG.2, there is a positional difference equal to the base length between the central optical axes of the main camera20and the sub-camera80. Particularly at close distances, the positional relationship between the central optical axis of the main camera20and the subject will not match the positional relationship between the central optical axis of the sub-camera80and the subject. However, even in this case, the positional relationship among the subject120, the main camera20, and the sub-camera80can be determined by calculating the distance between the digital camera or the like and the subject120. This makes it possible to correct for any error by an amount corresponding to the base length and to continue to track the subject120using the sub-camera80. <Examples of Images Displayed in EVF> Next, examples of images displayed in the EVF of the image pickup device according to Embodiment 1 of the present invention will be described with reference toFIGS.5(A) to5(D).FIGS.5(A) to5(D)are explanatory drawings for explaining examples of images displayed in the EVF of the image pickup device according to Embodiment 1 of the present invention. Similar toFIG.4(A), inFIG.5(A), the subject120is within the capturing area320of the main camera20. Here, the user has selected a flying bird as the subject and is tracking the subject.
The subject120is currently positioned substantially in the center of the field of view of the main camera20in a state that would be suitable for capturing an image. The user can verify this via the EVF40, which makes it possible for the subject to be tracked using the main camera20. FIG.5(B)illustrates a state in which the subject120has left the capturing area320of the main camera20. Here, the main camera20cannot track the subject120, but the subject120is still captured in the capturing area310of the sub-camera80, which captures images at a wider angle than the capturing area320of the main camera20. Therefore, in the present embodiment, a panning guide330is displayed in the EVF40so that the subject120that is currently only being captured by the sub-camera80and cannot be seen in the field of view of the main camera20can be brought back within the field of view of the main camera20. Pointing the digital camera or the like in the direction indicated by the arrow of the panning guide330makes it possible for the user to recapture the subject120in the EVF40. InFIG.5(C), unlike inFIG.5(B), the moving subject120is still currently within the field of view of the main camera20but is about to leave the field of view of the main camera20. Here, displaying the panning guide330in the EVF40as an arrow guide that indicates the direction in which to point the digital camera or the like makes it possible to keep the subject within the field of view of the main camera20. This process is executed primarily on the image from the main camera20. In the present embodiment, when the flying bird that is the subject120is moving towards the bottom of the screen, a guide for tracking the subject is presented to the user in order to make it possible to continue tracking the subject120. InFIG.5(D), the subject120being captured by the main camera20is rapidly approaching the camera and has become larger than the field of view of the main camera20.
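The direction of the panning guide330described for FIGS.5(B) and5(C) might be derived from the subject's position relative to the main camera's capturing area as seen in the wider sub-camera frame. The coordinate layout, margin, and return values below are assumptions of this sketch, not the embodiment's actual guide logic.

```python
def panning_guide(subject_xy, main_area, margin=0.1):
    """Decide which panning guide, if any, to overlay in the EVF.

    subject_xy: subject position in the wide sub-camera frame.
    main_area:  (left, top, right, bottom) of the main camera's capturing
                area within that frame (hypothetical layout).
    Returns e.g. "pan right" or "pan down", or None when no guide is
    needed; the margin triggers the guide before the subject fully exits.
    """
    x, y = subject_xy
    left, top, right, bottom = main_area
    w, h = right - left, bottom - top
    if x < left + margin * w:
        return "pan left"
    if x > right - margin * w:
        return "pan right"
    if y < top + margin * h:
        return "pan up"
    if y > bottom - margin * h:
        return "pan down"
    return None
```

A subject well inside the area produces no guide, matching FIG.5(A), while a subject outside or near an edge produces an arrow direction, matching FIGS.5(B) and5(C).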
Here, a "ZOOM OUT!" message340is displayed in the EVF40to alert the user that the focal length of the lens is currently too long. More specifically, when this message is displayed, the user can adjust the zoom lens of the main camera20to make the focal length shorter and thereby bring the subject120back into the field of view of the main camera20. Furthermore, in configurations in which the focal length of the zoom lens can be controlled by the system, the focal length of the zoom lens may be shortened automatically to assist the user in capturing images. Meanwhile, when the subject120rapidly moves away from the digital camera or the like, a "ZOOM IN!" message may be displayed or the zoom lens may be adjusted automatically. The panning guides330and the "ZOOM OUT!" message340may be displayed simply as an on-screen display (OSD) in the EVF40, or colors or flashing effects may be applied to provide greater emphasis. In this case, the flashing speed may be set according to the relationship between the digital camera or the like and the subject120, with the speed being faster when the required user operation for the digital camera or the like is more urgent. Moreover, providing settings for the digital camera or the like that can be configured in advance to determine how the guides are displayed and whether the guides are displayed at all makes it possible for the user to freely select whether to use the guides. <Operation of Image Pickup Device> Next, the operation of the image pickup device according to Embodiment 1 of the present invention will be described with reference toFIG.6.FIG.6is a flowchart illustrating the operation of the image pickup device according to Embodiment 1 of the present invention and illustrates the overall operation of a digital camera. First, once the power is turned on, all of the circuits are activated and the initial settings are configured.
Here, the digital camera is booted and starts operating in accordance with the initial values specified in advance by the user, such as those illustrated inFIG.5(D), for example (step S100). Next, the camera enters a loop. First, it is determined whether a power OFF instruction has been issued due to the user pressing a power OFF button or the like, for example (step S110). If it is determined in step S110that a power OFF instruction has been issued, the digital camera is completely powered OFF (step S120). Otherwise, the loop continues to the following operations. However, if the remaining charge of a battery (not illustrated in the figures) becomes less than or equal to a prescribed value, the camera is powered OFF automatically. In this case, a message such as "Battery depleted" may be displayed to the user in the EVF40or on the LCD130. Next, a main camera imaging process is executed to obtain an image feed from the main camera20at 30 frames per second (30 fps), for example (step S130), and this image feed is then displayed in the EVF40(step S140). The user can then target and track the subject120by using the EVF40, in which the 30 fps video feed is displayed, as the view finder of the digital camera. Next, it is determined whether the user has pressed the shutter button (step S150). If it is determined in step S150that the shutter button has been pressed, the camera enters a state for obtaining an image from the main camera20, and the subject120is brought into focus. At this time, the camera is switched from a mode in which not all of the pixels of the main camera20are used, which is used while the main camera20serves as a finder in order to provide a high-speed image feed, to a mode for obtaining the data from all of the pixels of the main camera20. The data from all of the pixels is then obtained (step S160), and the obtained image data is compressed and saved (step S170).
In step S170, the data is compressed into a JPEG format, for example, and the compressed image data is then saved to a removable memory such as an SD memory card. Next, the captured image is displayed on the LCD130for a prescribed time (step S180) so that the user can verify the captured image. Then, the camera returns to step S110, and the basic overall loop for capturing images is repeated. Furthermore, if it is determined in step S150that the shutter button has not been pressed, the camera executes an object extraction process for identifying the subject120in the image feeds from the main camera20and the sub-camera80so that the moving subject can be tracked (step S190). In this object extraction process, characteristics such as the shapes and colors of expected subjects120are read from the database60, and the correlation between these characteristics and the objects in the image feeds from the main camera20and the sub-camera80is calculated in order to extract the actual subject. Next, it is determined whether there is an object that needs to be tracked among the objects extracted as the subject120in step S190(step S200). If it is determined in step S200that there is no object that needs to be tracked among the extracted objects, the camera returns to step S110, and the basic overall loop for capturing images is repeated. In the determination in step S200, the presence of an object that needs to be tracked is determined according to whether there is a subject that was specified as the object that the user wants to capture, either because that subject always remains in the capturing area or because the user has selected that subject120from among those displayed on the LCD130, for example. If it is determined in step S200that there is an object that needs to be tracked among the extracted objects, it is determined whether the object to be tracked is present in the image from the sub-camera80(step S210).
If it is determined in step S210 that the object to be tracked is present in the image from the sub-camera 80 (for example, when the position of the subject 120 is about to leave the capturing area of the main camera 20 while that subject is being captured by the sub-camera 80, or when the subject is only being captured by the sub-camera 80), the panning guides 330 illustrated in FIGS. 5(B) and 5(C) are displayed in the EVF 40 in order to indicate, with arrows, the direction in which the subject 120 is present (step S220). If it is determined in step S210 that the object to be tracked is not present in the image from the sub-camera 80, or once the arrows that indicate the direction in which the subject 120 is present have been displayed in the EVF 40 in step S220, the object is brought into focus (step S230) and then continues to be tracked. Then, the camera returns to step S110, and the basic overall loop for capturing images is repeated while waiting for the user to press the shutter button. Even when the object that the user is filming as the subject 120 leaves the capturing area 320 of the main camera 20, the process described above makes it possible for the user to identify the direction in which the subject 120 is present, thereby making it possible to continue tracking the subject 120 while keeping that subject in focus.

<Method of Correcting for Sub-Camera Mounting Error>

Next, a method of correcting for mounting error in the sub-camera of the image pickup device according to Embodiment 1 of the present invention will be described with reference to FIG. 7. FIG. 7 is an explanatory drawing for explaining the method of correcting for mounting error in the sub-camera of the image pickup device according to Embodiment 1 of the present invention. Unlike the main camera 20, the sub-camera 80 is not a camera for actually capturing images, and therefore the sub-camera 80 does not necessarily need to have the level of performance that makes it possible to capture high resolution static images.
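The loop of steps S100 through S230 described above can be summarized as a per-iteration decision sketch. This is illustrative only: every key in `state` is a hypothetical stand-in for the corresponding hardware query, and the returned strings merely name the branch the flowchart takes.

```python
def camera_iteration(state):
    """One pass of the capture loop (steps S110-S230), as a decision sketch.

    `state` is a dict of hypothetical flags standing in for hardware queries.
    Returns the action the camera takes on this pass of the loop.
    """
    if state.get("power_off_requested") or state.get("battery_low"):
        return "power_off"                      # S110 -> S120 (or automatic off)
    if state.get("shutter_pressed"):
        return "capture_all_pixels"             # S150 -> S160/S170/S180
    if not state.get("tracked_object"):
        return "continue_loop"                  # S200: nothing needs tracking
    if state.get("object_in_sub_camera"):
        return "show_panning_guide_and_focus"   # S210 -> S220 then S230
    return "focus_and_track"                    # S210 (not in sub-camera) -> S230
```

Each return corresponds to one exit of the flowchart; after any branch except `power_off`, control returns to step S110 and the loop repeats.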
It is preferable that a camera module of the type utilized in small mobile devices such as mobile phones and smartphones, which is a single package that includes both the camera body and the signal processing circuits, be used due to design requirements such as having a pixel count of greater than or equal to some prescribed value or having a small overall size. The sub-camera 80 of the present embodiment needs to simultaneously recognize the subject projected on the main camera 20, and therefore it is preferable that the optical axes of the main camera 20 and the sub-camera 80 be parallel. However, when a camera module of the type used in mobile phones and smartphones is used, for example, it is difficult to align the optical axis accurately and precisely when the camera module is mounted. Therefore, a method for correcting for mounting error in the sub-camera 80 when the main camera 20 and the sub-camera 80 are attached to the body of the digital camera will be described next. As illustrated in FIG. 7, a subject 360 that is recognized as an infinitely distant image is simultaneously captured by the main camera 20 and the sub-camera 80 through a condenser lens 350. When the subject image 400 of the subject 360 is brought into the center of the image feed 380 from the main camera 20, the subject image 400 appears at an off-center position in the image feed 390 from the sub-camera 80 due to the error in mounting position precision in the sub-camera 80. This off-center position can be numerically measured in X-Y coordinates from the left edge of the image. Then, these values are used to pass coordinate correction values to an image data processing circuit of the sub-camera 80 so that the subject image 400 in the image feed from the sub-camera 80 is brought to the center of the sub-camera 80.
This makes it possible to recognize the subject image 400 in the center of the image feed 380 from the main camera 20 and the subject image 400 in the image feed from the sub-camera 80 as the same subject, thereby making it possible to correct for the mounting error in the sub-camera 80. In the present embodiment, a digital camera was described as an example of the image pickup device. However, the present embodiment may be applied to any device in which an image of a subject is projected through a lens onto a camera sensor and then the amount of light incident on each pixel is measured in order to capture two-dimensional images, such as a video camera.

Embodiment 2

Embodiment 1 includes the sub-camera 80. Embodiment 2 makes it possible to continue tracking a subject even when the subject leaves the capturing area, using only a main camera 20.

<Basic Configuration of Image Pickup Device>

Next, the basic configuration of an image pickup device according to Embodiment 2 of the present invention will be described with reference to FIG. 8. FIG. 8 is a block diagram illustrating the basic configuration of the image pickup device according to Embodiment 2 of the present invention. As illustrated in FIG. 8, the present image pickup device has a configuration in which the sub-camera 80 and the subject tracking unit 90 for tracking the subject in the image feed from the sub-camera 80 have been removed from the image pickup device illustrated in FIG. 1. Here, the image pickup device includes the main camera 20, which is a camera sensor onto which an image of a subject is projected via an objective lens 10, a DSP 30, an EVF 40, an object extraction unit 50, a database 60, a spatial frequency calculator 70, a distance information calculator 100, and an AF controller 110. Together, the DSP 30, the object extraction unit 50, the spatial frequency calculator 70, the distance information calculator 100, and the AF controller 110 form a processing unit.
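The mounting-error correction described above reduces to measuring a fixed X-Y offset once and applying it to every subsequent sub-camera coordinate. The sketch below is a minimal illustration under that assumption; the function names and the tuple conventions are hypothetical, not the actual interface of the image data processing circuit.

```python
def measure_correction(sub_image_pos, sub_frame_size):
    """Coordinate correction values for sub-camera mounting error.

    With the subject image 400 centered in the main camera feed 380, measure
    where it lands in the sub-camera feed 390 (X-Y from the image origin);
    the offset from the sub-camera frame center is the per-axis correction
    to apply to all later sub-camera coordinates.
    """
    cx, cy = sub_frame_size[0] / 2, sub_frame_size[1] / 2
    return (cx - sub_image_pos[0], cy - sub_image_pos[1])

def correct(point, correction):
    """Map a raw sub-camera coordinate into the corrected frame."""
    return (point[0] + correction[0], point[1] + correction[1])
```

For example, with a 640x480 sub-camera frame and the centered subject measured at (300, 250), the correction is (+20, -10), and applying it brings that measurement to the frame center (320, 240), so both feeds report the same subject at the same relative position.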
Similar to Embodiment 1, the present image pickup device selects a moving object, such as a bird that is flying at high speed, as the subject that is the object to be captured. Also similar to Embodiment 1, this object is recognized as the subject when the light in the image of the subject passes through the objective lens 10 and continues to be received by the main camera 20 for at least a prescribed period of time. Similar to Embodiment 1 as illustrated in FIG. 1, in the present embodiment, an image of the subject that is projected via the objective lens 10 onto the main camera 20 is converted to an electronic signal and input to the DSP 30. Here, the input signal is converted to a YUV signal that includes brightness and color difference signals and is simultaneously presented to the user in the EVF 40 so that the user can pan the image pickup device to follow the subject, particularly when the subject is moving. The operations of the object extraction unit 50, the database 60, the spatial frequency calculator 70, the distance information calculator 100, and the AF controller 110 for tracking the moving subject are the same as in Embodiment 1 as illustrated in FIG. 1.

<Subject-Tracking Operation of Image Pickup Device>

Next, the subject-tracking operation of the image pickup device according to Embodiment 2 of the present invention will be described with reference to FIGS. 9(A) to 10(B). FIGS. 9(A) to 10(B) are explanatory drawings for explaining the subject-tracking operation of the image pickup device according to Embodiment 2 of the present invention. FIGS. 9(A) and 9(B) illustrate a case in which the subject is captured using all of the pixels of the main camera, and FIGS. 10(A) and 10(B) illustrate a case in which the subject is captured using only some of the pixels of the main camera. Unlike Embodiment 1, the present embodiment does not include a sub-camera 80.
Therefore, the moving subject is tracked using only the information from the main camera 20. This can be done using either of the following two methods. In a first method, the subject is captured using all of the pixels of the main camera 20. As illustrated in FIG. 9(A), in this case, a subject 120 cannot be re-captured if that subject leaves a capturing area 410 of the main camera 20, for example. As a countermeasure, when the subject 120 leaves the capturing area 410 of the main camera 20, the focal length of a zoom lens is automatically shortened, and the image pickup device then searches for and captures the subject 120 again using the resulting wider-angle lens, thus setting the capturing area 410 to the state illustrated in FIG. 9(B), for example. This makes it possible to continue tracking the subject 120. In a second method, rather than using all of the pixels of the main camera 20, a subset of pixels that does not include all of the pixels of the main camera 20 and forms a region of a prescribed size is used as the imaging pixels for the capturing area 410. In this case, the capturing area 410 illustrated in FIG. 10(A) is the region displayed in the EVF 40, which is the region that the user sees when actually capturing images. However, an additional wider-angle region is allocated within the main camera 20 as a capturing area 310. In this way, even when the subject 120 leaves the region displayed in the EVF 40, the main camera 20 continues tracking the subject 120 and displays a panning guide 330 to indicate the position of the subject 120 to the user. This makes it possible to display an instruction to move the image pickup device so that the subject can continue to be tracked.
Moreover, as illustrated in FIG. 10(B), when the subject 120 has only partially left the capturing area 410, the subset of all of the pixels of the main camera 20 that is allocated as the capturing area 410 is shifted away from the center of the main camera 20 towards the direction in which the subject 120 is present in order to re-allocate the subset of pixels for capturing the subject 120. This makes it possible to continue tracking and capturing the subject 120. Next, the method of using a subset of pixels that does not include all of the pixels of the main camera 20 and forms a region of a prescribed size as the imaging pixels for the capturing area 410 will be described in more detail. The main camera 20 covers a wider-angle region than the capturing area 410 that is used for actually capturing images. Here, if the subject 120 is outside of the capturing area 410, the panning guide 330 is displayed in the EVF 40 in order to display an instruction to move the image pickup device to the user. In the example illustrated in FIG. 10(A), the subject 120 is in the lower right corner, and therefore an instruction to point the camera more towards the lower right direction is displayed. Moreover, when the capturing area 410 itself is moved in the lower right direction to bring the subject 120 back into the capturing area 410, even if the subject 120 is successfully brought back into the capturing area 410, the capturing area 410 will be in an off-center position relative to all of the pixels of the main camera 20. Therefore, as illustrated in FIG. 10(B), the panning guide 330 continues to be displayed in the EVF 40 in order to continue indicating the direction in which the user should track the subject until the capturing area 410 is brought back into the approximate center of the main camera 20.
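The second method above involves two computations: choosing the panning-guide arrow direction from the subject's position relative to the capturing area, and re-allocating the crop window toward the subject within the full pixel array. The following is a hedged sketch of both under simple rectangle arithmetic; all names and the `(left, top, width, height)` convention are hypothetical.

```python
def pan_direction(subject, area):
    """Arrow direction for the panning guide 330.

    subject: (x, y) subject position on the full sensor
    area:    (left, top, width, height) of the capturing area 410
    Returns e.g. "lower right", or None if the subject is inside the area.
    """
    left, top, w, h = area
    horiz = "left" if subject[0] < left else "right" if subject[0] > left + w else ""
    vert = "upper" if subject[1] < top else "lower" if subject[1] > top + h else ""
    return (" ".join(part for part in (vert, horiz) if part)) or None

def shift_area(subject, area, sensor):
    """Re-allocate the capturing area 410 toward the subject, clamped to the
    full pixel array of the main camera 20 (sensor = (width, height))."""
    left, top, w, h = area
    # Center the window on the subject, then clamp to the sensor bounds.
    left = min(max(subject[0] - w // 2, 0), sensor[0] - w)
    top = min(max(subject[1] - h // 2, 0), sensor[1] - h)
    return (left, top, w, h)
```

A subject at the lower right of the crop window yields a "lower right" guide, matching the example of FIG. 10(A); shifting the window toward it keeps the subject captured while the guide continues until the window can return to the sensor center.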
In the present embodiment, similar to Embodiment 1, even when the object that the user is filming as the subject 120 leaves the capturing area 410, the process described above makes it possible for the user to find the subject 120 again, thereby making it possible to continue tracking the subject 120 while keeping that subject in focus.

Embodiment 3

In Embodiments 1 and 2, the actual capturing area is displayed in the EVF 40. However, in Embodiment 3, when a subject 120 leaves a capturing area 410, the image displayed in an EVF 40 is switched according to the positional relationship between the subject 120 and the capturing area 410 in order to display a region outside of the capturing area 410 as a supplementary area. Other than the display process for the EVF 40, the operation of Embodiment 3 is the same as in Embodiments 1 and 2.

<Examples of Images Displayed in EVF>

Next, examples of images displayed in the EVF of the image pickup device according to Embodiment 3 of the present invention will be described with reference to FIGS. 11(A) and 11(B). FIGS. 11(A) and 11(B) are explanatory drawings for explaining examples of images displayed in the EVF of the image pickup device according to Embodiment 3 of the present invention. In FIG. 11(A), the subject 120 in the capturing area 410 displayed in the EVF 40 is about to leave the main camera 20. Here, the image pickup device continues to capture the subject 120, either by using a sub-camera 80 or by using a portion of the overall capturing area of a main camera 20 as the actual capturing area of the main camera 20, while the subject begins to move to a position that is not visible to the user. Then, as illustrated in FIG. 11(B), the region visible to the user in the EVF 40 is switched to an image feed from the sub-camera 80 or to an image feed from the entire capturing area of the main camera 20 as a supplementary area 420, and a frame that indicates the capturing area 410 that will actually be captured is superimposed on the supplementary area 420.
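Superimposing the capturing-area frame on the supplementary area amounts to mapping one rectangle into the display coordinates of a wider feed. The sketch below assumes, hypothetically, that both areas are expressed in a shared sensor coordinate system and that the EVF displays the supplementary area scaled to its own resolution; the function name and tuple layout are illustrative only.

```python
def frame_in_evf(area, supp_area, evf_size):
    """EVF-pixel rectangle at which to draw the capturing-area frame.

    area:      (left, top, width, height) of capturing area 410 on the sensor
    supp_area: (left, top, width, height) of supplementary area 420 on the sensor
    evf_size:  (width, height) of the EVF 40 display
    """
    sx = evf_size[0] / supp_area[2]  # horizontal scale: sensor -> EVF pixels
    sy = evf_size[1] / supp_area[3]  # vertical scale: sensor -> EVF pixels
    return (round((area[0] - supp_area[0]) * sx),
            round((area[1] - supp_area[1]) * sy),
            round(area[2] * sx),
            round(area[3] * sy))
```

With this mapping, the user sees the wider supplementary feed with a frame marking exactly what will be captured, so the subject can be re-framed inside that frame before the shutter is pressed.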
This allows the user to track the subject 120 in the wider supplementary area 420 and then satisfactorily frame the subject in the capturing area 410 and capture an image. It is also possible to integrate the process of the image pickup device of Embodiment 2, which does not include the sub-camera 80, into the image pickup device of Embodiment 1, which does include the sub-camera 80. In this case, the capturing area of the sub-camera 80 may be set to a different region than the capturing area that includes all of the pixels of the main camera 20. Then, when the subject 120 leaves the actual capturing area, the optimal capturing area outside of the actual capturing area can be selected in order to make it possible to track the subject with higher precision.

REFERENCE SIGNS LIST

10 Objective lens
20 Main camera
30 DSP
40 EVF
50 Object extraction unit
60 Database
70 Spatial frequency calculator
80 Sub-camera
90 Subject tracking unit
100 Distance information calculator
110 AF controller
11860512

Throughout the drawings and the detailed description, the same reference numerals refer to the same elements. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience. DETAILED DESCRIPTION The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent to one of ordinary skill in the art. The sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent to one of ordinary skill in the art, with the exception of operations necessarily occurring in a certain order. Also, descriptions of functions and constructions that would be well known to one of ordinary skill in the art may be omitted for increased clarity and conciseness. The features described herein may be embodied in different forms, and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to one of ordinary skill in the art. Herein, it is noted that use of the term “may” with respect to an example or embodiment, e.g., as to what an example or embodiment may include or implement, means that at least one example or embodiment exists in which such a feature is included or implemented while all examples and embodiments are not limited thereto.
Throughout the specification, when an element, such as a layer, region, or substrate, is described as being “on,” “connected to,” or “coupled to” another element, it may be directly “on,” “connected to,” or “coupled to” the other element, or there may be one or more other elements intervening therebetween. In contrast, when an element is described as being “directly on,” “directly connected to,” or “directly coupled to” another element, there can be no other elements intervening therebetween. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items. Although terms such as “first,” “second,” and “third” may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Rather, these terms are only used to distinguish one member, component, region, layer, or section from another member, component, region, layer, or section. Thus, a first member, component, region, layer, or section referred to in examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples. Spatially relative terms such as “above,” “upper,” “below,” and “lower” may be used herein for ease of description to describe one element's relationship to another element as shown in the figures. Such spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, an element described as being “above” or “upper” relative to another element will then be “below” or “lower” relative to the other element. Thus, the term “above” encompasses both the above and below orientations depending on the spatial orientation of the device. 
The device may also be oriented in other ways (for example, rotated 90 degrees or at other orientations), and the spatially relative terms used herein are to be interpreted accordingly. The terminology used herein is for describing various examples only, and is not to be used to limit the disclosure. The articles “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “includes,” and “has” specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof. Due to manufacturing techniques and/or tolerances, variations of the shapes shown in the drawings may occur. Thus, the examples described herein are not limited to the specific shapes shown in the drawings, but include changes in shape that occur during manufacturing. The features of the examples described herein may be combined in various ways as will be apparent after an understanding of the disclosure of this application. Further, although the examples described herein have a variety of configurations, other configurations are possible as will be apparent after an understanding of the disclosure of this application. Hereinafter, while examples will be described in detail with reference to the accompanying drawings, it is noted that examples are not limited to the same. The various examples relate to a camera module, and may be applied to portable electronic devices such as mobile communications terminals, smartphones, tablet PCs, and the like. A camera module is an optical device for capturing still or moving images. A camera module may include a lens, refracting light reflected from a subject, and a lens driving device moving the lens to adjust a focus or to compensate for shaking of the camera module while images are captured.
FIG. 1 is an assembled perspective view of a camera module according to an example, and FIG. 2 is an exploded perspective view of a camera module according to an example. Referring to FIGS. 1 and 2, a camera module 1000 may include a housing 1100, a lens module 1500 including a lens barrel 1510 accommodated in the housing 1100, a lens driving device moving the lens module 1500, and an image sensor unit 1150 converting light, incident through the lens barrel 1510, into an electrical signal. The camera module 1000 may further include a case 1110 or an upper cover 1301 covering the housing 1100 from above. The lens barrel 1510 may have a hollow cylindrical shape allowing a plurality of lenses for capturing a subject to be accommodated therein (the configuration is not limited thereto: the lens barrel 1510 may have a partially cut exterior, and the inside of the lens barrel 1510 may be provided with a circular lens or a D-cut lens, a lens having one side partially cut), and the plurality of lenses are mounted in the lens barrel 1510. As many lenses are arranged as necessary depending on a design of the lens barrel 1510, and each of the plurality of lenses has the same or different optical characteristics, such as a refractive index or the like. The lens driving device moves the lens barrel 1510 in an optical axis direction or a direction perpendicular to the optical axis direction. As an example, the lens driving device may move the lens barrel 1510 in an optical axis direction (a Z-axis direction) to adjust a focus, and may move the lens barrel 1510 in X-axis and Y-axis directions, perpendicular to the optical axis direction (the Z-axis direction), to correct shaking at the time of capturing an image. The lens driving device includes a focusing unit (an autofocusing part) and a shake correction unit (a shake correction portion). The image sensor unit 1150 converts light, incident through the lens barrel 1510, into an electrical signal.
As an example, the image sensor unit 1150 may include an image sensor 1151 and a printed circuit board (PCB) 1153 connected to the image sensor 1151, and may further include an infrared filter. The filter serves to block light in a predetermined area among light incident through the lens barrel 1510. For example, the filter may be an infrared filter, and may serve to block light in an infrared area. The image sensor 1151 converts the light, incident through the lens barrel 1510, into an electrical signal. For example, the image sensor 1151 may be a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) image sensor. The electrical signal, converted by the image sensor 1151, is output as an image through a display unit of a portable electronic device. The image sensor 1151 is fixed to the printed circuit board 1153 and may be electrically connected to the printed circuit board 1153 by wire bonding or the like. The lens module 1500, including the lens barrel 1510, and the lens driving device are accommodated in the housing 1100. As an example, the housing 1100 has a shape with an open top and bottom, and the lens module 1500 and the lens driving device may be accommodated in an internal space of the housing 1100. The image sensor unit 1150 is disposed below the housing 1100. The case 1110 is coupled to the housing 1100 to surround an external surface of the housing 1100, and serves to protect internal components of the camera module 1000. In addition, the case 1110 may serve to shield electromagnetic waves. As an example, the case 1110 may shield electromagnetic waves generated by the camera module 1000 such that the electromagnetic waves do not affect other electronic components in the portable electronic device. Since a portable electronic device is equipped with various electronic components other than the camera module, the case 1110 may also shield electromagnetic waves generated by such electronic components such that the electromagnetic waves do not affect the camera module 1000.
Referring to FIGS. 2 to 4, the focusing unit of the lens driving device according to an example is illustrated. The lens driving device includes a focusing unit, moving a carrier 1300 in an optical axis direction to perform autofocusing, and a shake correction unit, moving the lens module disposed inside of the carrier 1300 in a direction perpendicular to the optical axis direction to perform shake correction. The focusing unit has a structure generating driving force to move the carrier 1300, accommodating the lens module 1500, in the optical axis direction (the Z-axis direction). A driving portion of the focusing unit includes a magnet 1320 and a coil 1330. The magnet 1320 is mounted on the carrier 1300. As an example, the magnet 1320 may be mounted on one surface of the carrier 1300. The coil 1330 is mounted in the housing 1100. As an example, the coil 1330 may be mounted in the housing 1100 through a substrate 1130. The coil 1330 may be fixed to the substrate 1130, and the substrate 1130 may be fixed to the housing 1100 in a state in which the driving coils of the shake correction unit, to be described later, are also fixed together. The magnet 1320 is a movable member mounted on the carrier 1300 to move in the optical axis direction (the Z-axis direction) together with the carrier 1300, and the coil 1330 is a fixed member fixed to the housing 1100. However, the configuration is not limited thereto, and the positions of the magnet 1320 and the coil 1330 are interchangeable with each other. When power is applied to the coil 1330, the carrier 1300 may be moved in the optical axis direction (the Z-axis direction) by electromagnetic interaction between the magnet 1320 and the coil 1330. Since the lens barrel 1510 is accommodated in the carrier 1300, the lens barrel 1510 is also moved in the optical axis direction (the Z-axis direction) by the movement of the carrier 1300.
When the carrier 1300 is moved, a rolling member 1370 disposed between the carrier 1300 and the housing 1100 reduces friction between the carrier 1300 and the housing 1100. The rolling member 1370 may have a ball shape. The rolling members 1370 may be disposed on both sides of the magnet 1320. A yoke 1350 is disposed in the housing 1100. For example, the yoke 1350 is disposed to oppose the magnet 1320 with the coil 1330 interposed therebetween. For example, the coil 1330 and the magnet 1320 are disposed to oppose each other, and the yoke 1350 is disposed on a rear surface of the coil 1330 such that the carrier 1300 is closely supported on the housing 1100 with the rolling member 1370 interposed therebetween. Attractive force acts between the yoke 1350 and the magnet 1320 in a direction perpendicular to the optical axis direction (the Z-axis direction). Accordingly, the rolling member 1370 may be maintained in a state of contact with the carrier 1300 and the housing 1100 by the attractive force between the yoke 1350 and the magnet 1320. The yoke 1350 may also serve to focus the magnetic force of the magnet 1320, and may prevent magnetic flux from leaking outwardly. The various examples discussed herein use a closed loop control method in which the position of the lens barrel 1510, and thus of the carrier 1300, is detected and fed back. Accordingly, a position sensor 1360 is required for closed loop control. The position sensor 1360 may be a hall sensor. The position sensor 1360 is disposed inside or outside of the coil 1330. The position sensor 1360 may be mounted on the substrate 1130 on which the coil 1330 is mounted. When the camera module 1000 is powered on, an initial position of the carrier 1300 is detected by the position sensor 1360. Then, the carrier 1300 is moved from the detected initial position to an initially set position.
The term “initial position” may refer to a position of the carrier 1300 in an optical axis direction when the camera module 1000 is powered on, and the term “initially set position” may refer to a position at which the focus of the carrier 1300 is at infinity. The carrier 1300 is moved from the initially set position to a target position by a driving signal of a circuit element. During a focusing process, the carrier 1300 may be moved forward and backward in the optical axis direction (the Z-axis direction) (for example, bi-directionally). A magnet and a coil may be additionally provided to secure sufficient driving force during focusing. When the area in which a magnet is mounted is reduced with the trend toward slimming of camera modules, the size of the magnet is decreased, and thus sufficient driving force required for focusing may not be secured. According to the various examples, although not illustrated, magnets may be respectively attached to different surfaces of the carrier 1300, and coils may be respectively provided on different surfaces of the housing 1100 to oppose the magnets. Thus, sufficient driving force for focusing may be secured even when a camera module is slimmed. Referring to FIGS. 2 to 7, a shake correction unit of the lens driving device according to an example is disclosed. The lens driving device includes a focusing unit, moving the carrier 1300 in an optical axis direction to perform focusing, and a shake correction unit, moving the lens module 1500 disposed inside of the carrier 1300 in a direction perpendicular to the optical axis direction to perform shake correction. The shake correction unit has a structure generating driving force to move the lens module 1500, accommodated in the carrier 1300, in a first direction (an X-axis direction) and a second direction (a Y-axis direction), both perpendicular to the optical axis direction (the Z-axis direction).
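The closed-loop focusing sequence described above (detect the initial position with the position sensor 1360, then drive the carrier 1300 to the initially set position and on to a target position) can be sketched as a simple proportional feedback loop. This is an illustrative sketch only: the sensor and actuator callables, the gain, and the toy plant below are hypothetical stand-ins, not a model of the real hall sensor or voice-coil actuator.

```python
def drive_to(target, read_position, apply_current, gain=0.5, tol=1, max_steps=100):
    """Proportional closed-loop positioning of the carrier 1300.

    read_position: callable returning the position reading (position sensor 1360)
    apply_current: callable taking a signed drive value for the coil 1330
    Returns the settled position, or raises if it fails to settle.
    """
    for _ in range(max_steps):
        error = target - read_position()
        if abs(error) <= tol:
            return read_position()
        apply_current(gain * error)  # sign sets direction (bi-directional drive)
    raise RuntimeError("focus drive did not settle")

class ToyCarrier:
    """Toy plant for illustration: drive current moves the carrier
    proportionally (purely a simulation of the feedback loop)."""
    def __init__(self, pos):
        self.pos = pos
    def read(self):
        return self.pos
    def drive(self, u):
        self.pos += u
```

With `gain=0.5`, the position error is halved on each pass, so the carrier converges from any detected initial position to within the tolerance of the commanded position, first to the infinity (initially set) position and then to the focusing target.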
The shake correction unit is used to correct image blurring or video shaking caused by user hand-shake, or the like, when an image or a video is captured. For example, when shake occurs due to user hand-shake or the like at the time of capturing an image, a relative displacement corresponding to the shake is provided to the lens barrel 1510 to correct the shaking. As an example, the shake correction unit corrects the shaking by moving the lens barrel 1510 in a direction perpendicular to an optical axis (a Z axis). The shake correction unit may include a first frame 1400, the lens module 1500, and a second frame 1700 sequentially provided inside of the carrier 1300. The lens module 1500 includes a lens holder 1600 to which the lens barrel 1510 is coupled. The first frame 1400 and the second frame 1700 may be supported, with a rolling member interposed, between surfaces thereof parallel to the optical axis direction of the carrier 1300. The carrier 1300 may include the upper cover 1301 covering the first frame 1400, the lens module 1500, and the second frame 1700 from above while they are disposed inside of the carrier 1300. A rolling member may be interposed between the first frame 1400, the lens holder 1600, and the second frame 1700, sequentially provided in the optical axis direction, such that they may mutually move in a rolling motion. The shake correction unit according to this example may implement a structure in which the lens barrel 1510 may be moved as the first frame 1400 and the second frame 1700 are moved in the second direction (the Y-axis direction) and the first direction (the X-axis direction), respectively. For example, the lens module 1500 including the lens barrel 1510 should be moved as the first frame 1400 is moved in the second direction (the Y-axis direction) or the second frame 1700 is moved in the first direction (the X-axis direction).
Accordingly, the lens holder 1600, to which the lens barrel 1510 is coupled, may be provided with a guide groove 1675 formed on at least one of a lower surface of the lens holder 1600 and an upper surface of the first frame 1400 to be elongated in the first direction (the X-axis direction), such that a rolling member 1670 disposed between the lens holder 1600 and the first frame 1400 may be freely moved in a rolling motion in the first direction (the X-axis direction), the direction in which the second frame 1700 is moved. When the lower surface of the lens holder 1600 and the upper surface of the first frame 1400 are respectively provided with guide grooves 1675, the guide grooves 1675 may be provided as ‘┐’- or ‘└’-shaped grooves formed on edge portions of the lower surface of the lens holder 1600 and the upper surface of the first frame 1400, respectively. The guide grooves 1675 may be vertically coupled to each other to prevent separation of the rolling member 1670. Similarly, a guide groove 1685 is formed on at least one of an upper surface of the lens holder 1600 and a lower surface of the second frame 1700 to be elongated in the second direction (the Y-axis direction), such that a rolling member 1680 disposed between the lens holder 1600 and the second frame 1700 may be freely moved in a rolling motion in the second direction (the Y-axis direction), the direction in which the first frame 1400 is moved. When the upper surface of the lens holder 1600 and the lower surface of the second frame 1700 are respectively provided with guide grooves 1685, the guide grooves 1685 may be provided as ‘┐’- or ‘└’-shaped grooves formed on edge portions of the upper surface of the lens holder 1600 and the lower surface of the second frame 1700, respectively. The guide grooves 1685 may be vertically coupled to each other to prevent separation of the rolling member 1680.
Due to the above structure, the lens module1500is also moved when the first frame1400is moved in the second direction (the Y-axis direction) or the second frame1700is moved in the first direction (the X-axis direction), and thus, shake may be corrected. Each of the rolling members1670and1680may be provided with three rolling members to form a triangle (the configuration is not limited thereto, and each of the rolling members1670and1680may be provided with four rolling members). As a first magnet1420and a second magnet1720, to be described later, are disposed to be adjacent to each other, the rolling members1670and1680may be provided on both end portions of the first magnet1420and the second magnet1720, respectively. When each of the rolling members1670and1680is provided with three rolling members, an auxiliary rolling member1690may be provided between opposing surfaces in the optical axis direction of the first frame1400and the second frame1700. As such, at least one of the opposing surfaces of the first frame1400and the second frame1700may be provided with a guide groove1691in which the auxiliary rolling member1690is seated. The driving portion of the shake correction unit includes a first driving portion, driving the first frame1400, and a second driving portion driving the second frame1700. The first frame1400and the second frame1700are driven while being closely supported on a surface parallel to the optical axis direction of the carrier1300. The first frame1400is provided with the first magnet1420. The first magnet1420is disposed to oppose a first coil1430, provided in the housing1100, in the first direction (the X-axis direction) perpendicular to the optical axis direction. 
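The correction scheme above — the first frame1400compensating shake along the Y axis and the second frame1700along the X axis — amounts to splitting a sensed shake into two per-axis displacement commands. A minimal sketch follows; the function name, units, and the assumption that compensation is simply the negated shake per axis are illustrative only and are not taken from the disclosure.

```python
# Hypothetical sketch: decomposing a detected lens shake into displacement
# commands for the two frames. Names and units are illustrative assumptions.

def shake_to_frame_commands(shake_x_um, shake_y_um):
    """Map a sensed shake (micrometers) to compensating displacements of the
    second frame (X axis) and first frame (Y axis).

    Each frame moves along only one axis, so the compensation splits per
    axis, and here it is assumed to be simply the negated shake.
    """
    second_frame_dx = -shake_x_um  # second frame moves in the first (X) direction
    first_frame_dy = -shake_y_um   # first frame moves in the second (Y) direction
    return second_frame_dx, first_frame_dy
```

For instance, a shake of +3 um in X and −2 um in Y would yield commands of −3 um for the second frame and +2 um for the first frame.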
The first magnet1420is magnetized to have at least N and S poles in a second direction (a Y-axis direction) perpendicular to a direction opposing the first coil1430(for example, the first magnet1420is magnetized such that a surface opposing the first coil1430has at least N and S poles in a direction perpendicular to the optical axis). Accordingly, when power is applied to the first coil1430, force is generated to move the first frame1400in the second direction (the Y-axis direction) depending on electromagnetic interaction of the first magnet1420and the first coil1430. The second frame1700is provided with the second magnet1720. The second magnet1720is disposed to oppose a second coil1730, provided in the housing1100, in the second direction (the Y-axis direction) perpendicular to the optical axis direction and the first direction (the X-axis direction). The second magnet1720is magnetized to have at least N and S poles in the first direction (the X-axis direction) perpendicular to a direction opposing the second coil1730(for example, the second magnet1720is magnetized such that a surface opposing the second coil1730has at least N and S poles in a direction perpendicular to the optical axis). Accordingly, when power is applied to the second coil1730, force is generated to move the second frame1700in the first direction (the X-axis direction) depending on electromagnetic interaction of the second magnet1720and the second coil1730. The first coil1430and the second coil1730may be fixed to the substrate1130together with the driving coil1330of the focusing unit, and the substrate1130may be fixed to the housing1100. Each of the first coil1430and the second coil1730may be provided with one or two or more coils. The first frame1400and the second frame1700are closely supported on a sidewall of the carrier1300, for example, a surface of the carrier1300parallel to the optical axis direction. 
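The electromagnetic interaction just described — current through a coil in the opposing magnet's field producing force perpendicular to both — follows the Lorentz force law F = I·(L × B). The sketch below illustrates why a magnet opposing the coil in the X direction drives motion in the Y direction; the function, vectors, and every numeric value are assumptions for illustration and do not appear in the disclosure.

```python
# Hedged sketch of the Lorentz-force interaction between a coil segment
# and a magnet's field. All values are illustrative assumptions.

def lorentz_force(current_a, wire_vec_m, b_field_t):
    """F = I * (L x B) for a straight wire segment of length vector L (m)
    carrying current I (A) in a uniform field B (T)."""
    lx, ly, lz = wire_vec_m
    bx, by, bz = b_field_t
    return (current_a * (ly * bz - lz * by),
            current_a * (lz * bx - lx * bz),
            current_a * (lx * by - ly * bx))

# First driving portion: the magnet opposes the coil in the X direction,
# so the field at the wire points along X; a wire segment along Z then
# feels a force along Y — moving the frame in the second (Y-axis)
# direction, as described.
f = lorentz_force(0.1, (0.0, 0.0, 0.005), (0.02, 0.0, 0.0))
```

The same reasoning, with the roles of the X and Y axes exchanged, gives the X-direction force on the second frame from the second magnet and coil.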
The first frame1400and the second frame1700are supported on the sidewall of the carrier1300by attractive force with a first yoke1450and a second yoke1750provided in the housing1100. Since each of the first yoke1450and the second yoke1750may be a metallic or non-metallic magnetic material to shield a magnetic field, magnetic flux (a magnetic field) generated by a coil, a magnet, or an interface thereof may be prevented from leaking outwardly of the camera module1000. The first yoke1450is disposed to oppose the first magnet1420with the first coil1430interposed therebetween, and the second yoke1750is disposed to oppose the second magnet1720with the second coil1730interposed therebetween. For example, the first yoke1450and the second yoke1750may be disposed on rear surfaces of the first coil1430and the second coil1730, respectively. The first yoke1450and the second yoke1750may allow the first frame1400and the second frame1700to be closely supported on an internal wall of the carrier1300by the attractive force with the first magnet1420and the second magnet1720, respectively. The first frame1400and the second frame1700may include a first rolling member1470and a second rolling member1770disposed between the first and second frames1400and1700and the internal wall of the carrier1300to easily move in a sliding or rolling motion on the internal wall of the carrier1300, respectively. A surface, on which the internal walls of the first frame1400and the carrier1300oppose each other, may be provided with a first guide groove1475formed to be elongated in the second direction (the Y-axis direction) such that the first rolling member1470is easily moved in a sliding or rolling motion. A surface, on which the internal walls of the second frame1700and the carrier1300oppose each other, may be provided with a second guide groove1775formed to be elongated in the first direction (the X-axis direction) such that the second rolling member1770is easily moved in a sliding or rolling motion. 
The first rolling member1470and the second rolling member1770may be provided with two first magnets1420and two second magnets1720on external sides of both end portions thereof, respectively (the configuration is not limited thereto, and the first rolling member1470and the second rolling member1770may be provided with three or more first magnets1420and three or more second magnets1720, respectively). The first guide groove1475may be formed such that the movement of the first rolling member1470is limited only in the first direction (the X-axis direction), a direction in which the first frame1400is supported, and the movement or tilting of the first rolling member1470is not limited in the optical axis direction (the Z-axis direction) and the second direction (the Y-axis direction). For example, in addition to the movement of the first frame1400in the second direction (the Y-axis direction), the first frame1400may be tilted based on a shaft connecting the two first rolling members1470provided on both sides, or the first guide groove1475may be provided to have a width greater than a width of the first rolling member1470in all directions such that the rolling motion of the first rolling member1470is not limited (a depth of the first guide groove1475should be constantly maintained because the movement thereof is limited in the first direction (the X-axis direction)). The second guide groove1775may be formed such that the movement of the second rolling member1770is limited only in the second direction (the Y-axis direction), a direction in which the second frame1700is supported, and the movement of the second rolling member1770is not limited in the optical axis direction (the Z-axis direction) and the first direction (the X-axis direction). 
For example, in addition to the movement of the second frame1700in the first direction (the X-axis direction), the second frame1700may be tilted based on a shaft connecting the two second rolling members1770provided on both sides, or the second guide groove1775may be provided to have a width greater than a width of the second rolling member1770in all directions such that the rolling motion of the second rolling member1770is not limited (a depth of the second guide groove1775should be constantly maintained because the movement thereof is limited in the second direction (the Y-axis direction)). The first and second magnets1420and1720of the shake correction driving unit including the first driving unit and the second driving unit are mounted on the first and second frames1400and1700, respectively. The first and second coils1430and1730, respectively opposing the first and second magnets1420and1720, are mounted in the housing1100. For ease of description, in a portion of the drawings, the first and second coils1430and1730are illustrated as being disposed on a side of the carrier1300. However, referring toFIG.2, both of the first and second coils1430and1730may be mounted in the housing1100. The first and second magnets1420and1720are movable members moved together with the lens module1500in a direction perpendicular to the optical axis (the Z-axis), and the first and second coils1430and1730are fixed members fixed to the housing1100. However, the configuration is not limited thereto, and positions of the first and second magnets1420and1720and the first and second coils1430and1730are interchangeable with each other. The shake correction driving unit may use a closed loop control method in which the positions of the first and second frames1400and1700are continuously sensed and reflected on driving. 
Accordingly, the first and second frames1400and1700may include first and second position sensors1460and1760, opposing the first and second magnets1420and1720, to sense the positions of the first and second frames1400and1700. In this case, the first and second position sensors1460and1760may be provided inside or by the first and second coils1430and1730of the substrate1130. This example includes all structures in which one or two or more first and second coils1430and1730, opposing the first and second magnets1420and1720provided on the first and second frames1400and1700, are provided, respectively. When two or more first and second coils1430and1730are provided, the amount of magnetic flux may be adjusted to more efficiently prevent leakage of the magnetic flux. In the camera module1000according to this example, side surfaces of the housing1100using a VCM actuator using a magnet and a coil may all be finished with a yoke, capable of preventing leakage of magnetic flux. As a result, leakage of a magnetic field may be effectively prevented. FIG.1is an assembled exploded perspective view of a camera module according to another example, andFIG.8is an exploded perspective view of the camera module according to another example. Referring toFIGS.1and8, a camera module2000includes a housing2100, a lens module2500including a lens barrel2510accommodated in the housing2100, a lens driving device moving the lens module2500, and an image sensor unit2150converting light, incident through the lens barrel2510, into an electrical signal. The camera module2000may further include a case2110or an upper cover2301covering the housing2100from above. 
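The closed loop control method mentioned above — continuously sensing the frame positions and reflecting them on driving — can be sketched as a simple feedback iteration. This is a toy proportional-only model: the gain, the plant response, and the function names are illustrative assumptions, not details from the disclosure.

```python
# Illustrative closed-loop sketch: the position-sensor reading is compared
# with the target, and the coil drive is updated from the error.
# All constants are assumptions for illustration.

def closed_loop_step(target_um, sensed_um, gain=0.5):
    """One control iteration: return a drive correction proportional to
    the remaining position error (proportional-only control)."""
    error = target_um - sensed_um
    return gain * error

# Toy plant: the frame moves by (drive * 1.0) um per step. Iterating the
# sense-and-correct loop converges the position toward a 10 um target.
position = 0.0
for _ in range(20):
    position += closed_loop_step(10.0, position) * 1.0
```

A real implementation would typically add integral and derivative terms and account for sensor noise and actuator dynamics; the point here is only the sense-compare-drive cycle.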
The lens barrel2510may have a hollow cylindrical shape allowing a plurality of lenses for capturing a subject to be accommodated therein (the configuration is not limited thereto, and the lens barrel2510may have a partially cut exterior, and the inside of the lens barrel2510may be provided with a circular lens or a D-cut lens which is a lens having one side partially cut), and a plurality of lenses are mounted in the lens barrel2510. The plurality of lenses is arranged in as large a number as necessary, depending on a design of the lens barrel2510, and each of the plurality of lenses has the same or different optical characteristics such as refractive index, or the like. The lens driving device moves the lens barrel2510in an optical axis direction or a direction perpendicular to the optical axis direction. As an example, the lens driving device may move the lens barrel2510in an optical axis direction (a Z-axis direction) to adjust a focus, and may move the lens barrel2510in X-axis and Y-axis directions, perpendicular to the optical axis direction (the Z-axis direction), to correct shaking at the time of capturing an image. The lens driving device includes a focusing unit (an autofocusing part) and a shake correction unit (a shake correction portion). The image sensor unit2150converts light, incident through the lens barrel2510, into an electrical signal. As an example, the image sensor unit2150may include an image sensor2151and a printed circuit board (PCB)2153connected to the image sensor2151, and may further include an infrared filter. The lens module2500, including the lens barrel2510, and the lens driving device are accommodated in the housing2100. As an example, the housing2100has a shape with an open top and bottom, and the lens module2500and the lens driving device may be accommodated in an internal space of the housing2100. The image sensor unit2150is disposed below the housing2100. 
The case2110is coupled to the housing2100to surround an external surface of the housing2100, and serves to protect internal components of the camera module2000. The case2110may serve to shield electromagnetic waves. As an example, the case2110may shield electromagnetic waves generated by the camera module2000such that electromagnetic waves do not affect other electronic components in the portable electronic device. Since a portable electronic device is equipped with various electronic components other than the camera module2000, the case2110may shield electromagnetic waves generated by such electronic components such that the electromagnetic waves do not affect the camera module. Referring toFIGS.8to13, a focusing unit of the lens driving device according to another example is illustrated. The lens driving device includes a focusing unit, moving a carrier2300in an optical axis direction to perform autofocusing, and a shake correction unit moving the lens module2500disposed inside of the carrier2300in a direction perpendicular to the optical axis direction, to perform shake correction. The focusing unit has a structure generating driving force to move the carrier2300, accommodating the lens module2500, in the optical axis direction (the Z-axis direction). A driving portion of the focusing unit includes a magnet2320and a coil2330. The magnet2320is mounted on the carrier2300. As an example, the magnet2320may be mounted on one surface of the carrier2300. The coil2330is mounted in the housing2100. As an example, the coil2330may be mounted in the housing2100through a substrate2130. The coil2330may be fixed to the substrate2130, and the substrate2130may be fixed to the housing2100in a state in which driving coils of the shake correction unit, to be described later, are also fixed together. 
The magnet2320is a movable member mounted on the carrier2300to move in the optical axis direction (the Z-axis direction) together with the carrier2300, and the coil2330is a fixed member fixed to the housing2100. However, the configuration is not limited thereto, and positions of the magnet2320and the coil2330are interchangeable with each other. When power is applied to the coil2330, the carrier2300may be moved in the optical axis direction (the Z-axis direction) by electromagnetic interaction between the magnet2320and the coil2330. Since the lens barrel2510is accommodated in the carrier2300, the lens barrel2510is also moved in the optical axis direction (the Z-axis direction) by the movement of the carrier2300. When the carrier2300is moved, a rolling member2370is disposed between the carrier2300and the housing2100to reduce friction between the carrier2300and the housing2100. The rolling member2370may have a ball shape. The rolling members2370may be disposed on both sides of the magnet2320. A yoke2350is disposed in the housing2100. For example, the yoke2350is disposed to oppose the magnet2320with the coil2330interposed therebetween. For example, the coil2330and the magnet2320are disposed to oppose each other, and the yoke2350is disposed on a rear surface of the coil2330such that the carrier2300is closely supported on the housing2100with the rolling member2370interposed therebetween. An attractive force acts between the yoke2350and the magnet2320in a direction perpendicular to the optical axis direction (the Z-axis direction). Accordingly, the rolling member2370may be maintained in a state of contact with the carrier2300and the housing2100by the attractive force between the yoke2350and the magnet2320. The yoke2350may also serve to focus magnetic force of the magnet2320, and may prevent magnetic flux from leaking outwardly. 
The various examples use a closed loop control method in which a position of the lens barrel2510, and the carrier2300, is detected and fed back. Accordingly, a position sensor2360is required for closed loop control. The position sensor2360may be a hall sensor. The position sensor2360is disposed inside or outside of the coil2330. The position sensor2360may be mounted on the substrate2130on which the coil2330is mounted. A magnet and a coil may be additionally provided to secure sufficient driving force during focusing. When an area, in which a magnet is mounted, is reduced with the trend for slimming of a camera module, a size of the magnet is decreased, and thus, sufficient driving force required for focusing may not be secured. According to the various examples, although not illustrated, magnets may be respectively attached to different surfaces of the carrier2300and coils may be respectively provided on different surfaces of the housing2100to oppose the magnets. Thus, sufficient driving force for focusing may be secured even when a camera module is slimmed. Referring toFIGS.8to13, a shake correction unit of the lens driving device according to an example is disclosed. The lens driving device includes a focusing unit, moving the carrier2300in an optical axis direction to perform focusing, and a shake correction unit moving the lens module2500disposed inside of the carrier2300in a direction perpendicular to the optical axis direction, to perform shake correction. The shake correction unit has a structure generating driving force to move the lens module2500, accommodated in the carrier2300, in a first direction (an X-axis direction) and a second direction (a Y-axis direction), perpendicular to the optical axis direction (the Z-axis direction). The shake correction unit is used to correct image blurring or video shaking caused by user hand-shake, or the like, when an image or a video is captured. 
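A hall sensor, as mentioned above, outputs a voltage that varies with the magnet's position, and the feedback loop needs that reading converted into a displacement. The linear calibration sketched below is an illustrative assumption only; the constants, the linearity over the travel range, and the function name are not from the disclosure.

```python
# Hedged sketch: converting a hall-sensor reading into a carrier position
# for closed-loop focusing. The linear model and calibration constants
# (mid-travel offset and sensitivity) are illustrative assumptions.

def hall_to_position_um(hall_mv, offset_mv=1650.0, sensitivity_mv_per_um=3.3):
    """Map a hall-sensor output (mV) to displacement (um), assuming the
    field seen by the sensor varies linearly with the magnet's (and thus
    the carrier's) position over the travel range."""
    return (hall_mv - offset_mv) / sensitivity_mv_per_um
```

In practice such a mapping is usually obtained by calibration at assembly time, since the hall response is only approximately linear near the center of travel.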
For example, when the shake occurs due to user hand-shake, or the like, at the time of capturing an image, a relative displacement corresponding to the shake is provided to the lens barrel2510to correct the shaking. As an example, the shake correction unit corrects the shaking by moving the lens barrel2510in a direction perpendicular to an optical axis (a Z axis). The shake correction unit may include a frame2400and the lens module2500sequentially provided inside of the carrier2300. The lens module2500includes a lens holder2700to which the lens barrel2510is coupled. The carrier2300may include the upper cover2301covering the frame2400and the lens module2500from above while they are disposed inside of the carrier2300. The shake correction unit according to this example may implement a structure in which the lens barrel2510may be moved as the frame2400and the lens holder2700are moved in the second direction (the Y-axis direction) and the first direction (the X-axis direction), respectively. For example, the lens holder2700, to which the lens barrel2510is fixed, is moved when the frame2400is moved in the second direction (the Y-axis direction) or the lens holder2700is moved in the first direction (the X-axis direction). For example, the lens barrel2510is moved with the movement of the lens holder2700because the lens barrel2510is fixed to the lens holder2700, and is moved together with the frame2400even when the frame2400is moved because the lens holder2700is a member moved while being supported on a side surface of the frame2400. Due to the above structure, the lens barrel2510is also moved when the frame2400is moved in the second direction (the Y-axis direction) or the lens holder2700is moved in the first direction (the X-axis direction), and thus, shake may be corrected. A driving portion of the shake correction unit includes a first driving portion, driving the frame2400, and a second driving portion driving the lens holder2700. 
The frame2400is driven while being closely supported on a surface parallel to an optical axis direction of the carrier2300, and the lens holder2700is driven while being closely supported on a surface parallel to an optical axis direction of the frame2400. The frame2400includes a first magnet2420. The first magnet2420is disposed to oppose a first coil2430, provided in the housing2100, in the first direction (the X-axis direction) perpendicular to the optical axis direction. The first magnet2420is magnetized to have at least N and S poles in a second direction (a Y-axis direction) perpendicular to a direction opposing the first coil2430(for example, the first magnet2420is magnetized such that a surface opposing the first coil2430has at least N and S poles in a direction perpendicular to the optical axis). Accordingly, when power is applied to the first coil2430, force is generated to move the frame2400in the second direction (the Y-axis direction) depending on electromagnetic interaction of the first magnet2420and the first coil2430. The lens holder2700is provided with a second magnet2720. The second magnet2720is disposed to oppose a second coil2730, provided in the housing2100, in the second direction (the Y-axis direction) perpendicular to the optical axis direction and the first direction (the X-axis direction). The second magnet2720is magnetized to have at least N and S poles in the first direction (the X-axis direction) perpendicular to a direction opposing the second coil2730(for example, the second magnet2720is magnetized such that a surface opposing the second coil2730has at least N and S poles in a direction perpendicular to the optical axis). Accordingly, when power is applied to the second coil2730, force is generated to move the lens holder2700in the first direction (the X-axis direction) depending on electromagnetic interaction of the second magnet2720and the second coil2730. 
The first coil2430and the second coil2730may be fixed to the substrate2130together with the driving coil2330of the focusing unit, and the substrate2130may be fixed to the housing2100. The frame2400is closely supported on a sidewall of the carrier2300, for example, a surface of the carrier2300parallel to the optical axis direction. The lens holder2700is closely supported on a sidewall of the frame2400, for example, a surface of the frame2400parallel to the optical axis direction. The frame2400and the lens holder2700are supported on sidewalls of the carrier2300and the frame2400by attractive force with a first yoke2450and a second yoke2750provided in the housing2100. Since each of the first yoke2450and the second yoke2750may be a metallic or non-metallic magnetic material to shield a magnetic field, magnetic flux (a magnetic field) generated by a coil, a magnet, or an interface thereof may be prevented from leaking outwardly of the camera module2000. The first yoke2450is disposed to oppose the first magnet2420with the first coil2430interposed therebetween, and the second yoke2750is disposed to oppose the second magnet2720with the second coil2730interposed therebetween. The first yoke2450and the second yoke2750may be disposed on rear surfaces of the first coil2430and the second coil2730, respectively. The first yoke2450and the second yoke2750may allow the frame2400and the lens holder2700to be closely supported on internal walls of the carrier2300and the frame2400by the attractive force with the first magnet2420and the second magnet2720, respectively. The frame2400may include a first rolling member2470between the internal wall of the carrier2300and the frame2400to be easily moved in a sliding or rolling motion. The lens holder2700may include a second rolling member2770between the internal wall of the frame2400and the lens holder2700to be easily moved in a sliding or rolling motion. 
A surface, on which the internal walls of the frame2400and the carrier2300oppose each other, may be provided with a first guide groove2475formed to be elongated in the second direction (the Y-axis direction) such that the first rolling member2470is easily moved in a sliding or rolling motion on at least one of the surfaces. A surface, on which the lens holder2700and the internal wall of the frame2400oppose each other, may be provided with a second guide groove2775formed to be elongated in the first direction (the X-axis direction) such that the second rolling member2770is easily moved in a sliding or rolling motion on at least one of the surfaces. The first rolling member2470and the second rolling member2770may be provided with one or two first magnets2420and one or two second magnets2720on external sides of both end portions thereof, respectively, to form a triangle or a quadrangle. One rolling member may be provided in each guide groove. The first and second magnets2420and2720of the shake correction driving unit including the first driving unit and the second driving unit are mounted on the frame2400and the lens holder2700, respectively. The first and second coils2430and2730, respectively opposing the first and second magnets2420and2720, are mounted in the housing2100. For ease of description, in a portion of the drawings, the first and second coils2430and2730are illustrated as being disposed on a side of the carrier2300. However, referring toFIG.8, both of the first and second coils2430and2730may be mounted in the housing2100. The first and second magnets2420and2720are movable members moved together with the lens module2500in a direction perpendicular to the optical axis (the Z-axis), and the first and second coils2430and2730are fixed members fixed to the housing2100. 
However, the configuration is not limited thereto, and positions of the first and second magnets2420and2720and the first and second coils2430and2730are interchangeable with each other. The shake correction driving unit may use a closed loop control method in which the positions of the frame2400and the lens holder2700are continuously sensed and reflected on driving. Accordingly, the frame2400and the lens holder2700may include first and second position sensors2460and2760, opposing the first and second magnets2420and2720, to sense the positions of the frame2400and the lens holder2700. In this case, the first and second position sensors2460and2760may be provided inside of or adjacent to the first and second coils2430and2730on the substrate2130. This example includes all structures in which one or two or more first and second coils2430and2730, opposing the first and second magnets2420and2720provided on the frame2400and the lens holder2700, are provided, respectively. When two or more first and second coils2430and2730are provided, the amount of magnetic flux may be adjusted to more efficiently prevent leakage of the magnetic flux. In the camera module2000according to this example, all side surfaces of the housing2100using a VCM actuator using a magnet and a coil may be finished with a yoke capable of preventing leakage of magnetic flux. As a result, magnetic field leakage of the camera module2000may be effectively prevented. As described above, leakage of a magnetic field may be significantly reduced while employing an actuator using a magnet and a coil. Thus, miniaturization and accurate driving of a camera module may be achieved. In addition, even when camera modules are arranged to be adjacent to each other, magnetic field interference may be significantly reduced. Thus, the camera modules may be freely arranged. 
While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in forms and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
11860513

DETAILED DESCRIPTION OF DISCLOSURE

The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are in direct contact, and may also include embodiments in which additional features may be disposed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Moreover, the formation of a feature on, connected to, and/or coupled to another feature in the present disclosure that follows may include embodiments in which the features are in direct contact, and may also include embodiments in which additional features may be disposed interposing the features, such that the features may not be in direct contact. In addition, spatially relative terms, for example, “vertical,” “above,” “over,” “below,” “bottom,” etc., as well as derivatives thereof (e.g., “downwardly,” “upwardly,” etc.) are used in the present disclosure for ease of description of one feature's relationship to another feature. The spatially relative terms are intended to cover different orientations of the device, including the features. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. 
It should be appreciated that each term, which is defined in a commonly used dictionary, should be interpreted as having a meaning conforming to the relative skills and the background or the context of the present disclosure, and should not be interpreted in an idealized or overly formal manner unless defined otherwise. Use of ordinal terms such as “first”, “second”, etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term) to distinguish the claim elements. In addition, in some embodiments of the present disclosure, terms concerning attachments, coupling and the like, such as “connected” and “interconnected”, refer to a relationship wherein structures are secured or attached to one another either directly or indirectly through intervening structures, as well as both movable or rigid attachments or relationships, unless expressly described otherwise. FIG.1is a schematic view of a driving mechanism1in some embodiments of the present disclosure. The driving mechanism1may mainly include a case10, a base20, a movable portion30, a driving assembly40, a position sensing assembly50, a holder60, a substrate70, an optical sensor80, and a control assembly82. The driving mechanism1may be used for driving an optical assembly100, and the optical assembly100has a main axis O. In particular, the case10and the base20may be called as a fixed portion F, and an accommodation space S is formed in the case10and the base20. The movable portion30and the optical assembly100are disposed in the accommodation space S. The movable portion30is movably connected to the fixed portion, and the optical assembly100is disposed on the movable portion30. 
For example, the movable portion30may be connected to the optical assembly100through a connecting portion34. Therefore, when the movable portion30is moving relative to the fixed portion F, the optical assembly100may be moved by the movable portion30to move relative to the fixed portion F. However, the present disclosure is not limited thereto. The driving mechanism1may also be used for driving other mechanisms, such as vibration-type motors, depending on design requirements. FIG.2is a top view of the base20, the movable portion30, and the driving assembly40. In some embodiments, as shown inFIG.2, the movable portion30has an opening36, and the optical assembly100may be disposed in the opening36. The driving assembly40may include a first driving element42and a second driving element44, the first driving element42includes a first driving unit42A and a second driving unit42B, and the second driving element44includes a third driving unit44A and a fourth driving unit44B. The first driving unit42A, the second driving unit42B, the third driving unit44A, and the fourth driving unit44B are used for driving the movable portion30to move relative to the fixed portion F. In some embodiments, the first driving unit42A, the second driving unit42B, the third driving unit44A, and the fourth driving unit44B are made of a shape memory alloy (SMA) and are strip-shaped. The length of the first driving unit42A, the second driving unit42B, the third driving unit44A, or the fourth driving unit44B changes as the crystal structure of the SMA changes with temperature. Near the transition temperature, the length of the first driving unit42A, the second driving unit42B, the third driving unit44A, or the fourth driving unit44B increases as the temperature decreases, and the first driving unit42A, the second driving unit42B, the third driving unit44A, or the fourth driving unit44B contracts as the temperature increases. In some embodiments, when a signal (e.g.
voltage or current) is provided to the first driving unit42A, the second driving unit42B, the third driving unit44A, or the fourth driving unit44B, the temperature may be increased by the thermal effect of the current, so that the length of the first driving unit42A, the second driving unit42B, the third driving unit44A, or the fourth driving unit44B may be decreased. Conversely, if a signal having a lower intensity is provided, which makes the heating rate lower than the heat dissipation rate of the environment, the temperature of the first driving unit42A, the second driving unit42B, the third driving unit44A, or the fourth driving unit44B may be decreased, and the length may be increased. As shown inFIG.1andFIG.2, in some embodiments, the base20of the fixed portion F has an extension portion22, the movable portion30has a protruding portion32, an end of the driving assembly40may be disposed on the extension portion22, and another end of the driving assembly40may be disposed on the protruding portion32. As shown inFIG.2, in some embodiments, the first driving unit42A extends from the extension portion22to the protruding portion32in a first direction D1(X direction), the second driving unit42B extends from the extension portion22to the protruding portion32in a second direction D2(−X direction), the third driving unit44A extends from the extension portion22to the protruding portion32in a third direction D3(−Y direction), and the fourth driving unit44B extends from the extension portion22to the protruding portion32in a fourth direction D4(Y direction). The first direction D1is different from the third direction D3, the second direction D2is different from the third direction D3, the first direction D1is substantially parallel to the second direction D2, and the third direction D3is substantially parallel to the fourth direction D4.
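The signal-to-length behavior described above can be sketched in a short simulation. Every constant below (the heating and dissipation coefficients, transition temperature, and contraction rate) is an illustrative assumption rather than a value taken from this disclosure; only the qualitative behavior, that a strong drive signal heats the SMA wire until it contracts and a weak signal lets it cool and recover its length, follows the description.

```python
# Minimal sketch (assumed constants) of an SMA driving unit: Joule heating
# from the drive signal raises the wire temperature, the environment
# dissipates heat, and the wire contracts above the transition temperature.

def step_temperature(temp, current, dt=0.01,
                     k_heat=50.0, k_diss=2.0, t_env=25.0):
    """Advance wire temperature: heating by current vs. ambient dissipation."""
    d_temp = k_heat * current**2 - k_diss * (temp - t_env)
    return temp + d_temp * dt

def wire_length(temp, base_len=10.0, contraction=0.004, t_transition=70.0):
    """Length decreases as temperature rises past the transition temperature."""
    if temp <= t_transition:
        return base_len
    return base_len * (1.0 - contraction * (temp - t_transition))

temp = 25.0
for _ in range(1000):          # strong signal: heating outpaces dissipation
    temp = step_temperature(temp, current=2.0)
hot_len = wire_length(temp)

for _ in range(1000):          # weak signal: dissipation dominates, wire cools
    temp = step_temperature(temp, current=0.1)
cool_len = wire_length(temp)

assert hot_len < cool_len      # contraction when driven, recovery when idle
```

With these assumed coefficients the strongly driven wire settles near 125 degrees and contracts, while the weakly driven wire stays below the transition temperature and keeps its full length, matching the heating-rate-versus-dissipation-rate argument in the text.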
As a result, the movable portion30may be moved relative to the fixed portion F in different directions by controlling the lengths of the first driving unit42A, the second driving unit42B, the third driving unit44A, and the fourth driving unit44B. For example, the first driving element42may drive the movable portion30to move relative to the fixed portion F in a first dimension (X direction or −X direction), and the second driving element44may drive the movable portion30to move relative to the fixed portion F in a second dimension (Y direction or −Y direction). In other words, the driving assembly40may connect the fixed portion F and the movable portion30to drive the movable portion30to move relative to the fixed portion F. The position sensing assembly50may include a first position sensor52, a second position sensor54, a first reference element56, and a second reference element58. The first position sensor52and the second position sensor54may be disposed on the fixed portion F (e.g. the base20), and the first reference element56and the second reference element58may be disposed on the optical assembly100. The first position sensor52corresponds to the first reference element56(e.g. aligned in the Z direction), and the second position sensor54corresponds to the second reference element58(e.g. aligned in the Z direction). In some embodiments, the first position sensor52and the second position sensor54may be, for example, a Hall sensor, a magnetoresistance effect sensor (MR sensor), a giant magnetoresistance effect sensor (GMR sensor), a tunneling magnetoresistance effect sensor (TMR sensor), or a fluxgate sensor. In some embodiments, the first reference element56and the second reference element58may be sensing magnets, such as a first magnetic unit and a second magnetic unit, respectively.
When the movable portion30moves relative to the fixed portion F, the first position sensor52and the second position sensor54may detect the intensity difference of the magnetic field generated by the first reference element56and the second reference element58, so the position of the movable portion30relative to the fixed portion F may be determined. In some embodiments, the position sensing assembly50is at least partially disposed in the accommodation space S. For example, the whole position sensing assembly50may be disposed in the accommodation space S, or a portion of the position sensing assembly50may be disposed outside the accommodation space S, but the present disclosure is not limited thereto. As a result, the distance between the position sensing assembly50and the movable portion30may be decreased, so more accurate position information of the movable portion30may be obtained. A holder60and a substrate70may be provided on another side of the base20. The holder60may be disposed on the substrate70, and the base20may be disposed on the holder60. An optical sensor80may be disposed in the holder60to detect the light passing through the optical assembly100. Furthermore, a control assembly82may be disposed on the substrate70to control the driving mechanism1. Although the control assembly82is illustrated as disposed on the substrate70, the present disclosure is not limited thereto. For example, the control assembly82may be separated from the driving mechanism1. For example, when the driving mechanism1is disposed in an electronic apparatus (e.g. a cell phone or a tablet), the control assembly82may be the central processing unit (CPU) of the electronic apparatus, depending on design requirements. FIG.3is a top view of some elements of the optical assembly100.
Referring toFIG.1andFIG.3, the optical assembly100includes a cover110, a bottom120, an inner movable portion130, a first inner driving element140, a second inner driving element160, a first resilient element170, and a second resilient element172. Moreover, an optical element (not shown) may be disposed in the optical assembly100, such as on the inner movable portion130. The optical element may include a lens, a mirror, a prism, a splitter, or an aperture. The optical assembly100may move the optical element to achieve auto focus (AF) or optical image stabilization (OIS). The cover110and the bottom120may be collectively called an inner fixed portion IF. The cover110and the bottom120may be combined to form the outer case of the optical assembly100. For example, the bottom120may be fixed on the cover110. It should be realized that an opening may be formed on the cover110, and another opening may be formed on the bottom120. The center of the opening of the cover110corresponds to the main axis O of the optical assembly100, and the opening of the bottom120corresponds to the optical sensor80disposed outside the optical assembly100. Therefore, the optical element disposed in the optical assembly100may focus light onto the optical sensor80along the main axis O. A through hole may be formed on the inner movable portion130, the optical element may be fixed in the through hole, and the first inner driving element140may be disposed on the outer surface of the inner movable portion130. The second inner driving element160may be affixed on the cover110. It should be noted that in this embodiment, the second inner driving element160and the first reference element56or the second reference element58are the same magnetic element.
In other words, the second inner driving element160may be used for driving the inner movable portion130, and may act as the first reference element56and the second reference element58as well, so that the position of the optical assembly100may be detected by the first position sensor52and the second position sensor54. Therefore, the number of the elements in the driving mechanism1may be reduced to achieve miniaturization. The first inner driving element140and the second inner driving element160may be collectively called the inner driving assembly ID to drive the inner movable portion130to move relative to the inner fixed portion IF. It should be realized that the interaction between the second inner driving element160and the first inner driving element140may generate a magnetic force to move the inner movable portion130relative to the inner fixed portion IF along the main axis O, so fast focus may be achieved. In this embodiment, the inner movable portion130and the optical element disposed therein are movably disposed in the inner fixed portion IF. More specifically, the inner movable portion130may be connected to the inner fixed portion IF and suspended in the inner fixed portion IF (FIG.3) through the first resilient element170and the second resilient element172, which may include metal material. When current is passed through the first inner driving element140, the first inner driving element140may interact with the magnetic field of the second inner driving element160to generate an electromagnetic force. As a result, the inner movable portion130and the optical element may be moved relative to the inner fixed portion IF along the main axis O to achieve auto focus. In some embodiments, additional circuitry may be provided on the bottom120to electrically connect to other electronic elements disposed in or outside the optical assembly100for auto focus or optical image stabilization.
The circuit on the bottom120may transmit electric signals to the first inner driving element140through the first resilient element170or the second resilient element172, so the movement of the inner movable portion130in the X, Y, or Z direction may be controlled. When the optical assembly100is assembled, the second resilient element172and the bottom120may be combined by soldering or laser welding to allow the first inner driving element140to be electrically connected to an external circuit. Moreover, in some embodiments, a plurality of additional driving coils (not shown) may be embedded in the bottom120to interact with the second inner driving element160, so the inner movable portion130may be moved. When the first inner driving element140and the additional driving coils in the bottom120interact with the second inner driving element160, driving forces having different directions may be generated to achieve auto focus and optical image stabilization. FIG.4is a block diagram showing the connection of some elements of the driving mechanism1. In some embodiments, as shown inFIG.4, the control assembly82may provide a control signal C to the driving assembly40, so that the driving assembly40may be driven by the control signal C from the control assembly82. The position sensing assembly50may provide a position signal P (e.g. including a first position signal P1and a second position signal P2) to the control assembly82. For example, the first position sensor52may provide the first position signal P1to the control assembly82, and the second position sensor54may provide the second position signal P2to the control assembly82. The first position signal P1and the second position signal P2may include the position information of the movable portion30relative to the fixed portion F in different dimensions.
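The way a Hall-type position sensor recovers the movable portion's offset from the field of its reference magnet can be illustrated with a minimal sketch. The linear field model and its calibration constants are assumptions for illustration only; the idea that each sensor-magnet pair maps a field change back to a displacement along one axis follows the description above.

```python
# Minimal sketch (assumed linear field model and constants) of one-axis
# position sensing: the sensor reads the reference magnet's field, and the
# field change is inverted to recover the displacement on that axis.

BASE_FIELD = 100.0  # assumed field reading at the predetermined position
SLOPE = 2.0         # assumed field change per unit displacement

def field_at(displacement):
    """Assumed linear model of the magnet field seen by the sensor."""
    return BASE_FIELD + SLOPE * displacement

def position_from_field(field):
    """Invert the field model to recover the displacement on one axis."""
    return (field - BASE_FIELD) / SLOPE

# P1 tracks one dimension (e.g. X) and P2 another (e.g. Y), as described.
p1 = position_from_field(field_at(0.3))    # portion shifted +0.3 units in X
p2 = position_from_field(field_at(-0.1))   # portion shifted -0.1 units in Y
assert abs(p1 - 0.3) < 1e-9 and abs(p2 + 0.1) < 1e-9
```

A real sensor's field-to-position relation would be nonlinear and calibrated per device; a linear model is used here only to show how two independent sensor readings yield position information in two dimensions.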
Therefore, the control assembly82may provide a control signal C to the driving assembly40corresponding to the position signal P provided by the position sensing assembly50, so the position of the movable portion30relative to the fixed portion F may be controlled by the driving assembly40. In some embodiments, the control signal C may include a first driving signal C1provided to the first driving unit42A, a second driving signal C2provided to the second driving unit42B, a third driving signal C3provided to the third driving unit44A, and a fourth driving signal C4provided to the fourth driving unit44B to separately drive the first driving unit42A, the second driving unit42B, the third driving unit44A, and the fourth driving unit44B. In some embodiments, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4may include periodic signals having frequencies lower than a maximum frequency. In other words, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4do not include periodic signals having frequencies higher than the maximum frequency, so that the movable portion30may vibrate relative to the fixed portion F at a frequency less than the maximum frequency. In some embodiments, the maximum frequency may be, for example, about 10000 Hz, and the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4do not include periodic signals having frequencies higher than 10000 Hz, but the present disclosure is not limited thereto. As a result, other elements in the driving mechanism1may be prevented from being interfered with by signals having excessively high frequencies.
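The frequency constraint above can be checked numerically. The sample rate and test waveforms below are assumptions; the 10000 Hz limit is the example value from the description. The sketch correlates a candidate drive waveform against a tone at a given frequency to estimate that component's amplitude, which is enough to tell a DC-plus-slow-ripple signal from one carrying a component above the limit.

```python
import math

# Illustrative check (assumed sample rate and waveforms) that a candidate
# drive signal carries no periodic component above the maximum frequency
# (about 10000 Hz in the description) before it is applied to the SMA units.

FS = 100_000      # assumed samples per second
F_MAX = 10_000    # maximum allowed frequency per the description

def tone_magnitude(samples, freq, fs=FS):
    """Amplitude of one frequency component via a naive DFT correlation."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * k / fs)
             for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * k / fs)
             for k, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n

t = [k / FS for k in range(1000)]                 # 10 ms of samples
slow = [1.0 + 0.2 * math.sin(2 * math.pi * 500 * tk) for tk in t]
fast = [s + 0.2 * math.sin(2 * math.pi * 20_000 * tk)
        for s, tk in zip(slow, t)]                # adds a 20 kHz component

assert tone_magnitude(slow, 20_000) < 1e-6        # within the limit
assert tone_magnitude(fast, 20_000) > 0.19        # violates the limit
assert tone_magnitude(slow, 500) > 0.19           # slow ripple is allowed
```

In a practical controller this spectral check would be replaced by designing the drive waveform to be band-limited in the first place; the correlation test is only a compact way to verify the stated property.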
In some embodiments, the frequency of the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4may be less than the maximum response frequency of the driving mechanism1so as to drive the driving mechanism1effectively. In some embodiments, the driving mechanism1may also include an environment sensing assembly84(shown inFIG.4), which may be disposed on the substrate70to detect the influence of the environment on the driving mechanism1, and then provide an environmental signal E to the control assembly82. The control assembly82may provide the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4based on the environmental signal E. In some embodiments, the environment sensing assembly84may include an inertia sensing element, such as a gyroscope, an accelerometer, an angular velocity meter, or a gravity direction sensor, etc., to detect the inertia of the driving mechanism1. In some embodiments, after receiving the environmental signal E, the control assembly82filters and removes the high-frequency signals in the environmental signal E to prevent the environmental signal E from being corrupted by high-frequency noise. For example, signals in the environmental signal E with a frequency higher than 10000 Hz may be removed, or signals with a frequency higher than the maximum frequency may be removed. Next, how the control assembly82controls the driving assembly40is described.FIG.5is a schematic view of the control signal C when the control assembly82is in a preparation mode. At this time, the control assembly82drives the driving assembly40according to the position signal P, so that the movable portion30is positioned at a predetermined position relative to the fixed portion F (as shown inFIG.2).
For example, at this time, the control assembly82may drive the first driving element42according to the first position signal P1, and may drive the second driving element44according to the second position signal P2, so that the movable portion30is at the predetermined position relative to the fixed portion F. Also, as an applied driving method, the first driving signal C1and the second driving signal C2may be calculated based on the first position signal P1to control the first driving element42, that is, the first driving unit42A and the second driving unit42B. The third driving signal C3and the fourth driving signal C4may be calculated based on the second position signal P2to control the second driving element44, that is, the third driving unit44A and the fourth driving unit44B. It should be noted that at this time, the control signal C (for example, the first driving signal C1, the second driving signal C2, the third driving signal C3, or the fourth driving signal C4) does not include a periodic signal with a frequency higher than the maximum frequency (e.g. higher than 10000 Hz). For example, the control signal C may only include a DC signal (e.g. DC current or DC voltage). As shown inFIG.5, at this time, the signal intensities of the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4are shown as coinciding with a first original intensity C01, a second original intensity C02, a third original intensity C03, and a fourth original intensity C04, respectively. However, it should be noted that at this time, the first original intensity C01, the second original intensity C02, the third original intensity C03, and the fourth original intensity C04of the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4are voltages or currents higher than zero.
In other words, in the preparation mode, the intensities (such as voltage or current) of the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4are higher than zero, so that a voltage or current that is not equal to zero passes through the first driving unit42A, the second driving unit42B, the third driving unit44A, and the fourth driving unit44B. In this way, the probability of occurrence of surges may be reduced. Since a surge includes high-frequency signals, if the probability of occurrence of surges is reduced, high-frequency noise may be reduced, thereby reducing the probability of noise generated when the driving mechanism1is operating. It should be noted that under this condition, the heating rate of the control signal C to the driving assembly40is less than the heat dissipation rate of the environment to the driving assembly40, so the temperature of the driving assembly40will not keep increasing, but will be maintained at a basic temperature. FIG.6Ais a schematic view of the control signal C when the control assembly82is in the first control mode, andFIG.6Bis a schematic view showing the tension differences of the first driving unit42A, the second driving unit42B, the third driving unit44A, and the fourth driving unit44B between the first control mode and the preparation mode inFIG.5. In the first control mode, the control assembly82controls the driving assembly40to drive the movable portion30to move in a first target direction relative to the fixed portion F. This embodiment uses the −X direction as an example, but it is not limited thereto. In the first control mode, the control assembly82may control the driving assembly40according to the first position signal P1and the second position signal P2.
As shown inFIG.6A, during the first control mode, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4have intensities (such as voltage or current) higher than zero. At this time, the intensity of the first driving signal C1is higher than the first original intensity C01, the intensity of the second driving signal C2is less than the second original intensity C02, the intensity of the third driving signal C3is higher than the third original intensity C03, and the intensity of the fourth driving signal C4is higher than the fourth original intensity C04. In other words, when compared to the preparation mode, the control assembly82increases the voltage or current of the first driving signal C1, decreases the voltage or current of the second driving signal C2, increases the voltage or current of the third driving signal C3, and increases the voltage or current of the fourth driving signal C4in the first control mode. As a result, as shown inFIG.6B, when compared to the preparation mode, the tension of the first driving unit42A increases, the tension of the second driving unit42B decreases, the tension of the third driving unit44A increases, and the tension of the fourth driving unit44B increases in the first control mode, so that the movable portion30may be driven to move in the −X direction. In some embodiments, in the first control mode, the voltage or current of the first driving signal C1is higher than the voltage or current of the second driving signal C2. For example, as shown inFIG.6A, the intensity of the first driving signal C1is higher than the first original intensity C01, and the intensity of the second driving signal C2is less than the second original intensity C02. 
In some embodiments, the first original intensity C01may be substantially equal to the second original intensity C02, so the voltage or current of the first driving signal C1may be higher than the voltage or current of the second driving signal C2. In other words, the tension of the first driving unit42A increases, and the tension of the second driving unit42B decreases, whereby a force in the −X direction may be applied to the movable portion30to move the movable portion30in the −X direction. In the first control mode, the control signal C (including the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4) does not include a periodic signal with a frequency higher than the maximum frequency (for example, 10000 Hz). In this way, the elements of the driving mechanism1may be prevented from being interfered with by high-frequency signals. In some embodiments, as shown inFIG.6A, in the first control mode, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4only have a DC voltage or DC current rather than AC voltage or AC current. In other words, in the first control mode, the intensities of the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4are substantially constant. In this way, the elements of the driving mechanism1may be protected from interference by signals with excessively high frequencies. In some embodiments, in the first control mode, the absolute value of the voltage or current of the first driving signal C1increased by the control assembly82(i.e. the absolute value of the intensity difference between the first driving signal C1and the first original intensity C01) is different from the absolute value of the voltage or current of the third driving signal C3increased by the control assembly82(i.e.
the absolute value of the intensity difference between the third driving signal C3and the third original intensity C03). For example, as shown inFIG.6A, the absolute value of the voltage or current of the first driving signal C1increased by the control assembly82may be higher than the absolute value of the voltage or current of the third driving signal C3increased by the control assembly82, that is, the intensity difference between the first driving signal C1and the first original intensity C01may be higher than the intensity difference between the third driving signal C3and the third original intensity C03. Thereby, the first driving unit42A receiving the first driving signal C1may generate a higher driving force than the third driving unit44A receiving the third driving signal C3to control the moving direction of the movable portion30. In some embodiments, in the first control mode, the absolute value of the intensity difference between the first driving signal C1and the first original intensity C01is about twice the absolute value of the intensity difference between the third driving signal C3and the third original intensity C03, but the present disclosure is not limited thereto. In addition, in some embodiments, the absolute value of the voltage or current of the first driving signal C1increased by the control assembly82(i.e. the absolute value of the difference between the first driving signal C1and the first original intensity C01) is different from the absolute value of the voltage or current of the fourth driving signal C4increased by the control assembly82(i.e. the absolute value of the difference between the fourth driving signal C4and the fourth original intensity C04) in the first control mode.
For example, as shown inFIG.6A, the absolute value of the voltage or current of the first driving signal C1increased by the control assembly82may be higher than the absolute value of the voltage or current of the fourth driving signal C4increased by the control assembly82. In other words, the absolute value of the difference between the first driving signal C1and the first original intensity C01may be higher than the absolute value of the difference between the fourth driving signal C4and the fourth original intensity C04. Thereby, when compared with the fourth driving unit44B which receives the fourth driving signal C4, the first driving unit42A which receives the first driving signal C1may generate a higher driving force to control the moving direction of the movable portion30. In some embodiments, the absolute value of the difference between the first driving signal C1and the first original intensity C01is about twice the absolute value of the difference between the fourth driving signal C4and the fourth original intensity C04in the first control mode, but the present disclosure is not limited thereto. In some embodiments, the intensity difference between the third driving signal C3and the third original intensity C03may be substantially equal to the intensity difference between the fourth driving signal C4and the fourth original intensity C04in the first control mode. In other words, the net force received by the movable portion30in the Y direction is about zero at this time, and the forces applied on the movable portion30by the third driving unit44A and the fourth driving unit44B may be balanced to stabilize the movable portion30in the Y direction. Therefore, the first driving element42may be used to drive the movable portion30to move in the −X direction, and the second driving element44may be used to prevent the movable portion30from rotating during translation to stabilize the movable portion30.
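The intensity adjustments of the first control mode can be summarized in a short sketch. The baseline intensities and the step size below are assumed values; the relationships between the adjustments (C1 raised, C2 lowered, C3 and C4 raised by equal amounts about half as large so the Y-direction forces balance) follow the description above.

```python
# Hedged sketch of the first control mode: starting from assumed
# preparation-mode intensities, raise C1, lower C2, and raise C3 and C4 by
# equal, roughly half-sized amounts so the movable portion translates in the
# -X direction while the Y-direction forces stay balanced.

ORIGINAL = {"C1": 1.0, "C2": 1.0, "C3": 1.0, "C4": 1.0}  # assumed baselines

def first_control_mode(delta=0.2):
    """Return the four DC drive intensities for a -X translation request."""
    c = dict(ORIGINAL)
    c["C1"] += delta        # first driving unit42A contracts harder
    c["C2"] -= delta        # second driving unit42B relaxes
    c["C3"] += delta / 2    # |dC1| is about twice |dC3| per the description
    c["C4"] += delta / 2    # equal to |dC3|, so the net Y force is near zero
    # Every intensity stays above zero, as in the preparation mode,
    # to reduce the probability of surges.
    assert all(v > 0 for v in c.values())
    return c

signals = first_control_mode()
assert signals["C1"] > signals["C2"]   # net force toward -X
```

Keeping the C3 and C4 increments identical mirrors the balancing role of the second driving element44: it steadies the movable portion in Y while the first driving element42does the translating.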
In addition, the control assembly82further includes a second control mode for controlling the driving assembly40.FIG.7Ais a schematic view of the control signal C of the control assembly82in the second control mode, andFIG.7Bis a schematic view showing the tension differences of the first driving unit42A, the second driving unit42B, the third driving unit44A, and the fourth driving unit44B between the second control mode and the preparation mode inFIG.5. In the second control mode, the control assembly82controls the driving assembly40to move the movable portion30relative to the fixed portion F in a second target direction. Counterclockwise rotation is used as an example in this embodiment, but it is not limited thereto. In the second control mode, the control assembly82may control the driving assembly40according to the first position signal P1and the second position signal P2. As shown inFIG.7A, during the second control mode, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4have intensities (such as voltage or current) higher than zero. At this time, the intensity of the first driving signal C1is higher than the first original intensity C01, the intensity of the second driving signal C2is higher than the second original intensity C02, the intensity of the third driving signal C3is less than the third original intensity C03, and the intensity of the fourth driving signal C4is less than the fourth original intensity C04. In other words, when compared to the preparation mode, the control assembly82increases the voltage or current of the first driving signal C1, increases the voltage or current of the second driving signal C2, decreases the voltage or current of the third driving signal C3, and decreases the voltage or current of the fourth driving signal C4in the second control mode. 
As a result, as shown inFIG.7B, when compared to the preparation mode, the tension of the first driving unit42A increases, the tension of the second driving unit42B increases, the tension of the third driving unit44A decreases, and the tension of the fourth driving unit44B decreases in the second control mode, so that the movable portion30may be driven to rotate in the counterclockwise direction. In the second control mode, the control signal C (including the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4) does not include a periodic signal with a frequency higher than the maximum frequency (for example, 10000 Hz). In this way, the elements of the driving mechanism1may be prevented from being interfered with by high-frequency signals. In some embodiments, as shown inFIG.7A, in the second control mode, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4only have a DC voltage or DC current rather than AC voltage or AC current. In other words, in the second control mode, the intensities of the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4are substantially constant. In this way, the elements of the driving mechanism1may be protected from interference by signals with excessively high frequencies. In addition, the control assembly82further includes a third control mode for controlling the driving assembly40.FIG.8Ais a schematic view of the control signal C of the control assembly82in the third control mode, andFIG.8Bis a schematic view showing the tension differences of the first driving unit42A, the second driving unit42B, the third driving unit44A, and the fourth driving unit44B between the third control mode and the preparation mode inFIG.5.
In the third control mode, the control assembly82controls the driving assembly40to move the movable portion30relative to the fixed portion F in a first target direction. The −X direction is used as an example in this embodiment, but it is not limited thereto. In the third control mode, unlike the first control mode, the control assembly82not only controls the driving assembly40according to the position signal P (for example, including the first position signal P1and the second position signal P2), but also controls the driving assembly40according to the environmental signal E. Thereby, the influence of the environment on the driving mechanism1may be reduced, and optical image stabilization may be achieved by translational movement. As shown inFIG.8A, during the third control mode, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4have intensities (such as voltage or current) higher than zero. At this time, the intensity of the first driving signal C1is higher than the first original intensity C01, the intensity of the second driving signal C2is less than the second original intensity C02, the intensity of the third driving signal C3is less than the third original intensity C03, and the intensity of the fourth driving signal C4is less than the fourth original intensity C04. In other words, when compared to the preparation mode, the control assembly82increases the voltage or current of the first driving signal C1, decreases the voltage or current of the second driving signal C2, decreases the voltage or current of the third driving signal C3, and decreases the voltage or current of the fourth driving signal C4in the third control mode.
As a result, as shown inFIG.8B, when compared to the preparation mode, the tension of the first driving unit42A increases, the tension of the second driving unit42B decreases, the tension of the third driving unit44A decreases, and the tension of the fourth driving unit44B decreases in the third control mode, so that the movable portion30may be driven to move in the −X direction. Moreover, decreasing the tensions of the third driving unit44A and the fourth driving unit44B reduces the energy required by the driving mechanism1, thereby saving energy. In some embodiments, in the third control mode, the voltage or current of the first driving signal C1is higher than the voltage or current of the second driving signal C2. For example, as shown inFIG.8A, the intensity of the first driving signal C1is higher than the first original intensity C01, and the intensity of the second driving signal C2is less than the second original intensity C02. In some embodiments, the first original intensity C01may be substantially equal to the second original intensity C02, so the voltage or current of the first driving signal C1may be higher than the voltage or current of the second driving signal C2. In other words, the tension of the first driving unit42A increases, and the tension of the second driving unit42B decreases, whereby a force in the −X direction may be applied to the movable portion30to move the movable portion30in the −X direction. In the third control mode, the control signal C (including the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4) does not include a periodic signal with a frequency higher than the maximum frequency (for example, 10000 Hz). Alternatively, in some embodiments, the control signal C only has a periodic signal with a frequency identical to that of the environmental signal E in the third control mode.
In this way, the elements of the driving mechanism1may be prevented from being interfered with by high-frequency signals. In some embodiments, as shown inFIG.8A, in the third control mode, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4only have a DC voltage or DC current rather than AC voltage or AC current. In other words, in the third control mode, the intensities of the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4are substantially constant. In this way, the elements of the driving mechanism1may be protected from interference by signals of excessively high frequency. In some embodiments, in the third control mode, the absolute value of the voltage or current of the first driving signal C1increased by the control assembly82(i.e. the absolute value of the intensity difference between the first driving signal C1and the first original intensity C01) is different from the absolute value of the voltage or current of the third driving signal C3increased by the control assembly82(i.e. the absolute value of the intensity difference between the third driving signal C3and the third original intensity C03). For example, as shown inFIG.8A, the absolute value of the voltage or current of the first driving signal C1increased by the control assembly82may be higher than the absolute value of the voltage or current of the third driving signal C3increased by the control assembly82; that is, the intensity difference between the first driving signal C1and the first original intensity C01may be higher than the intensity difference between the third driving signal C3and the third original intensity C03. Thereby, the first driving unit42A receiving the first driving signal C1may generate a higher driving force than the third driving unit44A receiving the third driving signal C3to control the moving direction of the movable portion30.
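The maximum-frequency constraint repeated for each control mode (no periodic component above, for example, 10000 Hz) can be checked numerically on a sampled control signal. The naive discrete Fourier transform below is an illustration, not part of the patent; a DC (substantially constant) signal trivially satisfies the constraint because its centered spectrum is empty.

```python
import math

def has_component_above(samples, sample_rate, f_max, tol=1e-6):
    """Return True if the sampled signal contains a periodic component
    whose frequency exceeds f_max (naive DFT; for illustration only)."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # remove the DC level first
    for k in range(1, n // 2 + 1):
        freq = k * sample_rate / n
        re = sum(c * math.cos(2 * math.pi * k * i / n)
                 for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * i / n)
                 for i, c in enumerate(centered))
        amp = 2 * math.hypot(re, im) / n  # amplitude of the k-th bin
        if freq > f_max and amp > tol:
            return True
    return False
```

For example, a constant 2.5 V signal passes, a 20 kHz ripple is rejected, and a 5 kHz ripple (below the 10000 Hz limit) is allowed.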
In some embodiments, in the third control mode, the absolute value of the intensity difference between the first driving signal C1and the first original intensity C01is about twice the absolute value of the intensity difference between the third driving signal C3and the third original intensity C03, but the present disclosure is not limited thereto. In addition, in some embodiments, the absolute value of the voltage or current of the first driving signal C1increased by the control assembly82(i.e. the absolute value of the difference between the first driving signal C1and the first original intensity C01) is different from the absolute value of the voltage or current of the fourth driving signal C4increased by the control assembly82(i.e. the absolute value of the difference between the fourth driving signal C4and the fourth original intensity C04) in the third control mode. For example, as shown inFIG.8A, the absolute value of the voltage or current of the first driving signal C1increased by the control assembly82may be higher than the absolute value of the voltage or current of the fourth driving signal C4increased by the control assembly82. In other words, the absolute value of the difference between the first driving signal C1and the first original intensity C01may be higher than the absolute value of the difference between the fourth driving signal C4and the fourth original intensity C04. Thereby, when compared with the fourth driving unit44B which receives the fourth driving signal C4, the first driving unit42A which receives the first driving signal C1may generate a higher driving force to control the moving direction of the movable portion30.
In some embodiments, the absolute value of the difference between the first driving signal C1and the first original intensity C01is about twice the absolute value of the difference between the fourth driving signal C4and the fourth original intensity C04in the third control mode, but the present disclosure is not limited thereto. Therefore, the first driving element42may drive the movable portion30to move in the −X direction. Since the intensity of the driving signal of the second driving element44is reduced, the energy required in the third control mode may be reduced to save energy. In addition, the control assembly82further includes a fourth control mode for controlling the driving assembly40. The fourth control mode is substantially similar to the second control mode, so please refer back toFIG.7AandFIG.7B.FIG.7Ais a schematic view of the control signal C of the control assembly82in the fourth control mode, andFIG.7Bis a schematic view showing the tension differences of the first driving unit42A, the second driving unit42B, the third driving unit44A, and the fourth driving unit44B between the fourth control mode and the preparation mode inFIG.5. In the fourth control mode, the control assembly82controls the driving assembly40to move the movable portion30relative to the fixed portion F in a second target direction. Counterclockwise rotation is used as an example in this embodiment, but it is not limited thereto. In the second control mode, the control assembly82may control the driving assembly40according to the first position signal P1and the second position signal P2. In the fourth control mode, unlike the second control mode, the control assembly82not only controls the driving assembly40according to the position signal P (for example, including the first position signal P1and the second position signal P2), but also controls the driving assembly40according to the environmental signal E.
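The "about twice" relations stated for the third control mode can be collected into a small numeric sketch. The 2:1 ratio between the increase of C1 and the decreases of C3 and C4 follows the embodiment above; the size of the decrease of C2 (here also `delta`) is an assumption, since the text only requires that C2 drop below its original intensity.

```python
def third_mode_intensities(c01, c02, c03, c04, delta):
    """Adjust the four driving-signal intensities for the third control
    mode (translation in the -X direction). `delta` is the increase
    applied to C1; C3 and C4 drop by about half of it, per the
    embodiment, and the size of the drop of C2 is assumed."""
    return (c01 + delta,      # C1: raised above its original intensity
            c02 - delta,      # C2: lowered (amount not specified in text)
            c03 - delta / 2,  # C3: lowered by about half the C1 increase
            c04 - delta / 2)  # C4: lowered by about half the C1 increase
```

With equal original intensities, the result reproduces the pattern of FIG.8A: C1 above its original level and C2, C3, C4 below theirs, with the C1 excursion twice the C3 and C4 excursions.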
Thereby, the influence of the environment on the driving mechanism1may be reduced, and optical image stabilization may be achieved by rotational movement. As shown inFIG.7A, during the fourth control mode, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4have intensities (such as voltage or current) higher than zero. At this time, the intensity of the first driving signal C1is higher than the first original intensity C01, the intensity of the second driving signal C2is higher than the second original intensity C02, the intensity of the third driving signal C3is less than the third original intensity C03, and the intensity of the fourth driving signal C4is less than the fourth original intensity C04. In other words, when compared to the preparation mode, the control assembly82increases the voltage or current of the first driving signal C1, increases the voltage or current of the second driving signal C2, decreases the voltage or current of the third driving signal C3, and decreases the voltage or current of the fourth driving signal C4in the fourth control mode. As a result, as shown inFIG.7B, when compared to the preparation mode, the tension of the first driving unit42A increases, the tension of the second driving unit42B increases, the tension of the third driving unit44A decreases, and the tension of the fourth driving unit44B decreases in the fourth control mode, so that the movable portion30may be driven to rotate in the counterclockwise direction. In the fourth control mode, the control signal C (including the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4) does not include a periodic signal with a frequency higher than the maximum frequency (for example, 10000 Hz). In this way, the elements of the driving mechanism1may be prevented from being interfered with by high-frequency signals.
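The direction of each signal adjustment in the second, third, and fourth control modes, spread over the preceding paragraphs, can be collected into one lookup. The sign patterns come directly from the text (the fourth mode repeats the second mode's pattern but additionally uses the environmental signal); the uniform step size below is an illustrative assumption, since the embodiments use unequal amounts.

```python
# +1: raise the signal above its original intensity; -1: lower it.
# Order of entries: (C1, C2, C3, C4).
ADJUST_SIGNS = {
    "second": (+1, +1, -1, -1),  # rotate the movable portion counterclockwise
    "third":  (+1, -1, -1, -1),  # translate the movable portion in -X
    "fourth": (+1, +1, -1, -1),  # counterclockwise rotation with OIS
}

def adjusted(mode, originals, step):
    """Apply a uniform step in the per-mode direction pattern (the
    uniform step size is an assumption for illustration)."""
    return tuple(c + s * step for c, s in zip(originals, ADJUST_SIGNS[mode]))
```

The lookup makes the contrast between the modes explicit: rotation raises both units of the first driving element, while translation raises only one of them.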
In some embodiments, as shown inFIG.7A, in the fourth control mode, the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4only have a DC voltage or DC current rather than AC voltage or AC current. In other words, in the fourth control mode, the intensities of the first driving signal C1, the second driving signal C2, the third driving signal C3, and the fourth driving signal C4are substantially constant. In this way, the elements of the driving mechanism1may be protected from interference by signals of excessively high frequency. FIG.9Ais a schematic view of an optical assembly100A according to some embodiments of the present disclosure, andFIG.9Bis a top view of some elements of the optical assembly100A. The portions of the optical assembly100A that are similar to the aforementioned optical assembly100will not be repeated here. It should be noted that the first reference element56, the second reference element58and the second inner driving element160of the optical assembly100A are separate elements. For example, the first reference element56and the second reference element58of the optical assembly100A may be disposed on the base120and separated from the inner driving element160. In this way, the distance between the first reference element56and the first position sensing element52and the distance between the second reference element58and the second position sensing element54may be reduced to improve the performance of the sensors. In some embodiments, as shown inFIG.9B, when viewed along the main axis O, the inner fixed portion IF (e.g. the base120) has a polygonal structure, and the inner driving assembly ID (e.g. the second inner driving element160) is positioned at a first side (e.g. the lower side) of the inner fixed portion IF, and the first position sensing element52is also positioned at the first side.
In addition, when viewed along the main axis O, the inner driving assembly ID at least partially overlaps the first position sensing element52to reduce the size of the driving mechanism1in other directions, thereby achieving miniaturization. FIG.10Ais a schematic view of an optical assembly100B according to some embodiments of the present disclosure, andFIG.10Bis a top view of some elements of the optical assembly100B. The portions of the optical assembly100B that are similar to the aforementioned optical assembly100will not be repeated here. It should be noted that when viewed along the main axis O, the inner fixed portion IF (base120) has a polygonal structure, and the optical assembly100B has a second inner driving element162disposed at the corner. The first position sensing element52and the first reference element56are positioned at the first side of the base120(e.g. the lower side), and the second position sensing element54and the second reference element58are positioned at the second side of the base120(e.g. left side). Thereby, the first position sensing element52and the second position sensing element54may respectively sense the movement of the optical assembly100B in different directions. In addition, magnetic interference may be avoided to enhance the accuracy of sensing by positioning the second inner driving element162at the corner and the position sensing element50at the side. FIG.11Ais a schematic view of an optical assembly100C according to some embodiments of the present disclosure, andFIG.11Bis a top view of some elements of the optical assembly100C. The portions of the optical assembly100C that are similar to the aforementioned optical assembly100will not be repeated here. 
It should be noted that when viewed along the main axis O, the inner fixed portion IF has a polygonal structure, and the inner driving assembly ID2(including the first inner driving element142and the second inner driving element160) is positioned at the first side of the inner fixed portion IF (e.g. the left side), the first position sensing element52is positioned at the second side (e.g. the lower side), and the second position sensing element54is positioned at the first side. The second reference element58and the second inner driving element160are the same element, and the first reference element56and the second inner driving element160are disposed separately. In addition, as shown inFIG.11A, the second inner driving element160may include a multipolar magnet, and may have different magnetic pole directions. In the Z direction, the magnetic pole directions of the upper and lower sides of the second inner driving element160may be opposite. In the X direction, the magnetic pole directions of the left and right sides of the second inner driving element160may be opposite. At this time, the first inner driving element142may have a ring shape. Therefore, the driving force of the inner driving assembly ID2may be increased. FIG.12is a top view of some elements of the optical assembly100D according to some embodiments of the present disclosure. When viewed along the main axis O, the inner fixed portion IF has a polygonal structure, the inner driving assembly ID3(e.g. the second inner driving element160) is positioned at the first side of the inner fixed portion IF, the first position sensing element52and the first reference element56are positioned at a first corner of the inner fixed portion IF, and the second position sensing element54and the second reference element58are positioned at a second corner of the inner fixed portion IF.
Because the first position sensing element52and the second position sensing element54are positioned at different corners, the movement of the optical assembly100D in different directions may be detected. In addition, the first position sensing element52, the second position sensing element54, and the second inner driving element160are positioned at different positions (for example, they do not overlap each other in the Z direction), so the chance of magnetic interference between the elements may be decreased to increase the accuracy of sensing. FIG.13is a top view of some elements of the optical assembly100E according to some embodiments of the present disclosure. As shown inFIG.13, when viewed along the main axis O, the inner fixed portion IF has a polygonal structure, and the inner driving assembly (for example, the second inner driving elements162) is positioned at the first corner and the second corner of the inner fixed portion IF. The first position sensing element52is positioned at a first corner, and the second position sensing element54is positioned at a second corner. In other words, when viewed along the main axis O, the inner driving assembly ID (including the first inner driving element140and the second inner driving element162) at least partially overlaps the first position sensing element52. In this way, no additional first reference element56or second reference element58is required, and the second inner driving elements162at the first corner and the second corner are respectively used as the first reference element56and the second reference element58, thereby reducing the number of required elements and achieving miniaturization. FIG.14is a top view of some elements of the optical assembly100F according to some embodiments of the present disclosure. As shown inFIG.14, when viewed along the main axis O, the inner fixed portion IF has a polygonal structure, and the inner driving assembly ID (e.g.
the second inner driving element162) is positioned at a first corner of the inner fixed portion IF. The first position sensing element52and the first reference element56are positioned at a second corner, and the second position sensing element54and the second reference element58are positioned at a third corner. The first position sensing element52, the second position sensing element54, and the second inner driving element162are positioned at different corners of the inner fixed portion IF, so the magnetic interference between the elements may be reduced. In summary, a driving mechanism is provided. The driving mechanism includes a fixed portion, a movable portion, and a driving assembly. The movable portion is movably connected to the fixed portion. The driving assembly is used for driving the movable portion to move relative to the fixed portion. The driving assembly is driven by a control signal provided by a control assembly. The driving assembly includes shape memory alloy. Therefore, the control accuracy of the driving mechanism may be increased, and miniaturization may be achieved. Although embodiments of the present disclosure and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations may be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, and composition of matter, means, methods and steps described in the specification.
As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, and compositions of matter, means, methods, or steps. In addition, each claim constitutes a separate embodiment, and combinations of the various claims and embodiments are within the scope of the disclosure. | 56,291 |
11860514 | DETAILED DESCRIPTION A camera according to one or more embodiments of the present invention will now be described in detail with reference toFIGS.1to10. InFIGS.1to10, like reference numerals denote like or corresponding components. Such components will not be described repeatedly. InFIGS.1to10, the scale and dimensions of each component may be exaggerated, or one or more components may not be shown. FIG.1is a perspective view of a camera1according to one embodiment of the present invention. Although the camera1according to the present embodiment is a camera (instant camera) that uses a photographic film to be automatically developed after shooting, the present invention is also applicable to a camera other than such an instant camera. For ease of explanation in the present embodiment, the term front or frontward refers to the positive X-direction inFIG.1, and the term rear or rearward refers to the negative X-direction inFIG.1. As shown inFIG.1, the camera1includes a front cover2, a rear cover3attached to the rear of the front cover2, a top cover4sandwiched between the front cover2and the rear cover3, and a lens barrel10accommodated in a cylindrical portion2A of the front cover2. The front cover2has a viewfinder5. A flash window6is located adjacent to the viewfinder5. A release button7is located in the negative Z-direction from the viewfinder5. The top cover4has an ejection slit4A extending in Y-direction, from which a photographic film developed after shooting is ejected. FIG.2is an exploded perspective view of some of the components accommodated in an internal space defined by the front cover2, the rear cover3, and the top cover4. As shown inFIG.2, the camera1includes a frame20including a film compartment21and a cylindrical barrel22. The film compartment21has an accommodating space S in which a photographic film is placed. The barrel22is attached to the front (positive X-direction) of the film compartment21. The barrel22has threaded holes22A. 
The film compartment21also has threaded holes21A. The barrel22is attached to the film compartment21with screws (not shown) screwed into the threaded holes22A and21A to form the frame20. The barrel22attached to the film compartment21extends frontward (positive X-direction) from the film compartment21and holds the lens barrel10inside. The lens barrel10in the present embodiment is extendable in the positive X-direction.FIG.3is a perspective view of the lens barrel10extending to its maximum length in the positive X-direction. As shown inFIG.3, the lens barrel10includes a first cylinder11, a second cylinder12, and a third cylinder13. The first cylinder11is movable in X-direction relative to the barrel22in the frame20. The second cylinder12is movable in X-direction relative to the first cylinder11. The third cylinder13is movable in X-direction relative to the second cylinder12. In the present embodiment, the first cylinder11and the second cylinder12are included in one lens barrel unit (rear lens barrel unit), and the third cylinder13is included in another lens barrel unit (front lens barrel unit). As shown inFIG.3, the lens barrel10includes, on the rear edge of the first cylinder11, two engagement protrusions14protruding radially outward. As shown inFIG.2, the barrel22in the frame20has two guide grooves24corresponding to the engagement protrusions14and extending in the direction of an optical axis P (X-direction). Each engagement protrusion14on the lens barrel10is received in and engaged with the guide groove24on the barrel22. The guide grooves24each have a width in Z-direction slightly larger than the width of the engagement protrusion14in Z-direction. The engagement protrusions14are thus movable in the direction of the optical axis P (X-direction) in the guide grooves24while being guided by the guide grooves24. As shown inFIG.2, the barrel22in the frame20includes an operation button8urged in the positive X-direction. 
As shown inFIG.1, the operation button8protrudes from the front cover2in the positive X-direction near the cylindrical portion2A of the front cover2. A user can depress the operation button8in the negative X-direction. When the user depresses the operation button8in the negative X-direction, the engagement protrusion14on the lens barrel10is pushed by a lens barrel extension assembly9attached to the barrel22in the positive X-direction, causing the lens barrel10to extend in the positive X-direction as shown inFIG.4. In response to the lens barrel10protruding from the barrel22in the positive X-direction, a switch assembly (not shown) turns on the camera1. The structure of the lens barrel extension assembly9is not limited to a specific structure. The lens barrel extension assembly9may have any structure that can extend the lens barrel10in the positive X-direction. In the present embodiment, the second cylinder12is moved by a moving assembly (not shown) in the positive X-direction relative to the first cylinder11as the first cylinder11moves in the positive X-direction relative to the barrel22. Thus, as shown inFIG.4, the second cylinder12extends from the first cylinder11in the positive X-direction. In this state, the user can perform a normal photographing operation. This state is hereafter referred to as a first photographing state. In the present embodiment, the user can manually pull the third cylinder13from the second cylinder12further in the positive X-direction in the state shown inFIG.4. The third cylinder13, which is pulled from the second cylinder12further in the positive X-direction, extends from the second cylinder12in the positive X-direction as shown inFIG.5. In this state, the user can perform, for example, a short-distance macro photographing operation. This state is hereafter referred to as a second photographing state. FIG.1shows the lens barrel10accommodated in the barrel22in the frame20. 
The lens barrel10has a minimum length in X-direction in this state. This state is hereafter referred to as a retracted state.FIG.6is a partial longitudinal cross-sectional view of the camera1in the retracted state. As shown inFIG.6, the third cylinder13accommodates a barrier131, a first lens132, an aperture133, and a second lens134in this order from a subject (in the positive X-direction). A photographic film F is placed at a predetermined position in the accommodating space S in the film compartment21. As shown inFIG.6, a rear bellows50is located between the first cylinder11in the lens barrel10and the film compartment21in the frame20. The rear bellows50is formed from a flexible material such as rubber and expands and contracts as the first cylinder11moves relative to the barrel22. As shown inFIG.2, the rear bellows50has a rectangular opening T inside. The opening T gradually enlarges from the first cylinder11toward the frame20. The rear bellows50includes, on its rear end, a rectangular rear connector51flaring outward. The rear connector51is held between the film compartment21and the barrel22as shown inFIG.6. FIG.7is an exploded perspective view describing attachment of the rear bellows50. As shown inFIGS.2and7, the rear bellows50includes, on its front end, a rectangular front connector52extending inward. The front connector52has multiple threaded holes52A. As shown inFIG.7, the first cylinder11in the lens barrel10includes a rectangular frame-shaped connecting flange41(first connecting flange) in its rear end. The connecting flange41has threaded holes41A corresponding to the threaded holes52A in the front connector52in the rear bellows50. A rectangular frame-shaped attachment plate55is located behind the front connector52in the rear bellows50(in the negative X-direction). The attachment plate55also has threaded holes55A corresponding to the threaded holes52A in the front connector52in the rear bellows50. 
Screws56are screwed into the threaded holes55A in the attachment plate55, the threaded holes52A in the front connector52in the rear bellows50, and the threaded holes41A in the connecting flange41in the first cylinder11to hold the front connector52in the rear bellows50between the attachment plate55and the connecting flange41in the first cylinder11. As shown inFIG.6, a front bellows60is located between the first cylinder11and the third cylinder13in the lens barrel10. The front bellows60is formed from a flexible material such as rubber and expands and contracts as the third cylinder13moves relative to the first cylinder11. FIG.8is an exploded perspective view describing attachment of the front bellows60. As shown inFIG.8, the front bellows60has a rectangular opening U inside. The opening U gradually enlarges from the third cylinder13toward the first cylinder11. As shown inFIG.8, the front bellows60includes, on its rear end, a rectangular rear connector61flaring outward. A rectangular frame-shaped attachment plate65is located in front of the rear connector61in the front bellows60(in the positive X-direction). The attachment plate65has multiple threaded holes65A. The connecting flange41in the first cylinder11described above has threaded holes42A corresponding to the threaded holes65A in the attachment plate65. Screws66are screwed into the threaded holes65A in the attachment plate65and the threaded holes42A in the connecting flange41in the first cylinder11from one end of the connecting flange41facing the third cylinder13to hold the rear connector61in the front bellows60between the attachment plate65and the connecting flange41in the first cylinder11. As shown inFIG.8, the front bellows60includes, on its front end, a rectangular front connector62extending inward. The front connector62has multiple threaded holes62A. The third cylinder13in the lens barrel10includes a rectangular frame-shaped connecting flange31(second connecting flange) on its rear end. 
The connecting flange31has threaded holes31A corresponding to the threaded holes62A in the front connector62in the front bellows60. The connecting flange31may be integral with the third cylinder13or may be a separate component from the third cylinder13. A rectangular frame-shaped attachment plate67is located behind the front connector62in the front bellows60(in the negative X-direction). The attachment plate67also has threaded holes67A corresponding to the threaded holes62A in the front connector62in the front bellows60. Screws68are screwed into the threaded holes67A in the attachment plate67, the threaded holes62A in the front connector62in the front bellows60, and the threaded holes31A in the connecting flange31on the third cylinder13to hold the front connector62in the front bellows60between the attachment plate67and the connecting flange31on the third cylinder13. In the retracted state shown inFIG.6, the first cylinder11and the second cylinder12in the lens barrel10are accommodated radially inside the barrel22, and the third cylinder13is accommodated radially inside the second cylinder12. The first cylinder11is at its farthest extent in the negative X-direction. The rear bellows50is contracted in X-direction. The third cylinder13is nearest the first cylinder11in X-direction. The front bellows60is contracted in X-direction. In this state, the camera1is not turned on, and photographing with the camera1is not performed. FIG.9is a partial longitudinal cross-sectional view of the camera1in the first photographing state shown inFIG.4. As described above, when the user depresses the operation button8in the retracted state shown inFIG.6, the first cylinder11in the lens barrel10extends in the positive X-direction relative to the barrel22, and the second cylinder12is extended by the moving assembly (not shown) in the positive X-direction relative to the first cylinder11, causing the camera1to be in the first photographing state shown inFIG.9. 
In the first photographing state, the third cylinder13in the lens barrel10is accommodated radially inside the second cylinder12. In the first photographing state, the first cylinder11in the lens barrel10extends in the positive X-direction relative to the barrel22, and the rear bellows50is thus expanded. The rear bellows50is outside a light beam B1projected from the second lens134onto the photographic film F and thus avoids being captured in an image projected onto the photographic film F. In the present embodiment, the second cylinder12in the lens barrel10extends in the positive X-direction relative to the first cylinder11in the first photographing state. The front bellows60is thus expanded slightly further than in the retracted state shown inFIG.6. The front bellows60is outside the light beam B1projected from the second lens134onto the photographic film F and thus avoids being captured in an image projected onto the photographic film F. In this manner, in the first photographing state, both the rear bellows50and the front bellows60are outside the light beam B1projected from the second lens134onto the photographic film F and avoid being captured in an image projected onto the photographic film F. FIG.10is a partial longitudinal cross-sectional view of the camera1in the second photographing state shown inFIG.5. When the user manually pulls the third cylinder13from the second cylinder12in the positive X-direction in the first photographing state shown inFIG.9, the camera1enters the second photographing state shown inFIG.10. In the second photographing state, the third cylinder13in the lens barrel10in the first photographing state described above extends in the positive X-direction relative to the second cylinder12. The front bellows60is thus expanded further than in the first photographing state shown inFIG.9. 
The front bellows 60 is outside a light beam B2 projected from the second lens 134 onto the photographic film F and thus avoids being captured in an image projected onto the photographic film F. In this manner, in the second photographing state as well, both the rear bellows 50 and the front bellows 60 are outside the light beam B2 projected from the second lens 134 onto the photographic film F and avoid being captured in an image projected onto the photographic film F. As described above, in the present embodiment, the rear bellows 50 connecting the frame 20 and the first cylinder 11 in the lens barrel 10 and the front bellows 60 connecting the first cylinder 11 and the third cylinder 13 in the lens barrel 10 are included. Thus, both in the first photographing state shown in FIG. 9 and in the second photographing state shown in FIG. 10, the rear bellows 50 and the front bellows 60 are outside the light beams B1 and B2 projected from the second lens 134 onto the photographic film F and avoid being captured in an image projected onto the photographic film F. Thus, when the positions of the lenses 132 and 134 are changed, photographing without vignetting can be performed with the rear bellows 50 and the front bellows 60 preventing entry of light and dust. This allows two different photographing (e.g., normal photographing and macro photographing) operations with the lenses 132 and 134 at different positions. In the above embodiment, the rear bellows 50 and the front bellows 60 are both connected to the single connecting flange 41 in the rear end of the first cylinder 11 in the lens barrel 10. This structure includes fewer components to reduce cost and also downsizes the camera 1. Although the rear lens barrel unit includes the first cylinder 11 and the second cylinder 12 in the lens barrel 10 in the above embodiment, the rear lens barrel unit may include a single cylinder or three or more cylinders.
In the structure including the rear lens barrel unit that includes the first cylinder 11 and the second cylinder 12 in the lens barrel 10 as in the present embodiment, the second cylinder 12 is accommodated radially inside the first cylinder 11 in the retracted state. The camera 1 can thus be thinner in the optical axis direction. Although the front lens barrel unit includes the third cylinder 13 in the lens barrel 10 in the above embodiment, the front lens barrel unit may include two or more cylinders. The terms front, frontward, rear, rearward, up, upward, down, downward, and other terms used herein to indicate the positional relationships are used in connection with the illustrated embodiment and are thus changeable depending on the relative positional relationship in the device. Although the embodiments of the present invention have been described above, the present invention is not limited to the above embodiments and may be modified variously within the scope of its technical idea. As described above, a camera according to the above aspects of the present invention can perform two different photographing operations with a lens at different positions without vignetting while preventing entry of light and dust. The camera includes a frame including a film compartment to contain a photographic film and a barrel extending from the film compartment in an optical axis direction, and a lens barrel movable in the barrel in the optical axis direction. The lens barrel includes a rear lens barrel unit accommodated radially inside the barrel and movable frontward relative to the barrel, and a front lens barrel unit accommodated radially inside the rear lens barrel unit and movable frontward relative to the rear lens barrel unit. The front lens barrel unit accommodates at least one lens. The camera further includes a rear bellows connecting the frame and the rear lens barrel unit, and a front bellows connecting the rear lens barrel unit and the front lens barrel unit.
The lens barrel has a retracted state in which the rear lens barrel unit is accommodated radially inside the barrel, and the front lens barrel unit is accommodated radially inside the rear lens barrel unit, a first photographing state in which the rear lens barrel unit extends frontward relative to the barrel, and the front lens barrel unit is accommodated radially inside the rear lens barrel unit, and a second photographing state in which the rear lens barrel unit extends frontward relative to the barrel, and the front lens barrel unit extends frontward relative to the rear lens barrel unit. In the first photographing state, the rear bellows is expanded and is outside a light beam projected from the at least one lens onto the photographic film, and the front bellows is at least partially contracted and is outside the light beam projected from the at least one lens onto the photographic film. In the retracted state, both the front bellows and the rear bellows may be at least partially contracted. This structure includes the rear bellows connecting the frame and the rear lens barrel unit and the front bellows connecting the rear lens barrel unit and the front lens barrel unit. Thus, both in the first photographing state and in the second photographing state, the rear bellows and the front bellows are outside the light beams projected from the lens onto the photographic film and avoid being captured in an image projected onto the photographic film. Thus, when the position of the lens is changed, photographing without vignetting can be performed with the rear bellows and the front bellows preventing entry of light and dust. This allows two different photographing (e.g., normal photographing and macro photographing) operations with the lens at different positions without vignetting. The rear lens barrel unit may include a first cylinder and a second cylinder accommodated radially inside the first cylinder and movable frontward relative to the first cylinder. 
In this case, the front lens barrel unit may include a third cylinder that is accommodated radially inside the second cylinder in the rear lens barrel unit in the first photographing state and extends frontward relative to the second cylinder in the rear lens barrel unit in the second photographing state. The camera with this structure can be thinner in the optical axis direction in the retracted state. The rear lens barrel unit may include a first connecting flange to which the rear bellows and the front bellows are connected. This structure in which the rear bellows and the front bellows are connected to the single connecting flange includes fewer components to reduce cost and also downsizes the camera. The front lens barrel unit may include a second connecting flange to which the front bellows is connected. The camera according to the above aspects of the present invention includes the rear bellows connecting the frame and the rear lens barrel unit and the front bellows connecting the rear lens barrel unit and the front lens barrel unit. Thus, both in the first photographing state and in the second photographing state, the rear bellows and the front bellows are outside the light beams projected from the lens onto the photographic film and avoid being captured in an image projected onto the photographic film. Thus, when the position of the lens is changed, photographing without vignetting can be performed with the rear bellows and the front bellows preventing entry of light and dust. This allows two different photographing (e.g., normal photographing and macro photographing) operations with the lens at different positions without vignetting. This application claims priority to Japanese Patent Application No. 2019-228954 filed on Dec. 19, 2019, the entire disclosure of which is incorporated herein by reference. 
INDUSTRIAL APPLICABILITY The camera according to one or more embodiments of the present invention is suitably used as a camera that allows a lens barrel to extend with a flexible bellows.
11860515
DETAILED DESCRIPTION FIG. 3A shows in perspective an embodiment of a folded Tele camera with an optical lens module having an adaptive aperture (AA) disclosed herein and numbered 300. Camera 300 may include some elements similar to elements in camera 200, for example an OPFE, an optical lens module and an image sensor, which are therefore numbered with the same numerals as in FIG. 2A. In contrast with camera 200 and in addition, camera 300 comprises an AA 302 located between OPFE 204 and optical lens module 206 and an adaptive aperture forming mechanism ("AA forming mechanism" or simply "AA mechanism") 310. In some embodiments, AA 302 is positioned close to native aperture 212 (i.e. external and close to a front panel 216 of an optical module housing 214), for example at a distance close enough to prevent stray light from entering the lens module. In some embodiments, the AA may be a part of (integral with) the lens module. In some embodiments, the AA may be attached physically to the lens module. Adaptive apertures and AA mechanisms like 310 are characterized in that: a) when fully open, the AA does not limit the native aperture, and b) AA mechanism 310 does not increase a total folded Tele camera module height HM (shown in the Y direction). FIG. 3B shows a perspective view of AA 302 and optical lens module 206 in an open state or position, where AA 302 corresponds to native aperture 212. FIG. 3C shows the same in a front view. AA mechanism 310 comprises six blades 304a, 304b, 306a, 306b, 308a and 308b, divided into left hand blades (304a, 306a and 308a) and right hand blades (304b, 306b and 308b), and one or more actuators (see e.g. 714 in FIG. 7) and position sensors (not shown). The blades can slide inside respective sliding rails (recesses), e.g. in a linear movement. Thus, blade 308a can slide in rails 312a and blade 308b can slide in rails 312b, blade 306a can slide in rails 314a and blade 306b can slide in rails 314b, blade 304a can slide in rails 316a and blade 304b can slide in rails 316b.
The blades may be part of an actuator (not shown here). A pair of blades can be referred to by a single number. That is, blades 304a and 304b can be referred to as "blades 304", blades 306a and 306b can be referred to as "blades 306" and blades 308a and 308b can be referred to as "blades 308". A height HAA of AA mechanism 310 does not exceed a total folded Tele camera module height HM. Mechanism 310 supports opening the AA to a size that is larger than the size of native lens aperture 212, so that, when it is open widely, AA mechanism 310 does not block light that would have otherwise (had the AA mechanism not been included in the Tele camera) reached native lens aperture 212. This property allows setting the adaptive aperture 302 to a large size in order to fully utilize the native Tele lens aperture size, in case it is important to collect as much light as possible, or in case a very shallow DOF is desired. Blades 304, 306 and 308 each have an open state and a closed state. Blades 304 have to be closed in order to effectively close blades 306, and blades 306 have to be closed in order to effectively close blades 308, i.e. the overlapping of the blades underlies the functionality of AA mechanism 310. FIG. 3D shows a more detailed perspective view of adaptive aperture 302 and optical lens module 206 of camera 300 in a first closed state, different from the one in FIGS. 3A and 3B. FIG. 3E shows the same in a front view. In these figures, blades 304a and 304b are closed while other blades, such as blades 306 and 308, are open. The folded Tele lens has an adaptive Tele aperture 302 that is rotationally symmetric. The aperture of the folded Tele lens with adaptive aperture 302 and with blades 304 closed is smaller than the native Tele lens aperture 212, corresponding to a lower amount of light reaching the sensor and a deeper DOF than in the case of native Tele lens aperture 212. In an example, a stroke of the linear movement of each of the blades 304a and 304b for forming a first closed state may be in the range of 0.1 mm to 2 mm.
FIG. 3F shows optical lens module 206 in a second closed state, with blades 306a and 306b (as well as 304a and 304b) closed. FIG. 3G shows the same in a front view. Here, the size of AA 302 is smaller than in the case of FIG. 3D, and the AA is rotationally symmetric. In an example, a stroke of the linear movement of each of the blades 306a and 306b for forming a second closed state may be in the range of 0.3 mm to 2.5 mm. FIG. 3H shows optical lens module 206 in a third closed state, with blades 308a and 308b (as well as 304a, 304b, 306a and 306b) closed. FIG. 3I shows the same in a front view. Here, the size of the AA is even smaller than in the case of FIG. 3F, and the AA is rotationally symmetric. The case shown in FIGS. 3H and 3I (with three blades of varying size) provides the lowest amount of light and the deepest DOF that can be adapted by this design. In an example, a stroke of the linear movement of each of the blades 308a and 308b for forming a third closed state may be in the range of 0.5 mm to 4 mm. FIG. 3J shows in perspective view another embodiment of an optical lens module 206 with an AA mechanism 310′. FIG. 3K shows the same in a front view. AA mechanism 310′ comprises six blades 304′a, 304′b, 306′a, 306′b, 308′a and 308′b, divided into left hand blades (304′a, 306′a and 308′a) and right hand blades (304′b, 306′b and 308′b), and one or more actuators (see e.g. 714 in FIG. 7) and position sensors (not shown). The functionality is identical to what is shown in FIG. 3B to FIG. 3I. For the sake of illustration, the blades are in an intermediate state, which is not desired for photography. Here AA mechanism 310′ supports the formation of the adaptive aperture such that: 1) when fully open, the adaptive aperture does not limit the native aperture, 2) the adaptive aperture does not increase a total folded Tele camera module height HM, and 3) a width WAA of AA mechanism 310′ does not increase a total folded Tele camera module width WM, i.e. WAA ≤ WM.
The design shown in FIGS. 3A-3H allows for four different, discrete adaptive aperture sizes formed by overlapping blades. FIG. 4A shows a front view of another embodiment of an adaptive aperture, numbered 402, together with optical lens module 206 in an open state. FIG. 4B shows the embodiment of FIG. 4A in a perspective view, showing also image sensor 210. "Open state" means here that the adaptive aperture 402 has the same size as the native aperture 212. An adaptive aperture forming mechanism 410 comprises only one blade pair 404a and 404b designed to form a semi-elliptic shape that corresponds to the non-symmetrical width and height of the native Tele lens aperture, as well as an actuator (see FIG. 7). Blades 404a and 404b move linearly inside, respectively, rails 414a and 414b. In this embodiment, the rails are external to front panel 216 of optical module housing 214. FIG. 4C shows the embodiment of FIG. 4A and FIG. 4B with blades 404a and 404b partly closed in a first closed position. In this embodiment, the adaptive Tele aperture is non-rotationally symmetric. The semi-elliptic shape of the resulting aperture is retained when the adaptive aperture is in a different "closed" position but not fully closed as in FIG. 4D, as long as the adaptive Tele aperture width is larger than the native Tele lens aperture height. FIG. 4D shows the embodiment of FIG. 4A and FIG. 4B with blades 404a and 404b in a second closed position more closed than the first closed position. The blades close in a way that forms a rotationally symmetric, round aperture shape. FIG. 4E shows the embodiment of FIG. 4A and FIG. 4B with blades 404a and 404b in a third closed position more closed than the second closed position. In this embodiment, a folded Tele camera with a faceted folded Tele lens has an adaptive Tele aperture that is non-rotationally symmetric. The design shown in FIGS. 4A-4E allows for continuously controlling the adaptive aperture size by linear actuation of the blades.
In an example, a stroke of the linear actuation of each of the blades 404a and 404b to form adaptive apertures as shown here may be more than 0.1 mm and less than 4 mm. FIG. 5A shows a perspective view of yet another embodiment of an optical lens module with a cut lens design, with an adaptive aperture 502 in an open state or position. Image sensor 210 is also shown. Here, an AA forming mechanism 510 comprises (like AA 302) six blades 504a,b, 506a,b and 508a,b, divided into left (a) and right (b) blades, and one or more actuators (see e.g. 714 in FIG. 7) and position sensors (not shown). FIG. 5B shows the embodiment of FIG. 5A in a first closed state, with blades 504a and 504b closed. In this embodiment, adaptive Tele aperture 502 is rectangular. The folded Tele lens has a smaller aperture than native Tele lens aperture 212, corresponding to a lower amount of light reaching the sensor and a deeper DOF than in the case of native Tele lens aperture 212. In an example, a stroke of the linear movement of the blades 504a and 504b for forming a first closed state may be in the range of 0.1 mm to 2 mm. FIG. 5C shows adaptive aperture 502 in a second closed state, with blades 506a and 506b closed. In this case, the folded Tele lens has a smaller aperture than in the case of FIG. 5B. FIG. 5D shows the embodiment of FIG. 5A in a third closed state, with blades 508a and 508b closed. As above, aperture 502 is rectangular and the adaptive aperture is smaller than in the case of FIG. 5C. For the embodiment shown here (with three blades of varying size), this is the lowest amount of light and the deepest depth of field that can be adapted. In an example, a stroke of the linear movement of the blades 508a and 508b for forming a third closed state may be in the range of 0.5 mm to 4 mm. In another embodiment, the rectangular shape may form a square aperture (not shown), i.e. an aperture with identical height and width. The design shown in FIGS. 5A-5D allows for four different, discrete adaptive aperture sizes formed by overlapping blades.
FIG. 6A shows a perspective view and a front view of yet another embodiment of an optical lens module 206 with a cut lens design, with an adaptive aperture 602. Image sensor 210 is also shown. An AA forming mechanism 610 comprises only one pair of blades 604a and 604b, which in FIG. 6A are in an open position. An actuator (not shown) can move the blade pair 604a and 604b in a continuous manner, so that the AA mechanism supports opening and closing the adaptive Tele aperture with the properties that: 1) when fully open, adaptive Tele aperture 602 corresponds to native Tele lens aperture 212, and 2) AA mechanism 610 does not increase the total folded Tele camera module height. FIG. 6B shows the embodiment of FIG. 6A with blades 604a and 604b in a first closed position more closed than in FIG. 6A. In this embodiment, the adaptive Tele aperture has a rectangular shape. FIG. 6C shows the embodiment of FIG. 6A with blades 604a and 604b in a second closed position more closed than the first closed position. The design shown in FIGS. 6A-6C allows for continuously controlling the AA size. In an example, a stroke of the linear actuation of blades 604a and 604b to form AAs as shown here may be less than 4 mm. FIG. 6D shows a perspective view and a front view of yet another embodiment of an optical lens module with a cut lens design, with an adaptive aperture 602. Image sensor 210 is also shown. An AA forming mechanism 610′ comprises one pair of blades 604′a and 604′b, both in an open position. FIG. 6E shows the embodiment of FIG. 6D with blades 604′a and 604′b in a first closed position. FIG. 6F shows a cross-sectional view of the embodiment shown in FIG. 6D and FIG. 6E. An actuator (not shown) can move blade pair 604′a and 604′b linearly and in a continuous manner inside rails 614′a and 614′b.
AA mechanism 610′ supports opening and closing of the AA with the properties that: 1) when fully open, adaptive Tele aperture 602 corresponds to the native Tele lens aperture 212; 2) AA mechanism 610′ does not increase the total folded Tele camera module height, HM; and 3) a width WAA of AA mechanism 610′ does not increase a total folded Tele camera module width WM, i.e. WAA ≤ WM. FIG. 7 shows schematically in a block diagram an embodiment of a system disclosed herein and numbered 750. System 750 comprises a folded Tele camera 700 with an image sensor 702, a lens module 704, an adaptive aperture 706 and an OPFE 708. An AA forming mechanism 710 comprises AA blades 712 (as shown e.g. in FIGS. 3-6) and one or more AA actuators 714. The AA actuator(s) is/are mechanically coupled to the AA blades and may be realized by deploying actuator technologies such as voice coil motor (VCM), stepper motor, or shape memory alloy (SMA) actuator technologies. Position sensors (e.g. Hall sensors, not shown in FIG. 7) may be part of the actuator. A human machine interface (HMI) 716 allows a human user to choose specific AA settings, which are passed as specific control commands to AA mechanism 710. In an embodiment, the human user may choose a specific imaging mode out of some possible imaging modes which are saved in a processing unit or "processor" 718 (e.g. a CPU or an application processor). In this case, processing unit 718 receives the human user input, optionally determines some optimized settings based on the human user input, and passes this information as specific control commands to AA mechanism 710. In another embodiment, processor 718 may determine optimized adaptive aperture settings e.g. based on the available scene information, on object detection algorithms, or on typical human user behavior, and pass this information as specific control commands to AA mechanism 710. System 750 may be included in an electronic mobile device (not shown) such as a smartphone.
The Tele camera may be included with one or more additional cameras in a multi-camera. The additional camera(s) may be a Wide camera having a diagonal FOV of e.g. 50-100 degrees and/or an Ultra-Wide camera having a diagonal FOV of e.g. 70-140 degrees and/or a Time-of-Flight (ToF) camera. To clarify, a multi-camera may include any combination of two or more cameras where one camera is the Tele camera. In some embodiments, one or more of the cameras may be capable of capturing image data that can be used to estimate a depth of scene or "scene depth". Scene depth refers to the respective object-lens distance (or "focus distance") between the objects within a scene and system 750. The scene depth may be represented by an RGB-D map, i.e. by a data array that assigns a particular depth value to each RGB pixel (or to each group of RGB pixels). In general, the pixel resolution of an RGB image is higher than the resolution of a depth map. Image data used for estimating scene depth may be for example:
- phase detection auto focus (PDAF) data, e.g. from the Tele camera or from an additional camera;
- stereo image data, e.g. from the Tele camera and from an additional camera;
- focus stacking visual image data;
- focus stacking PDAF data;
- visual image data from the Tele camera and/or from an additional camera (for estimating depth from defocus);
- visual image data from the Tele camera and/or from an additional camera (for estimating depth from motion);
- depth data from a ToF camera.
In some embodiments, scene depth may be provided by an application programming interface ("API"), e.g. Google's "Depth API". Knowledge of the scene depth may be desired because of the quadratic dependence of the DOF on the focus distance, i.e. on the depth of the object in focus. FIG. 8 presents a flow chart illustrating steps of a method performed in a folded Tele camera with adaptive aperture disclosed herein.
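The quadratic dependence of DOF on focus distance mentioned above can be illustrated with the common thin-lens approximation DOF ≈ 2u²Nc/f², valid when the focus distance u is much larger than the focal length f, with f-number N and circle of confusion c. This is a minimal sketch; the function name and all numerical values, including the circle-of-confusion diameter, are illustrative assumptions and are not taken from this disclosure:

```python
def depth_of_field_mm(focus_dist_mm, f_number, focal_mm, coc_mm=0.004):
    """Approximate total DOF in mm: DOF ~ 2 * u^2 * N * c / f^2.

    Valid when the focus distance u is much larger than the focal
    length f. coc_mm is an assumed circle-of-confusion diameter.
    """
    return 2 * focus_dist_mm**2 * f_number * coc_mm / focal_mm**2

# Doubling the focus distance roughly quadruples the DOF, which is
# why closer objects may call for a smaller aperture (larger N):
dof_near = depth_of_field_mm(500, 2.8, 15)   # object at 0.5 m
dof_far = depth_of_field_mm(1000, 2.8, 15)   # object at 1.0 m
print(dof_far / dof_near)  # 4.0
```

The u² term is the source of the quadratic dependence: every other factor cancels in the ratio between the two focus distances.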
In a scene sensing step 802, the camera's image sensors are used to detect the conditions and properties of a scene (e.g. lighting conditions, scene depth, visual content, etc.), which is done in pre-capture or preview mode. In some embodiments, additional sensor data (e.g. of ToF sensors, temperature sensors, humidity sensors, radar sensors etc.), e.g. of sensors present in the camera hosting device, may be read out in the scene sensing step 802. Data generated in step 802 is fed into a processor (e.g. CPU, application processor) where a scene evaluation step 804 is executed. In step 804, the data is evaluated with the goal of determining ideal settings for the adaptive aperture, given the input of the human user or a dedicated algorithm. The term "ideal settings" refers here to settings that provide a maximum degree of user experience, e.g. a high image quality, or a high uniformity along stitching borders of panorama images. In case the camera is operated in a mode highly reliant on automated image capturing, other steps may be performed besides sensor data evaluation. In some examples, ROIs and OOIs may be detected and automatically selected as focus targets by an algorithm in scene evaluation step 804. The ideal settings from step 804 are fed into an AA mechanism such as 710. The AA is set up according to these settings in an aperture adjustment step 806. The scene is then captured in a scene capture step 808. Steps 802 to 806 ensure improved user experience. In an example, processor 718 calculates control commands concerning the size of the adaptive Tele aperture based on Wide camera image information and/or Tele camera image information, while one or both cameras operate in preview and/or video recording mode. In another example, AA mechanism 710 receives, from the user or from an automated detection method, a desired ROI or OOI, for example where the Wide and Tele cameras are focused, or intend to focus.
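The sense-evaluate-adjust-capture sequence of steps 802-808 can be sketched as a simple control loop. All names, thresholds and return values below are illustrative assumptions for the sketch, not part of the disclosed system:

```python
from dataclasses import dataclass

@dataclass
class SceneInfo:
    lux: float            # estimated scene illumination (from step 802)
    focus_dist_mm: float  # depth of the focus target (from step 802)

def sense_scene() -> SceneInfo:
    """Step 802: read preview frames and auxiliary sensors (stubbed)."""
    return SceneInfo(lux=150.0, focus_dist_mm=800.0)

def evaluate_scene(info: SceneInfo) -> float:
    """Step 804: choose an f-number. A dim scene gets the native
    (largest) aperture to collect as much light as possible; a close
    focus target gets a smaller aperture to recover DOF. The thresholds
    and f-number values are illustrative assumptions."""
    if info.lux < 50.0:
        return 2.8  # native aperture for low light
    return 2.8 if info.focus_dist_mm > 1000.0 else 5.6

def adjust_aperture(f_number: float) -> None:
    """Step 806: command the AA mechanism (actuator call stubbed)."""
    print(f"setting adaptive aperture to f/{f_number}")

def capture_scene() -> str:
    """Step 808: trigger the exposure (stubbed)."""
    return "captured frame"

info = sense_scene()
adjust_aperture(evaluate_scene(info))
frame = capture_scene()
```

In a real system the stubs would be replaced by sensor read-out and actuator commands, and step 804 would weigh user input against the automated scene evaluation as described above.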
The processor 718 detects OOIs or ROIs (for example faces of persons) in a Wide camera image (or alternatively, receives information about OOIs or ROIs detected by another module) by means of dedicated algorithms, and estimates the relative or absolute distance between the objects, for example by comparing the size of faces or properties of landmarks in each face. The processor then calculates the desired aperture size to keep at least part of said objects of interest in focus, and submits these ideal aperture settings to AA mechanism 710, which configures the adaptive Tele aperture to this aperture size. In another example, control software running on processor 718 calculates a depth map of part of the scene (or alternatively, receives such a depth map calculated by another module), for example based on stereo information between a Wide camera image and a Tele camera image, or based on information from phase detection autofocus (PDAF) pixels in the Wide camera sensor, or based on a ToF camera. A dedicated algorithm running on processor 718 determines the required range of distances to be in focus from the depth map, and calculates the desired aperture size to keep at least some of the OOIs in focus. The information is transmitted to AA mechanism 710, which configures the adaptive Tele aperture to this aperture size. In yet another example, the software may take into account the light levels in the scene, by analyzing the Wide camera image and the Tele camera image (for example, by calculating a histogram of intensity levels), or by receiving an estimation of the illumination in the scene (for example, a LUX estimation, or the Wide sensor and/or Tele sensor analog gain), and calculates the ideal adaptive Tele aperture size based on the illumination estimation. In yet another example, the software may receive indications from the user (for example, by switching the camera between different imaging modes, e.g.
to a dedicated portrait mode or stitching mode, or by changing some parameter in the camera application) regarding the required DOF and aperture configuration, and may take this information into account to calculate ideal settings for the adaptive Tele aperture size to fulfill these requirements. In yet another example, with the folded Tele camera being a scanning folded camera with an adjustable FOV, when operating the camera in a scanning mode, i.e. capturing Tele camera images having different FOVs and stitching the Tele camera images together to create an image with a larger FOV (as e.g. for a high resolution panoramic image), for example as described in U.S. provisional patent application 63/026,097, software running on processor 718 determines the ideal adaptive Tele aperture size before scanning starts and updates this value throughout the scanning and capturing of the images to be stitched. This may be desired e.g. for achieving a similar DOF for all captured Tele images or to achieve similar lighting for all captured Tele images. In yet another example, when operating the camera in a scanning mode and stitching the Tele camera images together to create an image with a larger FOV, for example as described in PCT/IB2018/050988, software running on processor 718 determines the ideal AA in a way such that single Tele images captured with this AA have very similar optical Bokeh, leading to a stitched image with a larger FOV and a very uniform appearance in terms of Bokeh, including along single Tele image borders. In yet another example, for supplying an image with Wide camera FOV and Tele camera resolution for specific ROIs or OOIs, the ROIs and OOIs are captured by the Tele camera and these Tele images are stitched into the Wide camera image with large FOV.
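The aperture-size calculations in the examples above, which keep a required range of object distances in focus, can be sketched by solving the thin-lens DOF approximation DOF ≈ 2u²Nc/f² for the f-number N. The function name, the circle-of-confusion value and the clamping limits are assumptions for illustration, not values from this disclosure:

```python
def f_number_for_range_mm(near_mm, far_mm, focal_mm, coc_mm=0.004,
                          n_min=2.0, n_max=11.0):
    """Smallest f-number whose DOF spans [near_mm, far_mm]: take the
    focus distance u at the midpoint of the required range and solve
    DOF ~ 2 * u^2 * N * c / f^2 for N. The result is clamped to an
    assumed range of achievable aperture settings."""
    u = (near_mm + far_mm) / 2.0
    required_dof = far_mm - near_mm
    n = required_dof * focal_mm**2 / (2.0 * u**2 * coc_mm)
    return min(max(n, n_min), n_max)

# Two detected faces at 0.9 m and 1.1 m with a 15 mm lens:
n = f_number_for_range_mm(900.0, 1100.0, 15.0)
```

A wider required depth range or a closer midpoint both push N upward, matching the rule that closer objects get a smaller aperture setting.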
To supply a natural or seamless transition between the two images, software running on processor 718 determines the ideal AA size so that the optical Bokeh of the Tele image to be stitched is very similar to the optical Bokeh of the Wide image. In yet another example, the adaptive Tele aperture is modified by AA mechanism 710 between two consecutive Tele image captures (or between two Tele camera preview frames) to obtain two frames of largely the same scene with different depths of field and to estimate depth from the two images, for example by identifying features in one of these images that correspond to features in the other image, comparing the contrast in the local area of the image and, based on this, calculating relative depth for the image region. Relevant methods are discussed in "Elder, J. and Zucker, S. 1998. Local scale control for edge detection and blur estimation" and "Depth Estimation from Blur Estimation, Tim Zaman, 2012". In yet another example, software running on processor 718 may calculate the ideal AA settings from the distance between the camera and the object that the camera is focused on. For example, Hall sensors provide the information on the focus position. As DOF has a quadratic dependence on the focus distance, and in order to supply sufficient DOF in the image to be captured, the control software may assign a smaller AA setting to closer objects and a larger AA setting to objects farther away. In yet another example, the camera may be operated in the native aperture state for high quality Tele images in low light conditions. To achieve the DOF necessary for achieving a crisp appearance of a specific ROI or OOI, an image series may be taken, wherein the focus scans the necessary DOF range and an image is captured at each one of the different scan states, a technique known in the art as "focus stacking" to create a "focus stack".
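The two-capture depth estimation described above, which compares local contrast between frames taken at different apertures, can be sketched at the level of per-region sharpness scores. The scores are taken as given inputs here; the function name and the simple ratio heuristic are assumptions for illustration and are not the methods of the cited works:

```python
def relative_defocus(sharp_small_ap, sharp_large_ap, eps=1e-6):
    """For corresponding regions of two captures of the same scene,
    return the ratio of local sharpness at the smaller (deeper-DOF)
    aperture to sharpness at the larger (shallower-DOF) aperture.
    Regions near the focus plane keep their contrast in both frames
    (ratio near 1); regions far from it blur strongly at the larger
    aperture (high ratio). Inputs are lists of per-region scores."""
    return [s_small / (s_large + eps)
            for s_small, s_large in zip(sharp_small_ap, sharp_large_ap)]

# Region 0 stays sharp at the wide aperture (near the focus plane);
# region 1 loses most of its contrast (farther from the focus plane):
scores = relative_defocus([8.0, 8.0], [7.5, 1.0])
```

Ranking regions by this ratio gives a relative depth ordering with respect to the focus plane; an absolute depth would require calibrating the blur-versus-distance relationship for the specific lens.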
In a second (computational) step, the output image may be assembled by stitching the crisp segments of the ROI or OOI from the series of images in a way so that the entire ROI or OOI appears crisp. In some examples, focus stacking may also be used for estimating scene depth. In conclusion, adaptive apertures and methods of use described herein expand the capabilities of folded Tele cameras to control the amount of light reaching the Tele sensor and the DOF of the Tele image by adapting the camera's f-number. In particular, they provide solutions to problems of very shallow DOF, particularly in more severe cases, for example:
a) when using a scanning camera with a relatively long focal length (for example, the scanning camera in PCT/IB2016/057366);
b) when using a plurality of images captured by a scanning camera such as described in co-owned U.S. provisional patent application No. 63/026,097. For example, using a camera with the specifications of "camera 1" above for scanning and capturing a scene in the X and Y directions and stitching 9 images together may result in a FOV equivalent to that of a camera with a 10 mm EFL. This mix of a larger FOV with a very shallow DOF may result in a non-natural user experience (i.e. a user experience that is very different from that of using a single shot of a wide camera): objects at different distances from the camera will appear blurry over the stitched, larger FOV;
c) when using a Tele camera having an EFL > 10 mm and with the capability to focus on close objects ("Macro objects"), it may be desired to adapt the f/#, e.g. for achieving a higher DOF so that a larger part of a Macro object is in focus. Lens designs for such a Macro Tele camera are described in co-owned U.S. provisional patent application No. 63/070,501. Methods relating to such a Macro Tele camera are described in co-owned U.S. provisional patent application No.
63/032,576; and
d) when solving a focus miss that arises from the very shallow DOF associated with a long focal length folded Tele lens: when the autofocus engine moves the folded Tele lens for focus, a small mismatch in the position of the lens (for example, due to an error in the position sensing mechanism in a closed-loop autofocus actuator of the folded Tele lens) may result in a focus miss, i.e. the important object in the scene will not be in focus.
While the description above refers in detail to adaptive apertures for folded Tele lenses with a cut lens design, it is to be understood that the various embodiments of adaptive apertures and AA mechanisms therefor disclosed herein are not limited to cut lens designs. Adaptive apertures and AA mechanisms therefor disclosed herein may work with, and be applied to, non-cut lens designs (i.e. lenses without a cut). Unless otherwise stated, the use of the expression "and/or" between the last two members of a list of options for selection indicates that a selection of one or more of the listed options is appropriate and may be made. It should be understood that where the claims or specification refer to "a" or "an" element, such reference is not to be construed as there being only one of that element. All patents, patent applications and publications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual patent, patent application or publication was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present disclosure.
11860516 | DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well known methods, procedures, systems, components, and/or circuitry have been described at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an", and "the" may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprise", "comprises", and/or "comprising", "include", "includes", and/or "including", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that the terms "system," "unit," "module," and/or "block" used herein are one way to distinguish different components, elements, parts, sections or assemblies of different levels in ascending order. However, the terms may be replaced by another expression if they achieve the same purpose.
The modules (or units, blocks) described in the present disclosure may be implemented as software and/or hardware modules and may be stored in any type of non-transitory computer-readable medium or other storage device. In some embodiments, a software module may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules or from themselves, and/or can be invoked in response to detected events or interrupts. Software modules configured for execution on computing devices can be provided on a computer readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). Such software code can be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions can be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules (e.g., circuits) can be comprised of connected or coupled logic units, such as gates and flip-flops, and/or of programmable units, such as programmable gate arrays or processors. The modules or computing device functionality described herein are preferably implemented as hardware modules, but can be software modules as well. In general, the modules described herein refer to logical modules that can be combined with other modules or divided into units despite their physical organization or storage. Generally, the word "module," "sub-module," "unit," or "block," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions.
A module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or another storage device. In some embodiments, a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules/units/blocks configured for execution on computing devices may be provided on a computer-readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules/units/blocks may be comprised of connected logic components, such as gates and flip-flops, and/or of programmable units, such as programmable gate arrays or processors. The modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks, but may be represented in hardware or firmware as well. In general, the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage. The description may be applicable to a system, an engine, or a portion thereof.
It will be understood that when a unit, engine, module, or block is referred to as being "on," "connected to," or "coupled to" another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. These and other features and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. An aspect of the present disclosure may provide a surveillance device to improve the accuracy with which an intelligent parking lot management system monitors the parking space state, and to improve the working efficiency of the surveillance device. The surveillance device may include a connector, a support bar, and a mounting housing mounted with a camera lens. The mounting housing is integrally rotatable with the connector with respect to the support bar. Thus, the facing angle of the camera lens may be adjusted in a comparatively convenient manner by adjusting the relative position of the connector and the support bar.
Furthermore, the mounting housing may be mounted with an ultrasonic probe configured to detect a change of the occupation state of the parking space it is monitoring, and a supplement light to light up the parking space when the camera lens is working, allowing a more accurate determination of the occupation state of the parking space when the ambient light is relatively weak. Another aspect of the present disclosure may provide a camera with an opening formed in its housing. The facing angle of the camera lens located inside the housing may be adjusted in a relatively convenient manner through the opening. A further aspect of the present disclosure may provide a camera with a first camera module and a second camera module. The housings of the two camera modules share the same shaft axis, around which the two camera modules are respectively rotatable. Such a structure may allow the camera to monitor areas in different directions simultaneously.
1: support bar; 11: threaded hole; 2: mounting housing; 21: front mounting housing; 22: rear mounting housing; 23: seal diaphragm; 231: cable tie; 3: connector; 31: through-hole; 32: screw mounting hole; 4: camera lens; 5: supplement light; 6: ultrasonic probe; 7: indicator; 8: mount; 81: mounting through-hole; 10: camera cover; 120: camera housing; 1120: bottom wall; 20: transparent ring; 220: camera lens assembly; 30: transparent housing fixing base; 320: indicator board frame; 40: camera lens assembly; 420: transparent cover; 50: transparent lens housing; 520: lens window; 51: hook; 52: leading plane; 620: sliding rail; 61: first end of the sliding rail; 62: second end of the sliding rail; 63: grooved portion; 64: elastic arm; 720: lens assembly holder; 71: spherical cavity; 8120: bolt; 82: bolt; 91: fixing portion; 1030: camera cover; 130: camera module; 1130: housing; 13: lens window; 12: camera lens assembly; 111: bottom wall; 112: stopper; 14: fixing frame; 141: spherical cavity; 2030: transparent indicator ring; 230: adapter plate; 2130: opening; 2230: hook; 2330: through hole; 2430: limiting structure; 3030: transparent housing fixing base; 330: switching shaft; 31: through-hole; 32: screw mounting hole; 4030: camera lens assembly; 430: damping strip; 311: mounting hole; 321: ring groove; 322: mounting groove; 5030: transparent housing; 530: camera cover; 5130: fixing portion; 630: mounting plate; 730: indicator board; 6130: fixing nut; 830: indicator board frame; 930: transparent indicator cover.
Some embodiments of the present disclosure provide a surveillance device to improve the accuracy of an intelligent parking lot management system's determination of a parking space state, and with an improved efficiency. In some embodiments, as shown in FIG. 1 to FIG. 5, a surveillance device may include a support bar 1, a mounting housing 2 and a connector 3 for connecting the support bar 1 and the mounting housing 2. In some embodiments, the connector 3 may be fixed to the mounting housing 2. In some embodiments, the connector 3 may be connected with the support bar 1 through one or more screws fitting into one or more threaded holes 11 in an end of the support bar 1. The mounting housing 2 may be integrally rotatable with the connector 3 with respect to the support bar 1. In some embodiments, a plurality of threaded holes 11 may be evenly distributed on one end of the support bar 1 in the circumferential direction. That is, the plurality of threaded holes 11 are arranged around the support bar 1 with a fixed interval between every two threaded holes. In some embodiments, the facing direction of the mounting housing 2 may be adjusted to various directions. The angle differences between every two adjacent directions may be of the same interval. The mounting housing 2 may be located at one end of the support bar 1. The side of the support bar 1 that is close to the mounting housing 2 forms the plurality of threaded holes 11 thereon in the circumferential direction of the support bar 1. The connector 3 has at least one through-hole 31 facing at least one of the threaded holes 11.
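Because the threaded holes 11 are evenly spaced around the circumference, the coarse adjustment granularity of the facing direction is simply 360 degrees divided by the number of holes. A small illustrative sketch (the function name and hole count are hypothetical, not from the disclosure):

```python
def facing_directions(num_holes):
    """With num_holes threaded holes evenly spaced around the support bar,
    the connector can face one of num_holes discrete directions,
    360/num_holes degrees apart."""
    step = 360.0 / num_holes
    return [i * step for i in range(num_holes)]

# e.g. 8 evenly spaced holes give coarse adjustment steps of 45 degrees
```

Finer adjustment between these discrete directions is what the strip-type through-hole 31 described next provides.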
Referring to FIG. 5 and FIG. 6, the through-hole 31 may be a strip-type hole. When the connector 3 is mounted to the support bar 1, the strip-type hole extends along the circumferential direction of the support bar 1. When installing the connector 3, the strip-type hole may be engaged with the corresponding threaded hole 11 through an engaging mechanism, such as a screw, a bolt, or the like, or any combination thereof. For example, an engaging mechanism may pass through the strip-type hole and a threaded hole 11 successively, connecting the strip-type hole and the threaded hole 11. One may adjust the relative position of the strip-type hole and the corresponding threaded hole 11 before tightening the screw, in order to achieve a fine adjustment of the facing angle of the mounting housing 2 and the facing direction of the camera lens 4. In some embodiments, the strip-type hole may face two or more threaded holes 11 in the support bar 1, and the connector 3 may be fixed to the two or more threaded holes 11 by two or more screws, which may improve the mounting stability thereof. The mounting housing 2 may be mounted with an ultrasonic probe 6 for detecting whether a vehicle is entering or leaving the parking space. When the ultrasonic probe 6 detects that a vehicle may be entering or leaving the parking space, the camera lens 4 and the supplement light 5 may be switched on. In some embodiments, the ultrasonic probe 6, the camera lens 4, and/or the supplement light 5 (or referred to as a lighting device) may connect to a controller. In some embodiments, the controller connected to the ultrasonic probe 6, the camera lens 4, and/or the supplement light 5 is not specifically limited. For example, in some embodiments, the surveillance device may include at least one camera for monitoring vehicles in a parking space. The controller may be a chip installed in the at least one camera, or may be a control platform connected to the at least one camera.
The ultrasonic probe 6 may scan a certain position of the parking space. When the ultrasonic probe 6 detects that a vehicle is entering or leaving the parking space, the controller may switch on the camera lens 4 and the supplement light 5. The camera lens 4 may record vehicle information under the illumination of the supplement light 5. In one aspect, the ultrasonic probe 6 uses ultrasonic waves to monitor the state of the parking space, which is not affected by the ambient light, making the determination of the parking space state more accurate. In another aspect, the camera lens 4 and the supplement light 5 are switchable according to the change in the occupation state of the parking space monitored by the ultrasonic probe 6. When the ultrasonic probe 6 detects that a vehicle may be entering or leaving the parking space, the controller starts the camera lens 4 and the supplement light 5, both of which may otherwise be in a closed state, improving the efficiency of the surveillance device. The connector 3 may connect the mounting housing 2 to the support bar 1. The connector 3 may be disposed to face a certain threaded hole 11 on the support bar 1, according to specific application scenarios. By changing the threaded hole 11 which the connector 3 faces, the facing direction of the mounting housing 2, and thus the facing direction of the camera lens 4 and the ultrasonic probe 6 mounted on the mounting housing 2, may be adjusted. In some embodiments, because the facing direction of the camera lens may be adjusted after the installation of the support bar 1 is completed, the support bar 1 may be installed on the ground in a relatively simplified manner, without needing to consider whether the facing direction of the camera lens is in a pre-defined direction.
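The trigger logic described above (camera lens 4 and supplement light 5 stay off until the ultrasonic probe 6 reports a vehicle entering or leaving) can be sketched as a simple state holder. The class and method names here are illustrative assumptions, not part of the disclosure:

```python
class ParkingController:
    """Sketch of the controller behavior described above: the ultrasonic
    probe's entering/leaving event switches on the camera lens and the
    supplement light, which otherwise remain in a closed (off) state to
    improve the efficiency of the surveillance device."""

    def __init__(self):
        self.camera_on = False
        self.light_on = False

    def on_ultrasonic_event(self, vehicle_detected):
        # A detection means a vehicle is entering or leaving the space,
        # so record under supplement-light illumination; otherwise power down.
        self.camera_on = vehicle_detected
        self.light_on = vehicle_detected
        return self.camera_on, self.light_on
```

Since the ultrasonic measurement is independent of ambient light, this event source stays reliable at night, which is exactly why the text pairs it with the supplement light for the optical capture.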
In some embodiments, after the surveillance device is installed, whenever there is a need to adjust the facing direction of the camera lens 4, one can simply loosen the screw connecting the connector 3 and the support bar 1 and adjust the facing direction of the camera lens 4 accordingly. The adjustment process may be relatively convenient. Referring to FIG. 1 to FIG. 3, the mounting housing 2 includes a front mounting housing 21 (or referred to as a front housing) for mounting the camera lens 4, the supplement light 5 and the ultrasonic probe 6, a rear mounting housing 22 (or referred to as a rear housing), and a seal diaphragm 23 arranged between the front mounting housing 21 and the rear mounting housing 22. In some embodiments, a front seal washer (or referred to as a first seal washer) may be provided between the seal diaphragm 23 and the front mounting housing 21. In some embodiments, when the camera lens 4, the supplement light 5 and the ultrasonic probe 6 are mounted on the front mounting housing 21, and a front seal washer is provided between the seal diaphragm 23 and the front mounting housing 21, the camera lens 4, the supplement light 5 and the ultrasonic probe 6 may be sealed in the cavity formed between the front mounting housing 21 and the seal diaphragm 23, protecting the surveillance device from contamination and/or damage by moisture, dust, and the like. In some embodiments, a rear seal washer (or referred to as a second seal washer) may be provided between the seal diaphragm 23 and the rear mounting housing 22. The seal diaphragm 23 may thus be provided with seal washers on both of its sides, which may improve the sealing property of the mounting housing 2 and further improve the waterproof performance of the mounting housing 2. Referring to FIG. 3, a cable tie 231 may be provided on one side of the seal diaphragm 23, facing the rear mounting housing 22. The cable tie 231 is accommodated in a cavity that is formed between the rear mounting housing 22 and the seal diaphragm 23.
In some embodiments, the cable tie 231 may be used to facilitate the connection of the cables of the surveillance device and to make the connection more reliable. The manner in which the connector 3 and the mounting housing 2 are connected is not limited. For example, the connector 3 and the mounting housing 2 may be connected through welding, riveting, or the like, or any combination thereof. Technicians may also design other connection manners according to specific application scenarios. In some embodiments, the connector 3 may be fixed to the mounting housing 2, and therefore, the mounting housing 2 may be integrally rotatable with the connector 3 with respect to the support bar 1. In some embodiments, the connector 3 and the mounting housing 2 may be connected through a screw. Referring to FIG. 6, the connector 3 includes a screw mounting hole 32 into which a screw may be inserted. The connector 3 and the mounting housing 2 may be fixed together by inserting a screw through the screw mounting hole 32 and a corresponding mounting hole of the mounting housing 2. Fixing the connector 3 and the mounting housing 2 through a screw may facilitate the disassembly and assembly thereof, and the connection stability therebetween may be relatively good. Referring to FIG. 1, the mounting housing 2 may also include an indicator 7. The indicator 7 may be configured to indicate the occupation state of the parking space. In some embodiments, the indicator 7 may include two or three indicators. For example, the indicator 7 may include two indicators, a red indicator and a green indicator. When there is no car parked in the parking space, the green indicator may be powered on. When there is a car parked in the parking space, the red indicator may be powered on. As a result, drivers may identify an unoccupied parking space from a remote location based on the indicator (e.g., the color of the indicator).
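The two-indicator scheme above reduces to a one-line mapping from the occupation state to the indicator color. A hypothetical helper, for illustration only:

```python
def indicator_color(occupied):
    """Green indicator when the parking space is free, red when a car is
    parked, mirroring the two-indicator example described above."""
    return "red" if occupied else "green"
```

A three-indicator variant could add a third color (for example, for a reserved space), which is why the text leaves the number of indicators open.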
Referring to FIG. 4, an end of the support bar 1 facing away from the mounting housing 2 may include a mount 8. The mount 8 may include a plurality of through-holes 81. When installing a surveillance device, it may be necessary to establish a mounting base and then install the surveillance device on the mounting base. In some embodiments of the present disclosure, the mount 8 may be mounted on the mounting base so that the support bar 1 may be firmly mounted on the ground. In some embodiments, the mount 8 may be fixed by inserting a rivet or a bolt through the through-holes. Apart from the surveillance device as illustrated in FIG. 1 to FIG. 6, the present disclosure also provides a surveillance device having a structure as illustrated in FIG. 8 to FIG. 17. Compared to the existing surveillance device whose structure is illustrated in FIG. 7, the surveillance device provided in the present disclosure may have a camera with an opening formed in its housing. The facing angle of the camera lens located inside the housing may be adjusted in a relatively convenient manner through the opening. In some embodiments, the surveillance device may be used to monitor a parking space. Thus, the surveillance device may be referred to as a parking space detector (e.g., a camera). FIG. 7 is a structural diagram illustrating a camera in the prior art. The parking space detector is a security monitoring device that is applied in an intelligent parking lot management system to realize the parking space guiding and car finding functions of the intelligent parking lot. When monitoring a single-sided parking space, a monocular camera similar to that shown in FIG. 1 is often employed. As shown in FIG. 7, the monocular camera is generally provided with, from top to bottom, a camera cover 10, a transparent ring 20, a transparent housing fixing base 30, a camera lens assembly 40, and a transparent lens housing 50.
The transparent ring 20 may be made of a translucent acrylic light-homogenizing material, and an indicator board may be located inside the transparent ring 20. When a lamp bead on the indicator board changes its illumination color, the change may be presented through the transparent ring 20. The transparent housing fixing base 30 may be used for fixedly connecting the transparent lens housing 50. The camera lens assembly 40 is located directly below the transparent ring 20 and is enveloped by the transparent lens housing 50. As shown in FIG. 7, in the above-mentioned monocular camera, the lens angle of the camera lens assembly 40 is usually a preset angle determined at the time of shipment, and generally needs to be adjusted according to actual monitoring requirements. To adjust the lens angle, it is necessary to disassemble the transparent lens housing 50. After adjusting the lens angle, it is necessary to reassemble the transparent lens housing 50, which is inconvenient. Therefore, the assembly process of the existing parking space detector is often troublesome. The present disclosure provides a camera as illustrated in FIG. 8 to FIG. 17. As shown in FIG. 8 to FIG. 10, FIG. 14, FIG. 16 and FIG. 17, a camera includes a cylindrical camera housing 120 having an opening on the wall of the camera housing 120, and a camera lens assembly 220 installed in the camera housing 120 and disposed facing the opening of the camera housing 120. The camera housing 120 may protect the camera lens assembly 220. Moreover, an opening is provided on the wall of the camera housing, and the camera lens assembly faces this opening, which serves as a window. The lens component of the camera lens assembly 220 may film and monitor the outside scene through the window. In some embodiments, the lens angle of the camera lens assembly 220 may be adjusted by changing the position of the window. Therefore, the process for adjusting the lens angle of the camera lens assembly 220, as well as the assembly process of the camera, is relatively convenient.
As shown in FIG. 8 to FIG. 12, the camera may further include an indicator board that is arranged on one side of the bottom wall 1120 of the camera. An indicator is mounted on the side of the indicator board facing away from the camera housing 120 of the camera. The camera may further include an indicator cover that is disposed on the side of the indicator board facing away from the camera housing 120. In some embodiments, more than one indicator is collectively arranged on the side of the bottom wall 1120 facing away from the camera housing 120. Therefore, the illumination of the indicators is more concentrated, the indication effect is more visible, and fewer lamp beads are required in one indicator, which effectively reduces the power consumption and heat dissipation requirements of the whole camera. As shown in FIG. 8 to FIG. 10, FIG. 14 and FIG. 16 to FIG. 17, a lens window 520 may be installed over the opening formed on the wall of the camera housing 120 (or referred to as the side wall of the camera housing 120). In some embodiments, the lens window 520 is made of a transparent material. The lens window 520 may be closed when no lens adjustment is required, so as to protect the camera lens assembly 220. Since the lens window 520 is transparent, it may allow light to pass without affecting the working of the camera lens assembly 220. In addition, if lens adjustment is required, the lens window 520 may be opened, and then an angle adjustment operation is performed through the lens window 520. The angle adjustment process of the camera is more convenient than that of the conventional camera, which requires the lens housing to be unscrewed for angle adjustment. As shown in FIG. 8 to FIG. 10, FIG. 14, FIG. 16 and FIG. 17, the lens window 520 is slidable along the wall of the camera housing 120 to change between an open status and a closed status. The operation of opening and closing the lens window 520 is very convenient.
Therefore, the entire process of lens adjustment may be significantly facilitated, so that the lens angle of the camera may be adjusted at any time if needed. As shown in FIG. 8, FIG. 14 to FIG. 17, the camera housing 120 is provided with a sliding rail 620 extending in the circumferential direction of the camera housing 120. The lens window 520 is slidable along the sliding rail 620. Specifically, when the lens window 520 slides toward the closed status, the opening formed on the wall of the camera housing 120 may be closed, and when the lens window 520 slides toward the open status, the opening formed on the wall of the camera housing 120 may be opened. The open status and closed status may be referred to as a first state and a second state, respectively. As shown in FIG. 14, when the lens window 520 slides to the first end 61 of the sliding rail 620, the lens window 520 may be in the closed status. Further, when the lens window 520 slides to the second end 62 of the sliding rail 620, the lens window 520 may be in the open status. In some embodiments, as shown in FIG. 8, FIG. 14 to FIG. 17, a buckle structure may also be provided between the lens window 520 and the sliding rail 620 for ensuring the stability of the lens window 520 and the sliding rail 620 in the second state. In some embodiments, the buckle structure may be implemented in the following manner. As shown in FIG. 15 to FIG. 17, the sliding rail 620 is provided with a hook 51. The lens window 520 is provided with a grooved portion 63 to be engaged with the hook 51. When the lens window 520 slides to the first end 61 of the sliding rail 620, the hook 51 engages with the grooved portion 63 to lock the lens window 520 in the second state. The lens window 520 may further be provided with a leading plane 52 for guiding the hook 51 into the grooved portion 63. The leading plane 52 is located on the side of the grooved portion 63 facing the first end 61 of the sliding rail 620.
That is, when the lens window 520 slides toward the first end 61 of the sliding rail 620, the leading plane 52 may first come into contact with the hook 51, and may gradually guide the hook 51 to the grooved portion 63 until the engagement of the hook 51 with the grooved portion 63 is achieved. As shown in FIG. 15 to FIG. 17, an elastic arm 64 may be formed on the inner side of the sliding rail 620, and the elastic arm 64 is disposed along the extension direction of the sliding rail 620. The hook 51 is formed at the end of the elastic arm 64 toward the first end 61 of the sliding rail 620. After the leading plane 52 on the lens window 520 contacts the hook 51, the elastic arm 64 may gradually deform as the hook 51 slides along the leading plane 52 until the hook 51 is engaged with the grooved portion 63. When the grooved portion 63 reaches this position, the elastic arm 64 no longer needs to remain deformed; at the same time, the hook 51 at its end may rebound into the grooved portion 63, thereby locking with the grooved portion 63. As shown in FIG. 14 to FIG. 17, the hook 51 may be disposed at the first end 61 of the sliding rail 620. Correspondingly, the grooved portion 63 is disposed on the side of the lens window 520 facing the first end 61 of the sliding rail 620. As shown in FIG. 8, FIG. 11 and FIG. 13, the camera may further include a lens assembly holder 720. The lens assembly holder 720 is fixedly mounted to the bottom wall 1120 of the camera housing 120. Preferably, the lens assembly holder 720 may be fixedly mounted to the bottom wall 1120 of the camera housing 120 by a bolt 8120. Further, the camera lens assembly 220 is mounted on the lens assembly holder 720. As shown in FIG. 8, FIG. 11 to FIG. 13, the camera lens assembly 220 is spherical. Further, as shown in FIG. 13, the lens assembly holder 720 is provided with a spherical cavity 71 matched with the camera lens assembly 220. The camera lens assembly 220 is mounted in the spherical cavity 71 and is rotatable with respect to the spherical cavity 71.
As shown in FIG. 8, FIG. 11 to FIG. 13, a damping structure may further be provided on the inner side of the spherical cavity 71. For example, damping foam may be attached to increase the friction between the spherical cavity 71 and the camera lens assembly 220, to prevent the camera lens assembly 220 from rotating within the spherical cavity 71 without human intervention. For example, it is possible to prevent the spherical lens unit 220 from rotating when the camera is subjected to an accidental vibration that would otherwise cause an angle change of the camera lens assembly 220. As shown in FIG. 8 to FIG. 10, the indicator cover may include an indicator board frame 320 mounted on the bottom wall 1120 of the camera housing 120 and a transparent cover 420 mounted on the indicator board frame 320. The transparent cover 420 may be made of a hemispherical, light-homogenizing material, which may improve the uniformity of the indicating light and improve the indication effect. As shown in FIG. 8 to FIG. 10, the transparent cover 420 and the indicator board frame 320 may be connected by a buckle. Further, the indicator board frame 320 and the bottom wall 1120 of the camera housing 120 may also be connected by a buckle. As shown in FIG. 8 to FIG. 10, the camera may further include a camera cover 10 mounted on the top of the camera housing 120. In some embodiments, the camera cover 10 and the camera housing 120 may be fixedly connected by one or more bolts 82. In some embodiments, the top of the camera cover 10 has a cylindrical fixing portion 91. The fixing portion 91 is provided with an external thread, which may be used for mounting the camera to an external device. In the existing parking position detector products, when it is required to meet the scene requirements of monitoring two-sided symmetric parking spaces or multiple parking spaces, a form similar to the binocular camera shown in FIG. 18 is often used.
As shown in FIG. 18, an existing binocular camera includes, from top to bottom, a camera cover 1030, a transparent indicator ring 2030, a transparent housing fixing base 3030, two camera lens assemblies 4030, and two transparent lens housings 5030. Since both lens assemblies 4030 are arranged at the bottom of the transparent indicator ring 2030, and the outside of the two lens assemblies 4030 is covered with the transparent housings 5030, the adjustable range of the facing angles of the two lens assemblies 4030 is small, and the adjustment process also requires disassembling the two transparent housings 5030, which is very inconvenient. Therefore, it is desirable to provide a binocular parking space detector that meets the needs of monitoring different parking space scenes in a parking lot. As used herein, the angle between two lens assemblies may refer to the angle between the facing directions of the two lens assemblies. The present disclosure discloses a multi-directional camera for solving the problems that the adjustment range of the angle between the facing directions of the two cameras is small and that the angle adjustment process is inconvenient. As shown in FIG. 19, FIG. 20, and FIG. 25 to FIG. 28, a multi-directional camera according to some embodiments of the present disclosure includes at least two camera modules 130. Each of the at least two camera modules 130 includes a housing 1130 and a camera lens assembly 12 mounted in the housing 1130. The housing 1130 of each camera module 130 has the shape of a cylinder. An opening is provided on the wall of the housing 1130. The housings 1130 of the at least two camera modules 130 share the same shaft axis o. The at least two camera modules 130 are sequentially arranged along the shaft axis o, and any two adjacent camera modules 130 are rotatable about the shaft axis o relative to each other.
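The notion of the angle between the facing directions of two lens assemblies can be made concrete with a short sketch. This is illustrative only; the function name and the vector representation of a facing direction are assumptions of this sketch, not part of the disclosure:

```python
import math

def facing_angle_deg(d1, d2):
    """Angle, in degrees, between two lens facing directions.

    Each direction is a 3-component vector (x, y, z); the angle
    is obtained from the normalized dot product of the vectors.
    """
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    # Clamp to [-1, 1] to guard against floating-point rounding.
    c = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(c))

# Two lens assemblies facing opposite sides of the shaft axis:
print(facing_angle_deg((1, 0, 0), (-1, 0, 0)))  # 180.0
```

Under this convention, rotating one camera module about the shaft axis o rotates its direction vector in the horizontal plane, which changes the computed angle accordingly.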
In some embodiments, by driving a relative rotation between the camera modules, the relative position between the camera lens assemblies of the camera modules in the horizontal direction (i.e., the direction perpendicular to the shaft axis o) can be adjusted. The housing 1130 of each camera module 130 has an opening provided on the cylinder wall. The camera lens assembly 12 of each camera module 130 may capture and monitor the outside scene through the opening. Because the relative position between the camera lens assemblies of the camera modules may be adjusted, the facing angle of each camera lens assembly and the position of each opening can be further adjusted. In the multi-directional camera provided according to some embodiments of the present disclosure, each camera module 130 can be rotated in the horizontal direction, which adjusts the angle of each camera lens assembly 12 in the horizontal direction. On the other hand, the camera lens assembly 12 can be directly adjusted in the horizontal and/or the vertical direction (i.e., the direction parallel to the shaft axis o). Therefore, the lens angle adjustment range of the multi-directional camera is broad, and the adjustment process is very convenient. Furthermore, the multi-directional camera can be used for parking space detection, and can well meet the monitoring requirements of different parking space scenes. As shown in FIG. 20 to FIG. 26, in the multi-directional camera provided according to some embodiments of the present disclosure, every two adjacent camera modules 130 may be connected by a switching module. Specifically, as shown in FIG. 21 to FIG. 24, the switching module may include a two-part structure of an adapter plate 230 and a switching shaft 330. The adapter plate 230 is provided with an opening 2130. The switching shaft 330 includes a shaft component 3130 and a disc component 3230 connected to the first end of the shaft component 3130.
The shaft component 3130 is arranged through the opening 2130 of the adapter plate 230, and the diameter of the disc component 3230 is larger than the diameter of the opening 2130 of the adapter plate 230, so that the adapter plate 230 can be retained below the disc component 3230 of the switching shaft 330. Further, the switching shaft 330 and the adapter plate 230 are relatively rotatable about the shaft axis o1 of the shaft component 3130. Specifically, the shaft axis o1 of the shaft component 3130 coincides with the shaft axis o of the housing 1130 of the camera module 130. Further, by connecting the two adjacent camera modules 130 to the adapter plate 230 and the switching shaft 330, respectively, the rotational connection between the two adjacent camera modules 130 can be achieved. As shown in FIG. 21 to FIG. 24, in order to prevent the adapter plate 230 and the switching shaft 330 from being separated from each other during assembly of the camera module 130, a buckle structure may further be arranged between the adapter plate 230 and the switching shaft 330 so as to fix the adapter plate 230 and the switching shaft 330 relative to each other. In some embodiments, the adapter plate 230 is provided with a hook 2230 on the side facing the disc component 3230. Correspondingly, the side of the disc component 3230 facing away from the adapter plate 230 is provided with a grooved portion to be engaged with the hook 2230. As shown in FIG. 21 to FIG. 24, the grooved portion provided on the disc component 3230 may be a ring groove 321 arranged along the periphery of the disc component 3230. Further, the adapter plate 230 is provided with a plurality of hooks 2230. In some embodiments, the plurality of hooks 2230 are elastic hooks. The plurality of hooks 2230 may be evenly or unevenly distributed along the periphery of the disc component 3230, and each of the plurality of hooks 2230 may buckle into the ring groove 321 of the disc component 3230.
The buckle connection between the hooks 2230 and the ring groove 321 does not affect the relative rotational movement between the adapter plate 230 and the switching shaft 330. As shown in FIG. 4, in addition to the above embodiments, in some embodiments the disc component 3230 and the adapter plate 230 may also be provided with a damping strip 430. In some embodiments, the damping strip 430 is a silicone damping strip. The damping strip 430 can increase the sliding friction between the disc component 3230 and the adapter plate 230 so that relative rotation of the two adjacent camera modules in the absence of human intervention can be avoided. As shown in FIG. 21 and FIG. 24, the side of the disc component 3230 facing the adapter plate 230 is provided with a plurality of strip-shaped mounting grooves 322 surrounding the shaft component 3130, and the damping strip 430 is mounted inside one of the mounting grooves 322. In some embodiments, of any two adjacent camera modules, the camera module nearer the top of the at least two camera modules (e.g., the top of the first camera module) is a first camera module, and the camera module nearer the bottom of the at least two camera modules (i.e., the bottom of the last camera module) is a second camera module. The housing of the first camera module is fixedly connected to the shaft component of the switching shaft. The housing of the second camera module is fixedly connected to the adapter plate. Taking a multi-directional camera provided according to some embodiments of the present disclosure that includes two camera modules as an example, as shown in FIG. 19 to FIG. 20 and FIG. 25 to FIG. 26, the first camera module 101 is located at the upper portion, and its housing 1130 is connected to the shaft component 3130 of the switching shaft 330. Correspondingly, the second camera module 102 is located at the lower portion, and its housing 1130 is fixedly connected to the adapter plate 230.
Furthermore, when the multi-directional camera provided according to some embodiments of the present disclosure includes a plurality of camera modules, each camera module functions as a first camera module with respect to the camera module adjacent to its bottom, and functions as a second camera module with respect to the camera module adjacent to its top. Therefore, the top and bottom structures of all camera modules need to be consistent, and preferably the housing structure of each camera module is identical, i.e., the camera module can be designed as a universal module. In some embodiments, as shown in FIG. 27 and FIG. 28, the housing 1130 of the camera module 130 may include only a cylinder wall and a bottom wall 111, and the camera lens assembly 12 is fixed on the bottom wall 111 of the housing 1130. Further, as shown in FIG. 19 and FIG. 20, a camera cover 530 is provided with a fixing portion 5130 for fixing the entire multi-directional camera to other structures, such as a mounting plate 630. Specifically, the fixing portion 5130 is provided with an external thread, and the fixing portion 5130 passes through the mounting hole of the mounting plate 630 when installed. After the multi-directional camera is adjusted to an appropriate angle, the fixing portion 5130 and the mounting plate 630 are locked by a fixing nut 6130. As shown in FIG. 21 and FIG. 24 to FIG. 26, a mounting hole 311 may be formed in the second end of the shaft component 3130 of the switching shaft 330 in the switching module. Therefore, the housing 1130 of the first camera module 101 and the shaft component 3130 of the switching shaft 330 can be fixedly connected by an engaging mechanism inserted into the mounting hole 311. Specifically, the engaging mechanism may be a bolt. As shown in FIG. 22 to FIG. 23, the adapter plate 230 may be provided with a through hole 2330.
The housing 1130 of the second camera module 102 and the adapter plate 230 can be fixedly connected by an engaging mechanism inserted into the through hole 2330. Specifically, the engaging mechanism may be a bolt. As shown in FIG. 21, FIG. 22, FIG. 25 and FIG. 27, a limiting structure 2430 is arranged on the side of the adapter plate 230 facing the first camera module 101. The side of the housing 1130 of the first camera module 101 facing the adapter plate 230 (i.e., the outer side of the bottom wall 111 of the camera module 130) is provided with a stopper 112 corresponding to the limiting structure 2430. When the first camera module 101 and the adapter plate 230 are rotated to a particular position, the limiting structure 2430 and the stopper 112 lock each other against further movement when they come into contact. That is, when the first camera module 101 and the second camera module 102 are rotated to a particular position, the limiting structure 2430 and the stopper 112 may contact each other. Thereby, the first camera module 101 and the second camera module 102 are stopped from rotating with respect to each other. Further, it is possible to prevent the relative rotation angle of the two adjacent camera modules 130 from becoming excessively large, so that the internal cables connected to the two camera modules 130 can be prevented from being entangled. As shown in FIG. 19 and FIG. 20, the multi-directional camera may further include an indicator board 730 mounted on a bottom wall of the at least two camera modules 130. The indicator board 730 is provided with an indicator. An indicator board supporting structure, such as an indicator board frame 830, may be arranged on the side of the indicator board 730 away from the at least two camera modules 130, and a transparent indicator cover 930 may be mounted on the indicator board frame 830. Thus, the multi-directional camera may also have an indicating function and can realize a parking space guiding function when used for parking space detection.
As shown in FIG. 19 and FIG. 20, the indicator board frame 830 is mounted on the housing 1130 of the bottommost camera module 130, and the indicator board frame 830 is connected to the housing 1130 of the bottommost camera module 130 through a buckle. Further, the transparent indicator cover 930 and the indicator board frame 830 may also be connected by a buckle. As shown in FIG. 19, FIG. 20 and FIG. 27, each camera module 130 is provided with a lens window 13 that can open and close the opening in the wall of the housing 1130. The lens window 13 is made of a transparent material. The angle of the camera lens assembly 12 located in the housing 1130 can be adjusted through the opening in the housing 1130. If the angle of the camera lens assembly 12 does not need adjustment, the opening can be closed by the lens window 13 to protect the camera lens assembly 12. Since the lens window 13 is transparent, it allows light to pass without affecting the working of the camera lens assembly 12. As shown in FIG. 19, FIG. 20 and FIG. 27, the lens window 13 can slide along the wall of the housing 1130 to open or close the opening. The cylinder wall of the housing is provided with a sliding rail extending along a circumferential direction thereof. The lens window 13 is slidably mounted on the housing 1130 along the sliding rail. Further, when the lens window 13 is slid over the opening, the opening is closed, and when the lens window 13 is slid away from the opening, the opening is opened. As shown in FIG. 19, FIG. 20 and FIG. 27, a buckle structure may also be provided between the lens window 13 and the sliding rail for allowing the lens window 13 to remain stable in the closed state. As shown in FIG. 28, the camera module 130 may further include a fixing frame 14 for the camera lens assembly 12. The fixing frame 14 is fixedly mounted on the bottom wall 111 of the housing 1130 of the camera module 130, and the camera lens assembly 12 is mounted on the fixing frame 14.
As shown in FIG. 28, the camera lens assembly 12 is spherical. The fixing frame 14 of the camera lens assembly 12 is provided with a spherical cavity 141 matched with the camera lens assembly 12. The camera lens assembly 12 is mounted within the spherical cavity 141 and is rotatable with respect to the spherical cavity 141. Further, a damping structure may be provided on the inner side of the spherical cavity 141. For example, damping foam may be attached to the inner side of the spherical cavity 141 to increase the friction between the spherical cavity 141 and the camera lens assembly 12, preventing the camera lens assembly 12 from rotating in the absence of human intervention. For example, this prevents the camera lens assembly 12 from rotating, and thus changing its angle, when the parking space detector is subjected to an accidental vibration. It should be noted that the multi-directional camera provided according to some embodiments of the present disclosure may include two camera modules, and may also include more camera modules. Moreover, the multi-directional camera may adjust the monitoring direction of each lens according to specific scenarios. The adjustment range is wide, and the adjustment can be performed without any tools. The multi-directional camera has a wide range of uses, and is particularly suitable as a parking space detector for monitoring a scene with parking spaces on both sides or a plurality of parking spaces. It should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim.
Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment. In some embodiments, the numbers expressing quantities or properties used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term "about," "approximate," or "substantially." For example, "about," "approximate," or "substantially" may indicate a ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. Each of the patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein is hereby incorporated herein by this reference in its entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting effect as to the broadest scope of the claims now or later associated with the present document.
By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail. In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that may be employed may be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described.
11860517 DETAILED DESCRIPTION OF THE INVENTION Embodiments of the present invention will now be described with reference to the accompanying drawings. Note that these embodiments are presented by way of example only and are not intended to limit the scope of the invention. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the invention. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the invention. Further, the structural components recited in the claims may be represented in divisions and/or combinations of any form, and they are still encompassed within the scope of the invention. Furthermore, two or more embodiments may also be combined, and such combinations are also within the scope of the invention. The drawings may be schematic with respect to the width, thickness, shape, etc., of each part compared to the actual state in order to make the explanation clearer. Further, the names and terms used here are not limiting, and even if other expressions are used, they are included in the present invention as long as they are of substantially the same contents and purports. The embodiments of the present invention provide a versatile camera device to be mounted on a pole that has a simple configuration, serves multiple uses, and can be implemented at low cost.
According to one embodiment, the basic configuration comprises, for example, a cylindrical portion, a leg portion mounted to the cylindrical portion so as to extend from one end portion of the cylindrical portion, a light-transmitting portion provided in an axial middle portion of the cylindrical portion toward the other end and forming a part of the cylindrical portion, a reflector which receives light entering from the light-transmitting portion and guides the light to one axial side thereof, and an image pickup camera disposed inside the cylindrical portion and receiving the light reflected from the reflector at an image pickup unit. The basic configuration further comprises a control substrate disposed inside the cylindrical portion and comprising a transmitter, which generates radio waves including the image pickup signal from the image pickup camera, and a controller; a battery disposed inside the cylindrical portion and driving the transmitter, the controller and the image pickup camera; and a lid which seals the other end of the cylindrical portion and encloses the reflector, the image pickup camera, the control substrate and the battery. The first embodiment will now be described. This embodiment is extremely advantageous when used in the following situations. (1) There is a demand for viewing, for example, the ball on the green heading for the cup from the cup side so as to enjoy golf more. There is also a demand for being able to observe how the golf ball changes its course near the cup and rolls in a direction away from the cup. There is also a demand for viewing how the ball flies around the pole. An example of such a scene is a situation in which, when a player hits a ball from a bunker aiming at the flagpole, the ball may hit the flagpole directly, or fly to the right or left of the flagpole.
To meet such demands, according to an embodiment, there is provided a versatile camera device mountable to a pole, which can shoot at least a scene of a golf ball rolling on the green toward the cup, as viewed from the cup side. More specifically, according to this embodiment, there is provided a versatile camera device mountable to a pole, which is placed in the axial middle of a pole comprising leg portions that stand on the bottom of the cup. (2) There is also a demand for a camera device that is effective and easy to use when observing living things in rivers, forests, woods, fields, grounds, gardens, etc., or when observing wind and rain, their influence on the ground, and changes in the ground surface. (2-1) Under these circumstances, a versatile camera device to be mounted on a pole is provided that can easily be switched among multiple camera installation modes (usage types or installation states). Thus, the leg portion includes a locking section, to which an adapter can be mounted to extend the distal end portion of the leg portion or change the direction of the end portion thereof. (2-2) Further, a versatile camera device to be mounted to a pole is provided which, when the adapter is formed into a hook shape, can be used by hanging the cylindrical portion containing a built-in image pickup camera from a branch or the like. With this device, it is easy to observe insects living on tree branches, small birds gathering on tree branches, bird nests on tree branches, etc. (2-3) Furthermore, a versatile camera device to be mounted to a pole is provided that, when the adapter is flange-shaped, can be used with the leg portion standing up, for example, on the bottom of a river. With this device, it is possible to observe the ecology of various living organisms in rivers, lakes, and the sea.
(2-4) Furthermore, a versatile camera device to be mounted to a pole is provided that, when the adapter is arrowhead-shaped, can be used by easily sticking the leg portion into any position on the ground. With this device, it is easy to observe wildlife (insects, reptiles, wild animals, or livestock) in the wilderness. According to the above-described versatile camera devices to be mounted to a pole, it is possible to easily observe and monitor insects, animals, plants, agricultural products, rivers, road conditions, and areas around houses. (2-5) Furthermore, the above-described versatile camera devices can also be used as sensors and/or guides of a guidance system when multiple devices are installed. For example, multiple devices are placed along a mountain trail or along a predetermined pathway. In this manner, the guidance system can monitor passersby on the trail or along the predetermined pathway, and can also alert, by voice or a wireless system, passersby who go off the track. In this case, the multiple devices are connected to a server via relay means/network, and the server is connected to a control center. The embodiment in FIG. 1 provides a versatile camera device that is placed in an axial middle portion of a pole (in this case, the pole may be referred to as a flagpole) and is used with its leg portion standing on the bottom of a cup. The embodiment comprises a cylindrical portion 112, a leg portion 111 attached so as to extend from one end of the cylindrical portion 112 and tapering off toward its distal end, and a light-transmitting portion 113 provided in the middle portion along the axial direction toward the other end of the cylindrical portion 112, which also forms a part of the cylindrical portion 112.
The device also comprises a reflector 303 that receives light incident from the light-transmitting portion 113 and guides it to one side in the axial direction, and an image pickup camera 310 disposed inside the cylindrical portion 112 that receives the light reflected from the reflector 303 at an image pickup portion. Moreover, the device comprises a control substrate 320 disposed inside the cylindrical portion 112 and equipped with a transmitter and a controller that generate radio waves including an image pickup signal from the image pickup camera 310, and a battery 1000 disposed inside the cylindrical portion 112, which drives the transmitter and the controller mounted on the control substrate 320 and the image pickup camera 310. Further, a lid 450 is provided to seal the other end side of the cylindrical portion 112 and confine the reflector 303, the image pickup camera 310, the control substrate 320 and the battery 1000 inside the cylindrical portion 112. The above-described embodiment is effective when applied to a flagpole. FIG. 1 shows a hole 11 in a green, a cup 12 disposed in the hole 11, and a pole 100 set to stand in the center of the cup 12. A distal end (one end) of a leg portion 111, which is one end of the pole 100, is inserted into a pole insertion hole formed in the bottom of the cup 12. With this structure, the pole 100 stands upright vertically. A proximal end (the other end) of the leg portion 111 is joined coaxially to the cylindrical portion 112 of the pole 100. Since the cylindrical portion 112 and the leg portion 111 are different from each other in diameter, the light-transmitting portion 113 is used as a coupling means. The light-transmitting portion 113 forms a part of the cylindrical portion 112 and may be referred to as a coupling or joining tool. One end of the light-transmitting portion 113 forms a bottom portion, and the bottom portion is joined to the other end of the leg portion 111, for example by welding.
The other end portion of the light-transmitting portion 113 is open and is integrated with and joined to the opening of the cylindrical portion 112. Thus, in this case, the light-transmitting portion 113 comprises an inclined side wall. Window portions 201 to 204 are formed in multiple locations on the side wall. The window portions 201 to 204 are arranged, for example, circumferentially around the side wall. The window portions 201 to 204 may be of a flat type, made in practice of transparent synthetic resin or glass, or a wide-angle lens or a fisheye lens may as well be disposed therein. The light-transmitting portion 113, which includes the window portions 201 to 204 described above, can be regarded as a part of the cylindrical portion 112. Note that the wall thickness of the light-transmitting portion 113 including the window portions 201 to 204 should preferably be greater than that of the cylindrical portion 112 for reinforcement. The pole 100 includes the cylindrical portion 112, the leg portion 111 and the light-transmitting portion 113, which is a part of the cylindrical portion 112. The leg portion 111 is located at the axial one end of the cylindrical portion 112 and is smaller than the cylindrical portion 112 in thickness. The light-transmitting portion 113 includes a large diameter portion continuously joined to one end of the cylindrical portion 112 and a small diameter portion continuously joined to the leg portion 111. Furthermore, the light-transmitting portion 113 comprises an inclined wall that is inclined with respect to the axis between its large and small diameter portions, and the multiple window portions 201 to 204 are made in the inclined wall. Further, to the leg portion 111, an adapter can be attached to extend its distal end portion or change the direction thereof. Therefore, the leg portion 111 is formed with a lock portion 461 which locks the adapter. In this example, the lock portion 461 has a screw structure.
When, for example, a hook-type adapter is mounted to the lock portion 461, the extended posture of the leg portion 111 is bent, for example, into a V- or U-shape. Then, with the adapter, the pole 100 can be hung on a tree branch, for example. Thus, the device can be used in a variety of ways and is easy to use; for example, it is easy to hang the camera device for storage or to install it in a high position. FIG. 2A is a cross-sectional view of the pole 100 cut along line A1-A2 in FIG. 1, viewing the light-transmitting portion 113 from below (the leg portion side) of the pole. FIG. 2B is a cross-sectional view of the pole 100 cut along line B1-B2 in FIG. 1, viewing the light-transmitting portion 113 from above (the cylindrical portion side). The window portions 201 to 204 may be left open, but usually a window material such as glass or plastic is fitted therein with waterproofing and dustproofing treatment. A reflector 303 is disposed in a deep section of each of the window portions 201 to 204. The reflector 303 is, for example, a mirror and comprises a mirror (a reflective component) corresponding to the respective one of the window portions 201 to 204. As shown in FIG. 2B, the reflective components 303a, 303b, 303c and 303d correspond respectively to the window portions 201 to 204. The reflective components 303a, 303b, 303c and 303d are combined in a pyramidal structure to ensure a view all around the axis (360 degrees). The reflective components 303a, 303b, 303c and 303d are mounted to a fixation base 304, for example, at a set reflection angle. Therefore, the surfaces of the fixation base 304 on which the reflective components 303a, 303b, 303c and 303d are respectively placed are precisely manufactured so as to introduce the reflected light accurately to the lens of the microelectronic camera 310. Note that the reflector 303 has a square-prism shape in appearance, but it may as well be a triangular prism; the window portion may then be constituted by three windows to correspond to the triangular-prism-shaped reflector.
Furthermore, the reflector 303 may as well be a conical cylindrical mirror. The image processing unit of the camera, which will be described later, may be equipped with a distortion adjustment function to adjust the ratio between the vertical and lateral directions (aspect ratio) of the captured images. Thus, it is possible to adjust the distortion of an image captured with a lens such as a wide-angle lens or a fisheye lens, for example, and the distortion of images can be corrected. With the above-described configuration, the reflector 303 reflects the light entering from outside through the window portion (which may also be referred to as a window) and directs it toward the other end (the upper portion) of the cylindrical portion 112. The light reflected by the reflector 303 then enters the image pickup section of the microelectronic camera 310. The microelectronic camera 310 is disposed and fixed inside the cylindrical portion 112 via a camera holder 311. Therefore, the microelectronic camera 310 can shoot a subject (for example, a golf ball) on the green through the reflector 303 and the window portion 201. For example, in order to shoot a golf ball 10 rolling on the green and approaching the cup 12, when viewing the green from the lens of the camera through the reflector 303, it is preferable that the installation angle of each reflective component be adjusted to capture the front and obliquely downward side of the ball. It is then important that the camera device 300 catch the golf ball 10 located at the edge of the hole 11 or the cup in its field of view through the reflector 303. In other words, it is important that the area including a part of the edge of the hole 11 or a part of the edge of the cup 12 be covered by the shooting area. Further, a substrate 320 is mounted to the camera holder 311, and on the substrate 320, the control unit that controls the microelectronic camera 310, the image processing unit and a communication unit (transmitter/receiver) are mounted.
The camera holder311holding the microelectronic camera310is held by stoppers121to124formed to protrude from the inner wall of the cavity of the cylindrical portion112. Note that, although not shown in the figure, a flag can be tied to an extension of the other end of the cylindrical portion112. In the cylindrical portion112, a battery holder330is further disposed above the substrate320. The battery holder330is cup-shaped with a bottom, and a thin battery1000can be placed therein. The power from the battery1000is supplied to the various circuits of the above-described substrate320and the microelectronic camera310. As in the case of the above-described camera holder311, the battery holder330is also held by stoppers131to134formed to protrude from the inner wall of the cavity of the cylindrical portion112. Note that the battery holder330may be configured to be integrated with the camera holder311as one body. With this configuration, the camera310, the substrate320and the battery1000can be mounted to the camera holder311beforehand, and the camera holder311can be incorporated into the cylindrical portion112. FIG.3shows a part of the cylindrical portion112of the camera device300described above, to illustrate an example of how to hold the microelectronic camera310. The cylindrical portion112can be divided, for example, into two parts along the diameter direction, to prepare symmetrical semi-cylindrical portions112aand112b. On the semi-cylindrical portion112b, the stoppers121to124are formed to be integrated therewith in advance, to clip the camera holder311from above and below in the axial direction. The camera holder311is pushed into the groove of the semi-cylindrical portion112bbefore the semi-cylindrical portions112aand112bare combined together, held by the stoppers121to124as shown inFIG.1, and further fixed by adhesive. Although not shown, the battery holder330is also fixed to the semi-cylindrical portion112bin a similar manner. 
After that, the semi-cylindrical portions112aand112bare assembled together to be integrated as one body. The method of integrating the semi-cylindrical portions112aand112bis not limited to that discussed in this embodiment; various methods can be adopted, for example, the cap method used for the body and cap of a fountain pen, or a tightening method using a screw structure. FIG.4is an explanatory diagram showing the above-described microelectronic camera and the various circuit portions provided on the substrate320. An image signal from the microelectronic camera is input to the image processing unit331and subjected to processing including a compression process (encoding) and the like. The encoded image signal is converted into a transmission signal in the transmitter/receiver unit332, which includes a transmitter and a receiver, and is sent to the user's wireless receiver (not shown), for example, by a wireless signal such as Bluetooth (registered trademark) or Wi-Fi (registered trademark). The wireless receiver can, for example, relay the received signal and transmit it to a recording device. Alternatively, the wireless receiver can transmit the received signal to the monitor room of a broadcasting facility. The image processing unit331comprises a distortion adjustment function to adjust the ratio between the vertical and lateral directions (aspect ratio) of the captured images, or the aspect ratio may be adjusted by a signal processing unit in the monitor room. Further, the wireless receiver may be a smart phone. The transmitter/receiver unit332can also receive a control signal from a remote control device (remote controller) or a smart phone, and the received control signal is interpreted by the control unit333to control the microelectronic camera310. The contents of the control include, for example, focus, aperture, and the like. 
As described above, the light-transmitting portion113includes an inclined wall that is inclined with respect to the axis between the large and small diameter portions, and the window portions201to204provided in the inclined wall. The camera device300comprises the reflector303disposed inside the light-transmitting portion113so as to reflect light entering from outside through the window portions201to204and guide it towards the other end portion of the cylindrical portion112, and the camera310disposed inside the cylindrical portion112so as to capture optical images from the reflector303with its image pickup unit. The embodiment described above is of a fixed type in which the reflector303and the microelectronic camera310are fixed. However, the present invention is not limited to the embodiment described above. The basic idea remains the same as that of the embodiment shown inFIG.1, but in other embodiments, the microelectronic camera310can be rotated along with the reflector303by a motor, and its rotational position can be remotely controlled. FIG.5shows another embodiment, and the same parts as those of the embodiment shown inFIG.1are marked with the same referential signs as those used inFIG.1. The parts that differ from the structure shown inFIG.1will be explained below. According to this embodiment, a motor M is provided with respect to the battery holder330. The rotation shaft SH of the motor M penetrates the substrate320and the bottom of the camera holder311, and is coupled to the microelectronic camera310. With this structure, in this embodiment, the microelectronic camera310can be rotated by controlling the rotation of the motor M. In order for the microelectronic camera310to be able to rotate, the camera holder311is divided into a fixed side and a rotating side. The fixed side rotatably supports the rotating side via ball bearings b11, b12, b21and b22. 
The ball bearings b11, b12, b21and b22are also used, in the manner of slip ring terminals, as power supply components for the circuits of the microelectronic camera310and the substrate320. Although not shown in the figure, with a wiring system using slip ring terminals, control signals can likewise be given to the motor M from the control unit provided on the substrate320. By controlling the rotational position of the motor M, the rotational position of the microelectronic camera310is also controlled. Note that each slip ring terminal comprises one terminal provided on the fixed side and the other terminal provided on the rotating side, forming a mechanism that maintains the contact state at all times. To the rotating side of the camera holder311, the reflector303is mounted via an arm305. The reflector303, according to its rotational position, can reflect the light entering from the window portions201to204and guide it to the lens of the microelectronic camera310. FIG.6is an explanatory diagram illustrating the above-described microelectronic camera310and the various circuit parts provided on the substrate320. An image signal from the microelectronic camera310is input to the image processing unit331and subjected to processing including a compression process (encoding) and the like. The encoded image signal is converted into a transmission signal in the transmitter/receiver unit332and sent to the user's wireless receiver (not shown), for example, by a wireless signal such as Bluetooth (registered trademark) or Wi-Fi (registered trademark). The wireless receiver can, for example, relay the received signal and transmit it to a recording device. Alternatively, the wireless receiver can transmit the received signal to the monitoring room of a broadcasting facility or to a smart phone. 
The transmitter/receiver unit332can also receive a control signal from a remote control device (remote controller) or a smart phone, and the received control signal can be interpreted by the control unit333to control, for example, the microelectronic camera. The contents of the control include, for example, focus, aperture and the like. Further, the control unit333can control the rotational position (rotational angle) of the motor M according to an operation signal from outside. Thus, it is possible to change or adjust the shooting direction. It is preferable that the diameter of the above-described leg portion111be about 12.7 mm, the diameter of the cup be about 108 mm (±5 mm), and the distance from the inner wall of the cup to the outer circumferential surface of the leg portion be about 47.7 mm (±5 mm). Further, the height from the green surface to the bottom of the joined portion should desirably be 7.62 cm (±5 mm). The diameter of the cylindrical portion should desirably be about 25.5 mm (±5 mm), but it may be greater. When used as a golf-related device, the device should conform to the standards set by golf-related authorities. Moreover, the camera device used in the embodiment may be equipped with a function to display a level mark to check the levelness of the screen, and naturally may be provided with an anti-shake function as well. FIG.7is an explanatory diagram illustrating a configuration of still another embodiment. The same functional parts as those of the previous embodiment are marked with the same referential signs as those used in the embodiment. Note that in the embodiments shown inFIGS.1and5, the window portions201to204are formed in the inclined wall of the light-transmitting section113. However, the configuration of the light-transmitting section113is not limited to those of the above-provided embodiments. As shown inFIG.7, the pole100may include a cylindrical portion112, a leg portion111, and a cylindrical light-transmitting portion113A. 
Here, the cylindrical portion112of the pole is formed of carbon fiber or plastic, for example, and a reinforcement cylindrical member401made of iron or aluminum may be inserted in its inner circumferential portion so as to increase the strength of the cylindrical portion112. The above-described cylindrical light-transmitting section113A is made of transparent reinforced plastic, and can join the cylindrical portion112and the leg portion111coaxially with each other. Various methods are possible to join these together. In this embodiment, threaded grooves are formed in an outer circumference of the upper and lower end portions of the cylindrical light-transmitting portion113A, respectively. Threaded grooves are formed in the inner circumference of the lower end portion of the cylindrical portion112and also in the inner circumference of the ring-shaped head portion of the leg portion111. With this structure, the threaded grooves in the upper end portion of the cylindrical light-transmitting portion113A are screwed into the threaded grooves in the lower end portion of the cylindrical portion112, and also the threaded grooves in the lower end portion of the cylindrical light-transmitting portion113A are screwed into the threaded grooves in the head portion of the leg portion111. Thus, the cylindrical portion112and the leg portion111are continuously integrated together as one body through the cylindrical light-transmitting portion113A. The camera device300is mounted on its camera holder311. The camera holder311is a cylinder molded of synthetic resin, a lower end and an upper end of which are held by the stoppers121,122,133and134in the cylindrical portion112. The camera310is disposed in the hollow on the lower side of the camera holder311, and the camera310is coaxially mounted on the rotation shaft SH of the motor M. 
The positions where the motor M and the camera310are disposed are designed so that the rotation shaft SH coincides with the central axis of the cylindrical portion112. In the longitudinal middle of the hollow of the camera holder311, a bearing341of the motor M is provided to partition the hollow. Further, the substrate320, on which the control unit for controlling the camera310and the motor M is provided, is mounted to the bearing341. In an upper portion of the hollow of the camera holder311, a mounting portion342of the motor M is formed. Further, a battery1000is disposed on the head portion side of the motor M. The positive and negative electrodes of the battery1000are connected to a power supply terminal of the substrate320and a power supply terminal of the camera310, respectively, via a power line that runs through the wall of the camera holder311. The power supply terminal of the camera310and the power supply terminal of the battery1000are electrically connected to each other via a contact terminal using a slip ring. To the camera310, a control signal from the control unit provided on the substrate320is given via a control line C1. The control line C1and the camera310are also electrically connected to each other via a contact terminal using a slip ring. Further, the control unit of the substrate320can control the on/off operation of the motor M, the rotational position, the focus and the like. The control signal from the substrate320is also supplied to the motor M via a control line C2. With this structure, the rotation angle position of the motor M can be controlled. The above-described embodiment is configured to rotate the camera310by using the rotation motor M. However, a pyramid-shaped reflector or a cone-shaped reflector as shown inFIG.1may be used as the reflector. When such a fixed reflector is used, the motor M is not necessary. Note that since the captured image may be deformed, a correction circuit is required in the image processing unit to correct the deformed image. 
In the above-described embodiment as well, it is important that the camera device300catches the ball10located on the rim of the hole11in its field of view through the above-described reflector303. In other words, it is important that an area including a part of the rim of the hole11or a part of the rim of the cup12is covered in the shooting area. Therefore, the field of view of the camera device300should desirably include a part of the area 30 mm to 50 mm away from the center of the leg portion111. In the above-described embodiment, the camera holder311holds the motor M and the battery1000as well. Further, around the inner circumference of the cylindrical portion112, a reinforcement cylinder401made of steel or aluminum is provided to reinforce the strength of the pole100. With the camera310, the surroundings of the pole100can be shot through the reflector303and the cylindrical light-transmitting section113A. With this structure, the rotation of the motor M is controlled to change the rotational angular position of the camera310, and thus the shooting direction is changed. Regardless of the rotational angular position of the camera310, the field of view is not obstructed because the cylindrical light-transmitting portion113A is used. The above-described example is described in connection with a case where one camera device300is provided for one pole, but a plurality of camera devices300may be mounted to one pole. FIG.8is a diagram showing an example in which multiple camera devices300A,300B and300C are mounted to a single pole100. The basic structure is the same as that ofFIG.7. The angle of the reflector may be different from one camera to another. In this case, it is also important that the camera device300A, which is located lowest (closest to the green surface), catches the ball10located at the rim of the hole11in its field of view through the reflector303. 
In other words, it is important that an area including a part of the rim of the hole11or a part of the rim of the cup is covered in the shooting area. Therefore, the field of view of the camera device300should desirably cover a part of the area 30 mm to 50 mm away from the center of the leg portion111. On some greens, the surface is not necessarily flat and has large undulations. In such a case, depending on the putting position of the ball, the ball may not be in the field of view of the camera device300A. Here, by switching from the camera device300A to the camera device300B or the camera device300C for shooting, it is possible to capture the ball located at a position higher than that of the cup. Further, the pole100comprises a power generation panel601mounted to the outer circumference of the upper part of the cylindrical portion (a cylindrical portion112D). As shown in the cross section taken along line D1-D2, the power generation panel601is fixed to the cylindrical portion112D by, for example, adhesive604. Further, in the cylindrical portion112D, a power storage circuit602is provided. The electric current generated by the power generation panel601is stored in the power storage circuit602, and the stored power then charges a battery (secondary battery) that drives the camera devices. With this structure, it is possible to realize a regenerative energy device that utilizes sunlight, thus contributing to the conservation of the natural environment. Note that, in the figure, the outer circumferential surface of the power generation panel601protrudes outward from the outer circumferential surface of the pole100(a surface at a different position from the outer circumferential surface of the power generation panel601). However, when used as a golf tool, the outer circumferential surface of the power generation panel601and the outer circumferential surface of the pole100are actually designed to be flush with each other. 
FIG.9is a block diagram showing a shooting system using the camera device shown inFIG.1,5,7or8. Here, the camera device300A is shown as a typical example. The functions of the camera device300A are the same as those described with reference toFIG.6. The signal captured by the camera device300A is converted into a transmission signal by the transmitter/receiver unit332, and the transmission signal is received by a transmitter/receiver501of a monitor device500. The image processing unit331includes an encoder. There are various encoding modes for the encoder, which are not particularly limited. The monitor device500receives the transmission signal at the transmitter/receiver501and demodulates the image signal at a demodulator unit502. Note here that the demodulator unit502includes a decoder, which corresponds to the encoder on the transmission side of the image signal. The image signal decoded by the demodulator unit502is input to a locus processing unit503and a synthesizer unit504. The locus processing unit503arranges the captured image signal in the time-axis direction in units of frames, and processes the image signal into faint shadow images, which are then supplied to the synthesizer unit504. In particular, the image signal obtained at this time is a signal obtained by detecting the motion vector and extracting the image of a moving object. In this case, it is a video of a ball, and the locus processing unit503creates a time-lapse image of the ball (a locus image) and supplies it to the synthesizer unit504. The synthesizer unit504synthesizes the locus image and the real image from the demodulator unit502, and inputs the result to a display unit505. Thus, the user can see on the display unit505how the ball is rolling on the green. The synthesized image of about a few seconds may be stored in a memory not shown in the figure. The storage may be automatic or may be based on user operation. The memory may be a built-in memory inside the monitor or an IC memory installed from outside. 
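The locus processing described above can be sketched as follows. This is a minimal illustration, not the patent's actual circuit: simple frame differencing stands in for the motion-vector detection mentioned in the text, and the function names `build_locus_image` and `synthesize` are hypothetical.

```python
import numpy as np

def build_locus_image(frames, threshold=30):
    """Extract the moving object in each frame by differencing against the
    previous frame, and accumulate the moving pixels over time into a single
    locus (trajectory) image."""
    locus = np.zeros_like(frames[0], dtype=np.uint8)
    for prev, curr in zip(frames, frames[1:]):
        moving = np.abs(curr.astype(int) - prev.astype(int)) > threshold
        # Keep the brightest value seen at each moving pixel.
        locus[moving] = np.maximum(locus[moving], curr[moving])
    return locus

def synthesize(real_frame, locus, alpha=0.5):
    """Blend the faint locus image over the live frame for display."""
    return (real_frame * (1 - alpha) + locus * alpha).astype(np.uint8)

# A ball (bright 2x2 patch) moving left to right across three frames:
frames = [np.zeros((8, 16), np.uint8) for _ in range(3)]
for i, f in enumerate(frames):
    f[3:5, i * 4 : i * 4 + 2] = 255
locus = build_locus_image(frames)
display = synthesize(frames[-1], locus)
print(int((locus > 0).sum()))  # → 8
```

The accumulated `locus` holds the positions the ball has passed through, so blending it with the current frame yields the time-lapse trajectory view described for the display unit505.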
The monitor device500described above may be a smart phone, a monitor device of a TV station, or a repeater. According to the embodiments ofFIGS.7,8and9described above, there is provided a camera device to be mounted in an axial middle portion of a pole whose leg portion, when used, stands on the bottom of a cup. The pole includes the cylindrical portion and the leg portion located at one axial end of the cylindrical portion and having a thickness less than the thickness of the cylindrical portion. Further, in the middle of the cylindrical portion, the pole comprises a cylindrical transparent coupling section constituted by a transparent member disposed coaxially with the cylindrical portion. In the vicinity of an upper portion of the cylindrical transparent member, inside the coupling portion of the cylindrical portion, a downward-facing camera310is provided, and the camera310is coupled with the rotation shaft of the motor M mounted on the camera holder311so as to be rotated under the control of the motor M. On the lens side of the camera310, the reflector303is arranged, which rotates in unison with the camera. The reflector303reflects the image of the external scenery transmitted through the coupling portion of the cylindrical transparent member and directs it to the image pickup lens of the camera310. The pole includes the above-described camera as a first camera device and also a second camera device with the same configuration as the first camera device above the first camera device. Further, a power generation panel601is mounted to the outer circumference of the upper portion of the cylindrical portion (a cylindrical portion112D). The electric current generated by the power generation panel601is stored in the power storage circuit602, and the stored power then charges the battery (secondary battery) that drives the camera device. 
Here, the leg portion111further comprises a lock portion461which locks an adapter when the adapter is to be attached to extend its distal end portion or change the direction thereof. Various types of adapters can be mounted to the lock portion461. Thus, the camera device can be made versatile. The embodiment shown inFIG.10is an example in which the distal end of the leg portion111has a threaded structure (bolt), whereas the adapter700has a threaded hole (nut)701. The distal end of the adapter700forms a hook711that is bent into a V or U shape, for example. When the adapter700and the leg portion111are integrated as one body, the adapter700can be used to hang the pole100from a twig811of a tree810, for example. With such a structure, the camera device is effective for observing small birds flying to the twig811or insects gathering on the twig811. The preparatory operation for observation is simply hooking the adapter700onto the twig811, and thus it is extremely easy to use. To the end portion of the cylindrical portion112, a cap750is mounted for waterproofing. In other words, measures are taken to prevent water and unwanted objects from entering the cylindrical portion112. The embodiment ofFIG.11shows an example where one end portion of the adapter700, which includes the threaded hole portion701, comprises a flange721. With such an adapter700, for example, the camera device can be placed in the water of a stream, and a number of stones822are placed on the flange721to let the pole100stand. In this case, it is preferable that, for example, the camera device300A is placed within the water and, for example, the camera device300C is placed on the water surface. The camera device using such an adapter700is effective in observing and monitoring insects and fish in the water or on the surface of the water. 
The device is not only for use in a stream; it may be placed in a water tank used for aquaculture, or even in a harbor or the like with use of a larger camera device or devices. Naturally, the pole is waterproofed to prevent water from entering the inside. The embodiment inFIG.12is an example in which an adapter700comprises a threaded hole portion701and a sharp arrowhead portion731at one end thereof. The camera device employing this adapter700can be placed by easily piercing the end portion thereof into the ground of, for example, a hill, meadow, field, garden, forest, mountainous area, pasture, etc. By placing a great number of camera devices of this type dispersedly, it is possible to observe and monitor various subjects over a large area. It is naturally possible to use camera devices using the adapters shown inFIGS.10,11and12in any combination, according to the conditions of the monitoring/observation area. Further, in the above-described embodiments, any of the camera devices300A,300B and300C may be an infrared camera. With use of an infrared camera, the usage will be greatly expanded into a wide range of applications. For example, it can be used to monitor animals at night. Further, not only a camera, but also a microphone, a speaker, an ultrasonic generator and an ultrasonic receiver may be added to the pole in selective combinations. With the addition of a microphone, it is possible to collect the sounds emitted by various animals, birds, insects, etc., thereby making it possible to analyze the ecosystem. By using the microphone and speaker to output sounds generated by birds and animals, it is possible to study the interaction with birds and animals. Furthermore, with the use of an ultrasonic generator and an ultrasonic receiver, it is possible to study interaction with living creatures in the sea and underwater. Moreover, when multiple camera devices in the water are linked between their poles, ultrasonic waves can be utilized. 
Alternatively, the versatile camera devices can be used for a guidance system to guide a climber walking along a trail or a passerby passing through a predetermined passage. FIG.13shows an example of the format of data transmitted from the transmitter of the transmitter/receiver unit332that transmits image signals of the camera devices. The transmitted data contains headers900and data902repeatedly. Each header900contains a pole ID90, a first camera ID91, a second camera ID92and a third camera ID93. The data902contains packetized data911,912and913of the first camera, the second camera and the third camera, respectively. Audio data may be contained as well. Further, some other sensor data such as temperature data may be contained as well. The data902forms a packet stream, and each packet contains a packet ID (PID) and encoded data (Edata). In each of the embodiments described above, the field of view of the camera device300A closest to the leg portion should desirably include a part of the area 30 mm to 50 mm away from the center of the leg portion111. Depending on the object to be monitored, the field of view of the camera device300A may include a part of the area 10 mm or more from the center of the leg portion111. In some of the embodiments described above, the drawings may be schematically represented with respect to the width, thickness, shape, etc., of each part, as compared to the actual state, in order to make the description clearer. In addition, it is within the scope of the present invention if multiple embodiments are combined and implemented. Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents. 
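The transmission format ofFIG.13described above (a header carrying the pole ID and three camera IDs, followed by a packet stream in which each packet carries a PID and encoded data) can be sketched as follows. The field widths are assumptions for illustration only, since FIG.13does not fix them: here each ID is one byte and each payload is length-prefixed with a two-byte big-endian count.

```python
import struct

def pack_stream(pole_id, camera_ids, packets):
    """Serialize one header (pole ID + three camera IDs) followed by the
    packetized camera data. Each packet is (pid, encoded_bytes), emitted as
    a 1-byte PID, a 2-byte big-endian length, then the payload."""
    out = struct.pack("BBBB", pole_id, *camera_ids)   # header (900)
    for pid, edata in packets:                        # packet stream (902)
        out += struct.pack(">BH", pid, len(edata)) + edata
    return out

def unpack_stream(buf):
    """Parse the header and the packet stream back out of the byte string."""
    pole_id, *camera_ids = struct.unpack_from("BBBB", buf, 0)
    offset, packets = 4, []
    while offset < len(buf):
        pid, length = struct.unpack_from(">BH", buf, offset)
        offset += 3
        packets.append((pid, buf[offset:offset + length]))
        offset += length
    return pole_id, camera_ids, packets

stream = pack_stream(0x10, [1, 2, 3], [(1, b"cam1"), (2, b"cam2")])
print(unpack_stream(stream))  # → (16, [1, 2, 3], [(1, b'cam1'), (2, b'cam2')])
```

Audio or temperature-sensor packets, which the text notes may also be contained, would simply be further (pid, payload) entries in the same stream.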
| 41,825 |
11860518 DETAILED DESCRIPTION The devices, systems, and methods disclosed herein provide for a camera mounting device that is couplable to the frame of a bimini of a watercraft. The camera mounting device can be coupled to an aft portion of the frame of a bimini. By being coupled to the bimini of the watercraft, the camera has a high vantage point relative to the rest of the watercraft that has an unobstructed view of water sports behind the watercraft. As mentioned above, the view of cameras mounted to the tower of a watercraft can sometimes be obstructed by a bimini of the watercraft when the bimini is in a deployed position. In these configurations, the bimini of the watercraft must be lowered in order to use the camera. However, by mounting the camera to the bimini itself, the view of the camera is less likely to be obstructed. In some implementations, the device includes a first shell portion and a second shell portion that are disposable on opposite sides of a portion of the frame of the bimini to clamp the shell of the device onto the frame. This design allows the camera mounting device to be retrofitted onto an existing bimini frame. The design also allows the camera mounting device to be easily removed from the bimini frame when not in use. Various implementations include a bimini mountable camera device. The device includes a first shell portion, a second shell portion, and a camera. The first shell portion has a first end and a second end opposite and spaced apart from the first end of the first shell portion. The first end of the first shell portion defines a central opening extending along a central axis to the second end of the first shell portion. The second end of the first shell portion defines a first groove extending along a first groove axis. The first groove axis extends perpendicular to the central axis. The second shell portion has a first end and a second end opposite and spaced apart from the first end of the second shell portion. 
The first end of the second shell portion is couplable to the second end of the first shell portion. The first end of the second shell portion defines a second groove extending along a second groove axis. The second groove axis extends parallel to the first groove axis when the first end of the second shell portion is coupled to the second end of the first shell portion. The camera is at least partially disposed within the central opening. Various other implementations include a bimini mounted camera system. The system includes a frame for a bimini and a bimini mountable camera device, as described above. The second side of the first shell portion is coupled to the first side of the second shell portion such that a portion of the frame of the bimini is disposed within the first groove and the second groove. FIGS.1-4show a bimini mounted camera system100according to aspects of various implementations. The system100includes a frame110for a bimini and a bimini mountable camera device120. The bimini frame110includes at least one hollow tube. The tube has a circular shape in a plane perpendicular to a longitudinal center line of the tubing. A portion of the tubing of the frame110includes two frame openings112that are circular openings extending from an outer surface of the tube to the hollow center of the tube. The bimini mountable camera device120includes a first shell portion130, a second shell portion150, and a camera170. The first shell portion130has a first end132and a second end134opposite and spaced apart from the first end132of the first shell portion130. The first end132of the first shell portion130defines a central opening136extending along a central axis138to the second end134of the first shell portion130. The camera170is sized such that the camera170is disposable within the central opening136to form a water-tight seal. Since the camera170of the device120is intended to be used on a watercraft, the camera170shown inFIGS.1-4is a waterproof camera. 
However, in some implementations, the camera can be any type of motion capture camera known in the art. The second end134of the first shell portion130defines a first groove140extending along a first groove axis142. The first groove axis142extends perpendicular to the central axis138of the central opening136. The second shell portion150has a first end152and a second end154opposite and spaced apart from the first end152of the second shell portion150. The first end152of the second shell portion150is couplable to the second end134of the first shell portion130, as discussed below. Similar to the first shell portion130, the first end152of the second shell portion150defines a second groove160extending along a second groove axis162. The second groove axis162extends parallel to the first groove axis142when the first end152of the second shell portion150is coupled to the second end134of the first shell portion130. The first groove140and the second groove160each have a semi-circular cross-section as viewed in a plane perpendicular to their respective groove axes142,162. The radius of curvature of the first groove140as viewed in a plane perpendicular to the first groove axis142is the same as the radius of curvature of the second groove160as viewed in a plane perpendicular to the second groove axis162. The radius of curvature of the first groove140and the second groove160is the same as the radius of curvature of the portion of the tube of the frame110of the bimini. Thus, the second end134of the first shell portion130and the first end152of the second shell portion150can be coupled together such that the first groove140and the second groove160form a cylindrical passage, and the portion of the tube of the frame110of the bimini can be disposed within the cylindrical passage to couple the device120to the frame110of the bimini. 
Although the first groove140and the second groove160each have a semicircular cross section as viewed in a plane perpendicular to their respective groove axes142,162, in some implementations the first groove and the second groove each have a cross-sectional shape that corresponds to the portion of the frame of the bimini to which the device is configured to be coupled. In some implementations, one of the first shell portion or the second shell portion does not include a groove, and the other of the first shell portion or the second shell portion defines a single groove. In some implementations, the device only includes a single shell portion that is couplable to a portion of the frame of the bimini. The second end134of the first shell portion130defines a first set of one or more fastener openings144. Each of the fastener openings144of the first set of one or more fastener openings144is a threaded opening. The first end152of the second shell portion150defines a second set of one or more fastener openings164extending to the second end154of the second shell portion150. Each of the fastener openings of the first set of one or more fastener openings144is axially aligned with a different one of the fastener openings of the second set of one or more fastener openings164when the first end152of the second shell portion150is coupled to the second end134of the first shell portion130. The device includes one or more fasteners180. Each of the one or more fasteners180is disposable within a different one of the fastener openings of the second set of one or more fastener openings164such that each of the one or more fasteners180is threadingly engageable with a different one of the fastener openings of the first set of one or more fastener openings144to couple the first shell portion130to the second shell portion150.
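The alignment requirement just described, where each fastener opening of one set pairs with a different opening of the other set, is effectively a one-to-one matching of opening axes. The sketch below is illustrative only; the coordinate representation and tolerance are assumptions, not taken from the figures.

```python
def openings_pair_off(first_set, second_set, tol=1e-6):
    """Check that each fastener opening in the second set is axially
    aligned with a *different* opening in the first set (illustrative;
    openings are modeled as (x, y) positions of their axes)."""
    if len(first_set) != len(second_set):
        return False
    remaining = list(first_set)
    for sx, sy in second_set:
        match = next((p for p in remaining
                      if abs(p[0] - sx) <= tol and abs(p[1] - sy) <= tol),
                     None)
        if match is None:
            return False
        remaining.remove(match)  # "a different one": no opening reused
    return True

# Three openings per set, as shown in FIGS. 1-4 (coordinates invented):
first = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
second = [(20.0, 0.0), (0.0, 0.0), (10.0, 0.0)]
print(openings_pair_off(first, second))  # True
```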
Although the first set of one or more fastener openings144and the second set of one or more fastener openings164shown inFIGS.1-4each include three fastener openings, in some implementations, the first set of one or more fastener openings and the second set of one or more fastener openings can each include any number of one or more fastener openings. Although each of the fastener openings of the first set of one or more fastener openings144includes a threaded opening, in some implementations, each of the fastener openings of the first set of one or more fastener openings can include a through hole extending from the second side of the first shell portion to the first side of the first shell portion. In some implementations, the second set of one or more fastener openings can include threaded openings extending from the first side of the second shell portion toward the first side of the second shell portion. In some implementations, the device does not include fastener openings or fasteners and the first shell portion and the second shell portion are couplable to each other by any other means known in the art. In some implementations the first shell portion and the second shell portion are permanently coupled to each other. In some implementations, the device is permanently coupled to the frame of the bimini. In some implementations the device is directly coupled to the frame of the bimini by fasteners, welding, adhesive, ties, straps, interlocking features, or any other means known in the art. The second end134of the first shell portion130shown inFIGS.1-4defines two locator pin openings146. The two locator pin openings146of the first shell portion130are located such that each of the two locator pin openings146is alignable with a different one of the two frame openings112of the bimini frame110when the portion of the bimini frame110is disposed within the first groove140of the first shell portion130. The device120further includes two locator pins182. 
Each of the locator pins182is at least partially disposed within a different one of the two locator pin openings146such that a portion of each of the two locator pins182extends away from the second end134of the first shell portion130. Thus, the portion of each of the two locator pins182extending away from the second end134of the first shell portion130is aligned with and disposable within a different one of the two frame openings112. Although the first shell portion130shown inFIGS.1-4defines the two locator pin openings146, in some implementations, the first end of the second shell portion defines the locator pin openings. In some implementations, the first shell portion or the second shell portion defines any number of one or more locator pin openings. In some implementations, the device does not include locator pin openings. In some implementations, the locator pins are integrally formed with the first shell portion or the second shell portion. In some implementations, the locator pins are integrally formed with the portion of the frame of the bimini and are disposable within the locator pin openings of the first shell portion or the second shell portion when the device is coupled to the portion of the frame of the bimini. As shown inFIGS.1-4, the second end154of the second shell portion150defines a second central opening156extending to the first end152of the second shell portion150. The second central opening156is configured such that a cable of the camera170disposed within the central opening136of the first shell portion130extends through the second central opening156. A number of example implementations are provided herein. However, it is understood that various modifications can be made without departing from the spirit and scope of the disclosure herein. As used in the specification, and in the appended claims, the singular forms “a,” “an,” “the” include plural referents unless the context clearly dictates otherwise.
The term “comprising” and variations thereof as used herein is used synonymously with the term “including” and variations thereof and are open, non-limiting terms. Although the terms “comprising” and “including” have been used herein to describe various implementations, the terms “consisting essentially of” and “consisting of” can be used in place of “comprising” and “including” to provide for more specific implementations and are also disclosed. Disclosed are materials, systems, devices, methods, compositions, and components that can be used for, can be used in conjunction with, can be used in preparation for, or are products of the disclosed methods, systems, and devices. These and other components are disclosed herein, and it is understood that when combinations, subsets, interactions, groups, etc. of these components are disclosed that while specific reference of each various individual and collective combinations and permutations of these components may not be explicitly disclosed, each is specifically contemplated and described herein. For example, if a device is disclosed and discussed each and every combination and permutation of the device are disclosed herein, and the modifications that are possible are specifically contemplated unless specifically indicated to the contrary. Likewise, any subset or combination of these is also specifically contemplated and disclosed. This concept applies to all aspects of this disclosure including, but not limited to, steps in methods using the disclosed systems or devices. Thus, if there are a variety of additional steps that can be performed, it is understood that each of these additional steps can be performed with any specific method steps or combination of method steps of the disclosed methods, and that each such combination or subset of combinations is specifically contemplated and should be considered disclosed. | 13,488 |
11860519 | DETAILED DESCRIPTION FIG.1schematically shows a prior art method and prior art components used to change over a camera2with a handheld top mount10to a low mode mount12of a Steadicam system. The handheld top mount10includes a camera handle14and a camera top accessory plate16, which is attached to the handle. To change mount types, according to this prior art method, the accessory plate16of the handheld top mount10is screwed onto and off of the camera2. After removal of the handheld top mount10, a mounting plate18is screwed onto the camera top19. The camera top19typically includes threads such that it is configured to receive screws. Finally, the low mode mount12is attached, using fastening elements (not shown). A low mode mount12of a Steadicam system includes, among other elements, a docking block20and various mount adjustment elements22. Mount adjustment elements22include, but are not limited to, a shaft24that is mounted onto the docking block, a control handle26disposed on the shaft24, and at least one gripping element28. The docking block20is attached to the camera mounting plate18, using fasteners (e.g. sliding dovetails). The time it takes to transition from a handheld mount to a low mode mount, using this method and these components, is considerable and typically occurs in four or more stages, as illustrated inFIG.1. In contrast to the traditional transitions used in systems that switch over a camera with a handheld top mount to a low mode mount,FIG.2schematically shows stages of one embodiment of a quick swap top mountable camera mount system100. Generally, there are three (3) stages in this system configuration which allow a changeover from an initial mount type to a subsequent mount type. This system configuration, however, should not be construed as limiting. The system is designed to be user-friendly and promote interchangeability with other elements, which may be incorporated into a quick swap top mountable camera mount system.
Elements of the system configuration100shown inFIG.2generally include a quick swap handheld top mount110(Stages1-2), a quick swap mounting plate118(Stages1-3), and a quick swap low mode mount112(Stage3). Like the traditional method and components, the system100is used to change over a handheld top mount110to a low mode mount112. But, unlike the traditional method and components, fewer pieces are required to be removed and replaced due, in part, to the use of quick swap coupling elements, which allow various types of mounts to quickly and easily decouple from the quick swap mounting plate. In this improved system, the quick swap handheld top mount110includes a handle114and a quick swap camera top accessory plate116, which is connected to the handle. An exemplary quick swap handheld top mount110is shown inFIG.4. The handle114may include a plurality of bores113for overall lightening of the handheld top mount110and an extending arm117. The extending arm117extends from a connecting end121of the handle114and connects the handle to a mount surface of a plate (e.g. the accessory plate), using at least one fastener119. The handle114includes at least one gripping area115, which allows a cameraman to carry the camera by hand. The quick swap camera top accessory plate116also includes one or more quick swap coupling elements140that allow the accessory plate116to mate with the mounting plate118. These coupling elements140include at least one quick release and lock mechanism142that allows the accessory plate116to decouple from the mounting plate118, as further described below. The handle114and the quick swap camera top accessory plate116may be either two separate pieces, which are connected together, or integrated pieces, meaning that the handle and plate are molded or machined from one generally contiguous material. Referring back toFIG.2, Stage1of the system is shown with the handheld top mount110coupled to the quick swap mounting plate118.
In this stage, the quick swap mounting plate118is positioned under the handheld top mount110. Stage2is shown with the handheld top mount110removed from the quick swap mounting plate118. Preferably, the handheld top mount110is slidably engaged with the quick swap mounting plate118, although other methods of engagement may be used. Stage3is shown with the quick swap low mode mount112coupled to the quick swap mounting plate118. FIG.3shows one configuration of a quick swap mounting plate118. This plate is preferably utilized through each stage of the system100. In this configuration, the mounting plate118includes an outer body portion150with a plurality of sides152. Here, the outer body portion150includes four sides—long sides154a,154band short sides156a,156b. The inner body portion158has a plurality of quadrants160, with each quadrant including an opening161a,161b,161c,161d,161e,163a,163b,163c,163d,163efor weight reduction purposes. This configuration of the mounting plate includes ten quadrants. Fewer or more quadrants, however, may be provided. Each quadrant is bounded by sides of the outer body portion, a central strip162, which extends between short sides156a,156band lateral structural elements166a,166b,166c,166d, which extend between long sides154a,154b. The inner body portion also includes a plurality of bores170with each bore having threads for coupling with fasteners and mounting onto a camera top (See, e.g.,FIG.6A). Coupled to long sides154a,154b, respectively, are a primary male alignment element180aand a secondary male alignment element180bwith the secondary male alignment element180bincluding a raised lip123. These male alignment elements and the raised lip123engage or interlock with the accessory plate and other alternative system plates, as further described below. A male alignment element may also be configured with a dovetail-like shape, which includes an angular or curved side surface.
The positioning and configuration of the male alignment elements, however, should not be construed as limiting. One or more male alignment elements may have alternative positioning and configurations, depending, in part, on the overall structural configuration required for positioning on top of the camera. As shown particularly inFIG.4, a quick swap accessory plate116includes an accessory plate platform130, having a generally rectangular shape. The platform130is bounded by a front platform end131a, a rear platform end131b, a first platform side132a, and a second platform side132b. The second platform side132bincludes the quick release and lock mechanism142, which is fitted partially within a cavity133(represented in part by dashed lines shown inFIG.4) on the second platform side132bof the platform. A cover134is positioned partially over the cavity to protect the interior components of the quick release and lock mechanism142. Extending from the platform130are plate extensions135a,135b. Disposed on the platform130and the plate extensions is a plurality of apertures136of various sizes and shapes. These apertures are incorporated into the platform and plate extensions to lighten the overall load of the accessory plate116. The apertures may also provide attachment points for coupling the accessory plate to the camera2and the mounting plate118. The accessory plate platform130also includes a channel129configured to receive the quick swap mounting plate118. Disposed within the channel129are a primary female alignment element182aand a secondary female alignment element182b. These female alignment elements are configured to couple with the quick swap mounting plate118, as shown inFIG.6A. Profiled surfaces192a,192bof each female alignment element respectively form a primary socket194aand a secondary socket194b. The primary socket194ahas a complementary shape to receive the primary male alignment element of the quick swap mounting plate.
Similarly, the secondary socket194bhas a complementary shape to receive the secondary male alignment element180bof the quick swap mounting plate118. FIGS.5and6Bshow how the quick swap mounting and accessory plates may be aligned before assembly. FIGS.6A and6B, in particular, illustrate the transition of the system100from Stage1to Stage2and how a quick swap handheld top mount110and the quick swap mounting plate118may be coupled to and decoupled from a camera2. To initiate the transition and the coupling and decoupling of the quick swap handheld top mount, a quick release and lock mechanism142is activated. Referring particularly toFIGS.5,6A,6B, and12, the quick release and lock mechanism142is activated by a lever144coupled to one or more rotary elements146fitted within the cavity133. A rotary element146acts as an axle by engaging both the lever144and a block148positioned within the cavity133of the quick swap camera top accessory plate116. The rotary element146rotates and moves the block148downwardly and inwardly toward the quick swap mounting plate118. A user activates this motion by moving the lever144from a first position145a(shown inFIGS.5and6B) to a second position145b(shown inFIG.6A). Decoupling of the accessory plate116from the mounting plate118is achieved by activating the quick release and lock mechanism142and then sliding the accessory plate116off of the quick swap mounting plate118(shown inFIG.6B). FIG.7shows one configuration of a quick swap mounting plate118assembled to a quick swap low mode mount112. After the quick swap handheld top mount110is removed from the quick swap mounting plate118, as shown inFIG.6B, the quick swap low mode mount112may be positioned onto the mounting plate118. The quick swap low mode mount112includes, among other elements, a docking block120and mount adjustment elements122.
Mount adjustment elements122include, but are not limited to, a shaft124that is mounted onto the docking block, a control handle126disposed on the shaft124, and at least one gripping element128. A channeled underside125of docking block120preferably interlocks with the raised lip123of the quick swap mounting plate118. Alternatively, or in addition to the raised lip, coupling of the mounting plate118and the docking block120may be achieved using screws and/or other fasteners. FIGS.3-7show one version of a quick swap top mountable camera mount system100, represented by the schematic shown inFIG.2.FIGS.8A-11show alternative components and arrangements, which may be used with quick swap top mountable camera mount systems. These alternative components/arrangements may include a first-type power distribution plate203and an alternate accessory plate303. Either the power distribution plate203, the alternate accessory plate303, or the accessory plate116of a handheld top mount110may be coupled to and positioned atop a quick swap mounting plate118(represented by dashed lines shown inFIG.8A). The power distribution plate203includes a power distribution plate mounting surface204, having a plurality of apertures236of various sizes and shapes. These apertures are incorporated into the mounting surface204to lighten the overall load of the power distribution plate203and/or provide attachment points for coupling the power distribution plate203to other system components. Like the mounting plate118, the mounting surface204includes a raised lip223, which can interlock with either a handheld top mount, an accessory plate alone, or a low mode mount (FIG.8B). Each plate203,303also includes at least one quick release and locking mechanism242, a cavity233, and a cover234and has the same elements as the release and lock mechanism incorporated into the accessory plate116, as shown inFIG.12. These elements include a lever244, one or more rotary elements246, and a block248.
In an alternative arrangement, as shown inFIG.8C, two plates203,303may be stacked. FIGS.9A-10Cshow a power distribution plate302, including a plate mounting surface304on a power distribution plate platform330. The power distribution plate302also includes a plurality of apertures336of various sizes, a raised lip323, and at least one quick release and lock mechanism342with a cover334. Like the accessory plates, the apertures are provided to lighten the overall load and/or provide attachment points for coupling the power distribution plate to other system components. The power distribution plate platform330includes a front platform end331aand a rear platform end331b, a power distribution plate channel329configured to receive the quick swap mounting plate, a primary female alignment element382a, and a secondary female alignment element382b. These female alignment elements are further configured to couple with the quick swap mounting plate118. Profiled surfaces392a,392bof each female alignment element respectively form a primary socket394aand a secondary socket394b. The primary socket394ahas a complementary shape to receive the primary male alignment element180aof the quick swap mounting plate. Similarly, the secondary socket394bhas a complementary shape to receive the secondary male alignment element180bof the quick swap mounting plate118. The power distribution plate also includes a side wall305with a series of formations306. These formations are configured to house power connectors for power distribution. The quick release and lock mechanism342of the power distribution plate203and the accessory plate303incorporates the same elements into a cavity333as the release and lock mechanism incorporated into the accessory plate116. As shown inFIG.12, these elements include a lever344, one or more rotary elements346, and a block348.
Each of these elements facilitates coupling and decoupling of the plates203,303to a quick swap handheld top mount110or a quick swap mounting plate118. FIGS.9A-9Cshow examples of how a mounting plate118, a power distribution plate302, and a quick swap accessory plate116may be aligned and assembled. FIGS.10A-10Cshow examples of how a mounting plate118, a power distribution plate302, a quick swap accessory plate116, and a camera2may be aligned and assembled. FIG.11shows another alternative system arrangement. Here, a handle114is connected directly to a plate mounting surface304, using a fastener119. While embodiments of this invention have been shown and described, it will be apparent to those skilled in the art that many more modifications are possible without departing from the inventive concepts herein. The invention, therefore, is not to be restricted except in the spirit of the following claims.
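As an illustrative summary (not part of the disclosure), the interchangeable arrangements described above can be modeled as stacks built on the quick swap mounting plate118: the handheld top mount, the low mode mount, or either plate203,303may sit atop it, and the two plates may themselves be stacked (FIG.8C). The stacking rule below is a simplified reading of the text, with labels copied from the description.

```python
# Components that may be positioned atop the quick swap mounting
# plate 118, per the arrangements described above (simplified model).
TOP_MOUNTABLE = {
    "handheld top mount 110",
    "low mode mount 112",
    "power distribution plate 203",
    "accessory plate 303",
}

def valid_stack(stack):
    """A stack is the mounting plate followed by one or more quick swap
    components; plates 203 and 303 may themselves be stacked (FIG. 8C).
    Illustrative sketch only, not the full set of disclosed arrangements."""
    if not stack or stack[0] != "mounting plate 118":
        return False
    return all(item in TOP_MOUNTABLE for item in stack[1:])

print(valid_stack(["mounting plate 118", "handheld top mount 110"]))   # True
print(valid_stack(["mounting plate 118",
                   "power distribution plate 203",
                   "accessory plate 303"]))                            # True
print(valid_stack(["handheld top mount 110"]))                         # False
```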
11860520 | DESCRIPTION OF EXEMPLARY EMBODIMENTS Embodiments of the present disclosure will be described below with reference to the drawings. The embodiments described below are each an example of the present disclosure. The present disclosure is not limited to the following embodiments and also encompasses a variety of variations implemented to the extent that the variations do not change the substance of the present disclosure. Each member in the following drawings is so drawn at a scale different from an actual scale as to be large enough to be recognizable in the drawings. In the drawings below, axes XYZ, which are coordinate axes perpendicular to one another, are drawn as required. In this case, the axes XYZ in each of the drawings are so configured that the plane XY coincides with a substantially horizontal plane, and the direction that the arrow of the axis Z indicates, that is, the positive side of the direction Z is substantially opposite the direction in which the gravity acts. The positive side of the direction Z is also referred to as an “upper side,” and the negative side of the direction Z is also referred to as a “lower side.” First Embodiment The present embodiment will be described with reference to a case where the projection-type display apparatus is a projector including three liquid crystal panels as light modulators. 1.1. Projector The configuration of the projector according to the present embodiment will be described with reference toFIG.1.FIG.1is a schematic view showing the configuration of the projector according to the first embodiment. The projector1according to the present embodiment includes a main body2and a projection lens60, as shown inFIG.1. The main body2is accommodated in an exterior enclosure2a. The exterior enclosure2ais made, for example, of a resin material and is the combination of a plurality of members. The projection lens60is so disposed as to protrude out of the exterior enclosure2a. 
The projection lens60is mounted on the main body2via a lens holder70. The main body2includes a light source10, a color separation system20, and a relay system30, which serve as an illumination system, three liquid crystal panels40R,40G, and40B, which serve as the light modulators, a cross dichroic prism50, which serves as a light combining system, and the lens holder70. The projection lens60is attachable to and detachable from the lens holder70. The liquid crystal panels40R,40G, and40B modulate light outputted from the light source10. The projection lens60projects the light modulated by the liquid crystal panels40R,40G, and40B. The light source10includes a light source11, a first lens array12, a second lens array13, a polarization converter14, and a superimposing lens15. The first lens array12and the second lens array13each include lenslets arranged in a matrix. In the projector1, a discharge-type light source is employed as the light source11, but the form of the light source11is not limited thereto. A light emitting diode, a laser, or any other solid-state light source may be employed as the light source11. A light flux outputted from the light source11is divided by the first lens array12into a plurality of minute sub-light fluxes. The sub-light fluxes are superimposed on one another by the second lens array13and the superimposing lens15on a light incident surface of each of the three liquid crystal panels40R,40G, and40B, which are each an illumination target. That is, the first lens array12, the second lens array13, and the superimposing lens15form an optical integration/illumination system that illuminates the liquid crystal panels40R,40G, and40B in a substantially uniform manner with the light flux outputted from the light source11. The polarization converter14converts the non-polarized light outputted from the light source11into polarized light usable by the three liquid crystal panels40R,40G, and40B.
The color separation system20includes a first dichroic mirror21, a second dichroic mirror22, a reflection mirror23, and field lenses24and25. The color separation system20separates the light outputted from the light source10into three color light fluxes that belong to different wavelength regions. The three color light fluxes are substantially red light, substantially green light, and substantially blue light. In the following description, the substantially red light described above is also called R light, the substantially green light described above is also called G light, and the substantially blue light described above is also called B light. The field lens24is disposed on the light incident side of the liquid crystal panel40R. The field lens25is disposed on the light incident side of the liquid crystal panel40G. The first dichroic mirror21transmits the R light and reflects the G light and the B light. The R light having passed through the first dichroic mirror21is reflected off the reflection mirror23, passes through the field lens24, and illuminates the liquid crystal panel40R for R light. The field lens24collects the light reflected off the reflection mirror23, and the liquid crystal panel40R is illuminated with the collected light. The field lens25also collects the light reflected off the second dichroic mirror22, as does the field lens24, and the liquid crystal panel40G is illuminated with the collected light. In this process, the light with which each of the liquid crystal panels40R and40G is illuminated is so set as to be a substantially parallelized light flux. The G light reflected off the first dichroic mirror21is reflected off the second dichroic mirror22, then passes through the field lens25, and illuminates the liquid crystal panel40G for G light. The first dichroic mirror21and the second dichroic mirror22are produced by forming a dielectric multilayer film formed of multiple layers each corresponding to a function on a transparent glass plate. 
The relay system30includes a light-incident-side lens31, a first reflection mirror32, a relay lens33, a second reflection mirror34, and a light-exiting-side lens35as a field lens. The B light, which travels along an optical path longer than those along which the R light and the G light travel, is likely to be a wide light flux. The relay lens33is therefore used to suppress the expansion of the light flux. The B light having exited out of the color separation system20is reflected off the first reflection mirror32and caused by the light-incident-side lens31to converge in the vicinity of the relay lens33. The B light then diverges toward the second reflection mirror34and the light-exiting-side lens35. The light-exiting-side lens35has the same function as that of the field lenses24and25described above, and the liquid crystal panel40B is illuminated with the light having passed through the light-exiting-side lens35. The light that illuminates the liquid crystal panel40B is so set as to be a substantially parallelized light flux. The liquid crystal panels40R,40G, and40B for the color light fluxes convert the color light fluxes incident via the light incident surfaces thereof into light fluxes having intensities according to corresponding image signals and transmit and output the converted light fluxes. The liquid crystal panels40R,40G, and40B are each a transmissive liquid crystal panel. The liquid crystal panels40R,40G, and40B as the light modulators are each not limited to a transmissive liquid crystal panel. Reflective light modulators, such as reflective liquid crystal panels, may be employed as the light modulators. The light modulators may instead be each a digital micromirror device or any other similar device that controls the direction in which the light incident thereon exits for each micromirror that serves as a pixel to modulate the light outputted from the light source11.
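As an illustrative summary of the separation logic described above (the first dichroic mirror21transmits the R light and reflects the G light and the B light, while the second dichroic mirror22reflects the G light and passes the B light into the relay system30), the path of each color light flux can be traced as a simple list. The element labels are copied from the text; the sketch itself is not part of the embodiment.

```python
def color_path(color):
    """Trace of the optical path for each color light flux, following
    the color separation system 20 and relay system 30 described above
    (illustrative summary only)."""
    if color == "R":
        return ["dichroic mirror 21 (transmit)", "reflection mirror 23",
                "field lens 24", "liquid crystal panel 40R"]
    if color == "G":
        return ["dichroic mirror 21 (reflect)", "dichroic mirror 22 (reflect)",
                "field lens 25", "liquid crystal panel 40G"]
    if color == "B":
        # The longer B path runs through the relay system 30,
        # which suppresses expansion of the light flux.
        return ["dichroic mirror 21 (reflect)", "dichroic mirror 22 (transmit)",
                "reflection mirror 32", "light-incident-side lens 31",
                "relay lens 33", "reflection mirror 34",
                "light-exiting-side lens 35", "liquid crystal panel 40B"]
    raise ValueError(f"unknown color: {color}")

for c in ("R", "G", "B"):
    print(c, "->", " -> ".join(color_path(c)))
```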
Further, the configuration in which light modulators are provided for a plurality of color light fluxes is not necessarily employed, and a configuration in which one light modulator modulates the plurality of light fluxes in a time division manner may instead be employed. The cross dichroic prism50combines the color converted light fluxes outputted from the liquid crystal panels40R,40G, and40B with one another. The cross dichroic prism50has an R-light-reflecting dichroic surface51R, which reflects the R light, and a B-light-reflecting dichroic surface51B, which reflects the B light. A dielectric multilayer film that reflects the R light is disposed on the R-light-reflecting dichroic surface51R. A dielectric multilayer film that reflects the B light is disposed on the B-light-reflecting dichroic surface51B. The R-light-reflecting dichroic surface51R and the B-light-reflecting dichroic surface51B are hereinafter also simply referred to as reflecting dichroic surfaces51R and51B. The dielectric multilayer film that reflects the R light and the dielectric multilayer film that reflects the B light are so disposed as to form a substantially X-letter shape in the plan view along the direction Z. The reflecting dichroic surfaces51R and51B combine the converted R light, G light, and B light with one another to produce combined light that displays a color image. The combined light produced by the cross dichroic prism50exits toward the projection lens60. The projection lens60is mounted on the main body2via the lens holder70. The lens holder70is attached to a structural member that is not shown but is part of the main body2. Further, the above-mentioned components provided in the main body2are similarly attached to the structural member described above. Therefore, when the projection lens60is held by the lens holder70and mounted on the main body2, the main body2and the projection lens60are positioned with respect to each other.
The positional relationship between the main body2and the projection lens60may be adjustable via a position adjustment mechanism. The lens holder70will be described later in detail. The combined light having exited out of the main body2is projected as image light via the projection lens60on a projection target that is not shown, such as a screen. The configuration of the projection lens will next be described with reference toFIGS.2and3.FIG.2is a perspective view showing the exterior appearance of the projection lens.FIG.3is a side view diagrammatically showing the configuration of the projection lens. InFIG.3, the components provided in the main body2excluding the exterior enclosure2a, the cross dichroic prism50, and the lens holder70are omitted. The projection lens60is a bending-type projection lens and includes an optical system bent in the form of a substantially U-letter shape in the plan view along the direction X, as shown inFIG.2. A cylindrical section62is provided at one end of the optical system described above in the projection lens60. When the projection lens60is mounted on the main body2, the cylindrical section62is inserted into the main body2. A lens cover64, which covers a second lens that will be described later, is provided at the other end of the optical system described above in the projection lens60. The lens cover64is openable and closable, andFIG.2shows the state in which the lens cover64is closed. The lens cover64opens and allows the image light to exit when the projection lens60is used, and the lens cover64closes to protect the second lens when the projection lens60is not used. The lens cover64may instead be so configured as to be attachable to and detachable from the projection lens60. The projection lens60deflects the combined light having exited out of the cross dichroic prism50toward the positive side of the direction Y in a two-stage sequential manner, as shown inFIG.3. 
The combined light is therefore reversed toward the negative side of the direction Y out of the projector1and enlarged and displayed as a displayed image on the projection target that is not shown, such as a screen. In detail, the projection lens60includes a first lens61, a second lens63, a first reflection mirror65, a second reflection mirror67, and other components.FIG.3shows only the first lens61closest to the demagnifying side and the second lens63closest to the magnifying side, and the other lenses are omitted. The thus configured projection lens60has a complicated configuration as compared with a non-bending-type projection lens, so that the weight of the projection lens60is likely to increase. Although it will be described later in detail, the projection lens60is mounted on the main body2with the cylindrical section62of the projection lens60inserted into the lens holder70. The combined light having exited out of the cross dichroic prism50in the main body2toward the positive side of the direction Y is incident on an end surface of the cylindrical section62that is the end surface facing the negative side of the direction Y. The combined light having entered the projection lens60travels via the first lens61and reaches the first reflection mirror65. The combined light described above is then reflected off the first reflection mirror65, so that the path of the combined light is bent toward the positive side of the direction Z, and the combined light reaches the second reflection mirror67. The combined light described above is then reflected off the second reflection mirror67, so that the path of the combined light is bent toward the negative side of the direction Y, and the combined light is incident on the second lens63. The second lens63enlarges the light flux incident thereon from the positive side of the direction Y and causes the enlarged light flux to exit toward the negative side of the direction Y.
The combined light incident on the second lens63is then enlarged and projected as the image light in tilted projection not only toward the negative side of the direction Y but also toward the side above the projector1. The projection lens60can shorten the focal length of the projector1, unlike a non-bending-type projection lens. Using the bending-type projection lens60therefore allows projection from a position close to the projection target. It is noted that the bending-type projection lens60does not necessarily have the configuration described above as long as the projection lens60can bend the optical path of the combined light having exited out of the main body2and output the combined light along the bent optical path. Further, the projection lens mounted on the main body2is not limited to a bending type, and the type of the projection lens can be selected as appropriate in accordance with the application of the projector1. 1.2. Lens Holder The configuration of the lens holder70in the present embodiment and the state in which the projection lens60is held will be described with reference toFIGS.4A,4B, and5.FIG.4Ais a diagrammatic view showing the state in which the lens holder and the projection lens are separate from each other.FIG.4Bis a diagrammatic view showing the state in which the lens holder holds the projection lens.FIG.5is an exploded view showing the configuration of the lens holder. InFIG.4A, part of the projection lens60is omitted. InFIGS.4A and4B, the components of the main body2excluding the lens holder70are omitted. Further,FIG.5shows the lens holder with a first lens holding mechanism exploded. The lens holder70is a substantially window-frame-shaped member and is disposed in parallel to the plane XZ, as shown inFIG.4A. The cylindrical section62of the projection lens60is inserted into the interior of the substantially window-frame-shaped lens holder70toward the negative side of the direction Y, as indicated by the arrow.
Although not shown, an opening through which the cylindrical section62is inserted is provided in the exterior enclosure2aof the main body2in a position corresponding to the lens holder70. The cylindrical section62is inserted through the inner opening of the lens holder70, as shown inFIG.4B. In this process, the projection lens60is held by the lens holder70via first and second lens holding mechanisms that will be described later. The projection lens60is thus mounted on the main body2. The lens holder70includes a first lens holding mechanism710and a second lens holding mechanism730, which hold the projection lens60, as shown inFIG.5. The second lens holding mechanism730is a lens holding mechanism different from the first lens holding mechanism710and holds the projection lens60independently of the first lens holding mechanism710. The first lens holding mechanism710and the second lens holding mechanism730are disposed in the presented order side by side in the direction toward the positive side of the direction Y. In other words, the first lens holding mechanism710is located on the side closer to the cross dichroic prism50shown inFIG.1in the direction Y than the second lens holding mechanism730. The arrangement of the first lens holding mechanism710and the second lens holding mechanism730is not limited to the arrangement described above, and the second lens holding mechanism730may instead be located on the side closer to the cross dichroic prism50than the first lens holding mechanism710. The first lens holding mechanism710includes a ring section711, a pivotal section713, and a base section715. The base section715, the pivotal section713, and the ring section711are disposed in the presented order in the direction toward the positive side of the direction Y. The first lens holding mechanism710is a lens holding mechanism employing what is called a spigot method. 
The ring section711is a substantially ring-shaped member and has an opening having an inner diameter substantially equal to the outer diameter of the cylindrical section62of the projection lens60. The cylindrical section62is therefore insertable through the opening of the ring section711. The inner edge of the opening of the ring section711is provided with four cutouts711a. Along the circumference of the opening of the ring section711, two of the cutouts711aface each other, and the other two face each other. The ring section711is fixed to the base section715, for example, with screws. The pivotal section713is pivotable along the outer circumference of the cylindrical section62of the projection lens60. The pivotal section713includes a lever section714and is so shaped that the lever section714is added to a substantially-ring-shaped member. The lever section714causes the pivotal section713to pivot. The substantially-ring-shaped member described above has an opening having an inner diameter substantially equal to the outer diameter of the cylindrical section62, whereby the cylindrical section62of the projection lens60is insertable through the opening. The inner edge of the opening of the substantially-ring-shaped member described above is provided with four cutouts713a. Along the circumference of the opening of the pivotal section713, two of the cutouts713aface each other, and the other two face each other. The pivotal section713is sandwiched between the ring section711and the base section715but is not fixed thereto. Operating the lever section714therefore allows the pivotal section713to pivot around an imaginary center of the inner circumferential circle of the ring section711. The lever section714is so disposed as to protrude out of the exterior enclosure2aof the main body2shown inFIG.1toward the exterior thereof. The lever section714can therefore be operated by an operator outside the exterior enclosure2a. 
The base section715is a frame body, and a cylindrical member716is provided inside the frame body. The cylindrical member716is a substantially cylindrical member, the inner diameter of which is substantially equal to the outer diameter of the cylindrical section62of the projection lens60. The cylindrical section62is therefore insertable into the cylindrical member716. The inner surface of the cylindrical member716is provided with four cutouts716a. Along the circumference of the inner surface of the cylindrical member716, two of the cutouts716aface each other, and the other two face each other. The base section715supports the ring section711and is fixed to a structural body that is not shown, such as a frame, but is part of the main body2. The cutouts711a,713a, and716ahave substantially the same shapes when viewed from the positive side of the direction Y. How the first lens holding mechanism710holds the projection lens60will be described later. The first lens holding mechanism710does not necessarily employ the spigot method described above and may employ any other known method. The second lens holding mechanism730includes holding sections731and732, an upper support section733and a lower support section734as a pair of support sections, an upper frame section737and a lower frame section736, guide sections738and739, a shaft section735, and a dial740as a switching section. The upper frame section737and the lower frame section736are each a substantially quadrangular columnar member, with the upper frame section737disposed on the upper side and the lower frame section736disposed on the lower side. The upper frame section737and the lower frame section736are so configured that the height direction of the substantially quadrangular columnar members coincides with the direction X and are so paired with each other as to face each other in the direction Z. The guide sections738and739are each a circular columnar member. 
The guide section738is connected to right end portions of the upper frame section737and the lower frame section736when viewed from the positive side of the direction Y. The guide section739is connected to left end portions of the upper frame section737and the lower frame section736when viewed from the positive side of the direction Y. The guide sections738and739are so configured that the height direction of the circular columnar members coincides with the direction Z and are so paired with each other as to face each other in the direction X. The upper frame section737and the lower frame section736and the guide sections738and739form a substantially quadrangular frame body in the plan view along the direction Y. The frame body is a structural body in the second lens holding mechanism730. The frame body is fixed to a structural body that is not shown, such as a frame, but is part of the main body2. The upper support section733and the lower support section734are each a substantially quadrangular columnar member, with the upper support section733disposed on the upper side and the lower support section734disposed on the lower side. That is, the upper support section733and the lower support section734extend in the direction X and are so paired with each other as to face each other in the direction Z. The upper support section733is located below the upper frame section737, and the lower support section734is located above the lower frame section736. A guided section741is provided at the right end of the upper support section733, and a guided section743is provided at the left end of the upper support section733when viewed from the positive side of the direction Y. A guided section742is provided at the right end of the lower support section734, and a guided section744is provided at the left end of the lower support section734when viewed from the positive side of the direction Y. 
The guided sections741,742,743, and744are each a cylindrical member, and the inner diameter of the guided sections741,742,743, and744is substantially equal to the outer diameter of the guide sections738and739. The guided sections741,742,743, and744are so disposed that the height direction of the cylindrical members coincides with the direction Z. The guide section738is inserted into the guided sections741and742, and the guide section739is inserted into the guided sections743and744. The guided sections741and743are therefore guided by the guide sections738and739, whereby the upper support section733is movable toward the positive and negative sides of the direction Z. Similarly, the guided sections742and744are guided by the guide sections738and739, whereby the lower support section734is movable toward the positive and negative sides of the direction Z. The upper support section733and the lower support section734include the pair of holding sections731and732, which sandwich the projection lens60. In detail, the upper support section733supports the holding section731so disposed as to face downward, and the lower support section734supports the holding section732so disposed as to face upward. The holding section731includes a plurality of protrusions731a, which protrude downward. Each set of two of the protrusions731aforms a pair. The holding section732includes a plurality of protrusions732a, which protrude upward. Each set of two of the protrusions732aforms a pair. To hold the projection lens60, the protrusions731aand732aprotrude toward the projection lens60and engage with a plurality of recesses, which will be described later, of the projection lens60. The dial740, which is caused to pivot, moves the upper support section733and the lower support section734to switch the state in which the holding sections731and732sandwich the projection lens60and the state in which the projection lens60sandwiched between the holding sections731and732is released from one to the other. 
The dial740is a cylindrical knob and is disposed on the upper left of the above-mentioned frame body formed of the upper frame section737, the lower frame section736, and the guide sections738and739when viewed from the positive side of the direction Y. The dial740is provided with a shaft section735, which extends downward from the dial740. The shaft section735is disposed substantially in parallel to the direction Z oriented toward both the positive and negative sides thereof, and the center axis of the dial740substantially coincides with the center axis of the shaft section735. The dial740is therefore configured to be pivotable around the shaft section735as the axis of rotation. The shaft section735is provided with a male thread. The male thread is so formed that the threaded direction on one side of the lengthwise middle point of the shaft section735differs from that on the other side. Specifically, the male thread on the portion above the middle point described above is a left-handed thread, and the male thread on the portion below the middle point described above is a right-handed thread. The shaft section735is so disposed as to pass through the upper frame section737, the lower frame section736, the upper support section733, and the lower support section734and is juxtaposed with the guide section739on the right thereof when viewed from the positive side of the direction Y. The shaft section735is not fixed to the upper frame section737or the lower frame section736. A female thread733ais provided on the inner surface of a through hole, through which the shaft section735passes, in the upper support section733. The female thread733ais a left-handed thread that is engageable with the male left-handed thread on the shaft section735. A female thread734ais provided on the inner surface of a through hole, through which the shaft section735passes, in the lower support section734.
The female thread734ais a right-handed thread engageable with the male right-handed thread on the shaft section735. The configuration described above causes the upper support section733and the lower support section734to move along the direction Z as a first direction when the dial740is caused to pivot. That is, the distance in the direction Z between the upper support section733and the lower support section734, which form a pair, can be so changed as to increase or decrease. In detail, when the dial740is caused to pivot clockwise when viewed from above, the shaft section735also rotates in the same direction. In this case, the upper support section733, which has the female thread733a, moves downward, and the lower support section734, which has the female thread734a, moves upward. In this process, the upper support section733and the lower support section734are guided by the guide sections738and739. The distance in the direction Z between the upper support section733and the lower support section734decreases with the positional relationship between the two support sections, which extend along the direction X in parallel to each other, maintained. That is, the gap between the holding sections731and732can be narrowed by causing the dial740to pivot clockwise when viewed from above. The projection lens60is thus held. On the other hand, when the dial740is caused to pivot counterclockwise when viewed from above, the shaft section735also rotates in the same direction. In this case, the distance in the direction Z between the upper support section733and the lower support section734increases with the positional relationship between the two support sections, which extend along the direction X in parallel to each other, maintained. That is, the gap between the holding sections731and732can be widened by causing the dial740to pivot counterclockwise when viewed from above. The state in which the projection lens60is held is thus released. 
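Because the two thread halves are cut in opposite hands, one turn of the dial740moves the upper support section733and the lower support section734toward (or away from) each other by one thread pitch each, so the gap between them changes at twice the pitch per turn. A minimal arithmetic sketch of this relationship (the pitch and gap values are illustrative assumptions, not taken from the embodiment):

```python
def holder_gap(initial_gap_mm, pitch_mm, dial_turns):
    """Gap between the upper and lower support sections after turning the dial.

    The shaft carries a left-handed thread above its lengthwise midpoint
    and a right-handed thread below it, so a clockwise turn (positive
    dial_turns) drives the upper support down and the lower support up
    by one pitch each: the gap closes at twice the pitch per turn.
    """
    return initial_gap_mm - 2 * pitch_mm * dial_turns

# Illustrative values: 40 mm starting gap, 1.5 mm thread pitch.
print(holder_gap(40.0, 1.5, 2))   # two clockwise turns -> 34.0 mm
print(holder_gap(40.0, 1.5, -2))  # two counterclockwise turns -> 46.0 mm
```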
How the second lens holding mechanism730holds the projection lens60will be described later in detail. The dial740is so disposed as to be exposed out of the exterior enclosure2aof the main body2shown inFIG.1. The dial740can therefore be operated by the operator outside the exterior enclosure2a. In the present embodiment, the dial740is presented as the switching section by way of example, and the switching section is not limited to a specific component and may be any component that can move the upper support section733and the lower support section734. Other employable examples of the switching section may include a lever-shaped component or a slidable component. The second lens holding mechanism730does not necessarily have the configuration described above, but employing the configuration described above is preferable because it allows simple attachment and detachment of the projection lens60. 1.3. How First Lens Holding Mechanism Holds Projection Lens How the first lens holding mechanism710holds the projection lens60will be described with reference toFIGS.6,7A, and7B.FIG.6is a diagrammatic view showing the state in which the first lens holding mechanism is separate from the projection lens.FIG.7Ais an enlarged view showing the state of the first lens holding mechanism that allows the projection lens to be attached thereto and detached therefrom.FIG.7Bis an enlarged view showing the state of the first lens holding mechanism that holds the projection lens.FIG.6shows the first lens holding mechanism710and part of the projection lens60, but the other components are omitted.FIGS.7A and7Bare enlarged views of the area A shown inFIG.6. The projection lens60includes engagement counterparts601a, as shown inFIG.6. The engagement counterparts601aare provided at the outer circumference of a base portion of the cylindrical section62.
The engagement counterparts601aare formed of four engagement counterparts disposed in correspondence with the four cutouts711aof the ring section711, and the engagement counterparts601aengage with the cutouts711awhen the projection lens60is inserted into the first lens holding mechanism710. The engagement counterparts601aare each a projection-shaped portion and are engageable with the cutouts716aof the base section715of the first lens holding mechanism710. In the first lens holding mechanism710, the lever section714, when it is operated, causes the pivotal section713to pivot, so that the positions of the cutouts713amove. In the state in which the projection lens60is attachable to or detachable from the first lens holding mechanism710, the four cutouts713aof the pivotal section713and the four cutouts716aof the base section715are located in correspondence with the four cutouts711aof the ring section711, as shown inFIG.7A. That is, the positions of the four cutouts711a, cutouts713a, and cutouts716acoincide with one another when viewed from the positive side of the direction Y. Therefore, in the state of the first lens holding mechanism710described above, when the cylindrical section62is inserted to mount the projection lens60, the engagement counterparts601apass through the cutouts711aand713aand engage with the cutouts716a. Since the ring section711is fixed to the base section715, the cutouts716aare always located in correspondence with the cutouts711a. That is, to mount the projection lens60, when the lever section714of the pivotal section713is operated, the pivotal section713pivots, so that the cutouts713amove to the positions corresponding to the cutouts711aand716a. To separate the projection lens60from the first lens holding mechanism710, the cutouts713aare similarly moved to the positions corresponding to the cutouts711aand716a.
The engagement counterparts601atherefore pass through the cutouts711aand713a, whereby the projection lens60can be pulled out toward the positive side of the direction Y. In the first lens holding mechanism710, when the lever section714is operated, the pivotal section713pivots to move the positions of the cutouts713afrom the state described above, as shown inFIG.7B. At this point, the positions of the cutouts713aare shifted from the positions of the cutouts711aand716awhen viewed from the positive side of the direction Y. In other words, the pivotal section713excluding the cutouts713aoverlaps with the cutouts711aand716a. When the projection lens60is inserted into the first lens holding mechanism710in the state shown inFIG.7A, the cutouts716aengage with the engagement counterparts601a. Thereafter, when the lever section714is so operated that the state shown inFIG.7Bis achieved, the pivotal section713having pivoted engages with the engagement counterparts601a. The movement of the projection lens60relative to the first lens holding mechanism710is therefore restricted, and the projection lens60is held by the first lens holding mechanism710. Further, the lever section714is operated in the opposite direction to cause the state shown inFIG.7Bto transition to the state shown inFIG.7A, whereby the projection lens60can be separated from the first lens holding mechanism710. 1.4. 
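The spigot-type lock of the first lens holding mechanism710can be modeled as a simple alignment test: the engagement counterparts601apass through only while the pivotal section's cutouts713aline up with the fixed cutouts711aand716a; at any other pivot angle the pivotal section713overlaps the counterparts and restricts the lens. The sketch below assumes the four cutouts repeat at 90-degree intervals and uses a hypothetical alignment tolerance; both numbers are illustrative, not stated in the embodiment:

```python
def lens_removable(pivot_angle_deg, tolerance_deg=2.0):
    """Whether the engagement counterparts can pass the pivotal section.

    With four cutouts assumed at 90-degree intervals, the pivotal
    section's cutouts coincide with the fixed cutouts of the ring and
    base sections only near multiples of 90 degrees; at any other pivot
    angle the section overlaps the counterparts and locks the lens.
    """
    offset = pivot_angle_deg % 90  # cutout pattern repeats every 90 degrees
    return min(offset, 90 - offset) <= tolerance_deg

print(lens_removable(0))   # cutouts aligned: lens can be attached or detached
print(lens_removable(45))  # lever operated: counterparts captured, lens held
```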
How Second Lens Holding Mechanism Holds Projection Lens How the second lens holding mechanism730holds the projection lens60will be described with reference toFIGS.8A,8B,9A, and9B.FIG.8Ashows the exterior appearance of the base portion of the cylindrical section in the projection lens viewed from above.FIG.8Bshows the exterior appearance of the base portion of the cylindrical section in the projection lens viewed from below.FIG.9Ais a cross-sectional view showing the state of the second lens holding mechanism that allows the projection lens to be attached thereto and detached therefrom.FIG.9Bis a cross-sectional view showing the state of the second lens holding mechanism that holds the projection lens. InFIGS.8A and8B, part of the projection lens60is omitted.FIGS.9A and9Bshow the cross section of the projection lens60taken along the plane XZ containing the line B-B′ shown inFIG.8Aand the second lens holding mechanism730. The projection lens60includes a plurality of recesses602aand602b, as shown inFIGS.8A and8B. In detail, the recesses602aare provided in an upper portion of the base portion of the cylindrical section62of the projection lens60, and the recesses602bare provided in a lower portion of the base portion. The recesses602aand602bare each recessed toward the center axis of the cylindrical section62. The recesses602aand602bare shifted from the engagement counterparts601atoward the positive side of the direction Y. In the present embodiment, the number of recesses602ais two, and the number of recesses602bis four. The recesses602aare located in correspondence with the protrusions731aof the second lens holding mechanism730and engage with the protrusions731a. The recesses602bare located in correspondence with the protrusions732aof the second lens holding mechanism730and engage with the protrusions732a. Wall sections603aare provided on one side of the plurality of recesses602athat is the side facing the negative side of the direction Y. 
Wall sections603bare provided on one side of the plurality of recesses602bthat is the side facing the negative side of the direction Y. In the state in which the projection lens60is attachable to or detachable from the second lens holding mechanism730, a large distance in the direction Z between the protrusions731aand the protrusions732ais provided, as shown inFIG.9A. In detail, the distance in the direction Z between the protrusions731aand the protrusions732ais so provided as to be greater than the distance between the recesses602aand the recesses602b. Therefore, when the projection lens60is mounted, the cylindrical section62is inserted with no interference of the recesses602aand602band the wall sections603aand603bwith the protrusions731aand732a. At this point, the wall sections603aand603bare shifted from the protrusions731aand732atoward the negative side of the direction Y, and the recesses602aand602boverlap with the protrusions731aand732awhen viewed along the direction Z. In the second lens holding mechanism730, when the dial740is caused to pivot, the gap between the holding section731and the holding section732is changed, as shown inFIG.9B. That is, when the dial740is operated, the distance in the direction Z between the protrusions731aand the protrusions732ais changed. Therefore, when the dial740is so caused to pivot that the distance in the direction Z between the protrusions731aand the protrusions732adecreases, the protrusions731aengage with the recesses602a, and the protrusions732aengage with the recesses602b. At the same time, the wall sections603aand603binterfere with the front ends of the protrusions731aand732a. The movement of the projection lens60relative to the second lens holding mechanism730is thus restricted, whereby the projection lens60is held by the second lens holding mechanism730. 
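The clamping logic of the second lens holding mechanism730reduces to a comparison of two distances: the cylindrical section62slides in freely while the protrusion-to-protrusion distance exceeds the recess-to-recess distance, and is held once the dial740closes that gap. A hypothetical sketch of this condition (the state names and distance values are illustrative, not from the embodiment):

```python
def clamp_state(protrusion_gap_mm, recess_gap_mm):
    """State of the clamp formed by the holding sections (illustrative).

    While the distance between the upper and lower protrusions exceeds
    the distance between the recesses on the cylindrical section, the
    lens can be inserted or withdrawn; once the dial narrows the
    protrusion gap to the recess gap, the protrusions seat in the
    recesses and the wall sections block axial movement.
    """
    return "attachable" if protrusion_gap_mm > recess_gap_mm else "held"

print(clamp_state(36.0, 34.0))  # dial open: attachable
print(clamp_state(34.0, 34.0))  # dial closed: held
```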
Therefore, to cause the second lens holding mechanism730to hold the projection lens60, the projection lens60is inserted into the second lens holding mechanism730in the state shown inFIG.9A. The dial740is then caused to pivot clockwise when viewed from above to achieve the state shown inFIG.9B. On the other hand, to separate the projection lens60from the second lens holding mechanism730, the dial740is caused to pivot counterclockwise when viewed from above from the state shown inFIG.9Bto achieve the state shown inFIG.9A. The projection lens60is then pulled out of the second lens holding mechanism730toward the positive side of the direction Y to separate the projection lens60from the second lens holding mechanism730. In the present embodiment, the configuration in which two recesses602aand four recesses602bare provided has been presented by way of example, but the number of recesses602aand602band the number of protrusions731aand732acorresponding thereto are not limited to specific numbers. Further, the shapes of the recesses602aand602band the protrusions731aand732aare not limited to those presented by way of example. Moreover, in the present embodiment, the holding section731and the holding section732sandwich the projection lens60from above and below, but the configuration described above is not necessarily employed. The projection lens60may instead be sandwiched from right and left. As described above, the projector1as the projection-type display apparatus according to the present embodiment can provide the following effects. Even when the weight of the projection lens60increases, the lens holder70can handle the increased load. In detail, the lens holder70includes the first lens holding mechanism710and the second lens holding mechanism730as two different holding mechanisms.
Therefore, even when a large-weight projection lens60is mounted on the projector1, the load caused by the projection lens60is distributed to the first lens holding mechanism710and the second lens holding mechanism730. That is, a projector1including a holding mechanism that reduces the load on the lens holder70can be provided. Since the projection lens60is attachable to and detachable from the lens holder70, a plurality of kinds of projection lens60can be used separately in accordance with the application of the projector1. During detachment of the projection lens60from the projector1, the projector1prevents the projection lens60from falling off the projector1. In detail, the projection lens60is held by the lens holder70formed of two independent holding mechanisms. Therefore, even when one of the first lens holding mechanism710and the second lens holding mechanism730releases the projection lens60, the other keeps holding the projection lens60. The projector1therefore prevents the projection lens60from falling off the projector1due to wrong operation or carelessness in the detachment operation. In the first lens holding mechanism710, the pivotal section713pivots and engages with the engagement counterparts601aof the projection lens60. For example, a bayonet-based lens holding mechanism requires the projection lens60itself to pivot, whereas the first lens holding mechanism710can hold the projection lens60with no pivotal movement of the projection lens60. A large-weight projection lens60can therefore be readily mounted on the projector1. Further, the attachment and detachment of the projection lens60can be readily performed as compared with a configuration in which the projection lens60is fixed to the lens holder70, for example, with screws. In the second lens holding mechanism730, pivotal movement of the dial740causes the holding sections731and732to sandwich the projection lens60, whereby the projection lens60is held by the lens holder70.
For example, a bayonet-based lens holding mechanism requires the projection lens60itself to pivot, whereas the second lens holding mechanism730can hold the projection lens60with no pivotal movement of the projection lens60. Similarly, the projection lens60can be detached from the lens holder70by causing the dial740to pivot to release the projection lens60sandwiched between the holding sections731and732. A large-weight projection lens60can therefore be readily attached to and detached from the projector1. Further, the attachment and detachment of the projection lens60can be readily performed as compared with a configuration in which the projection lens60is fixed to the lens holder70, for example, with screws. The plurality of protrusions731aof the holding section731fit into the plurality of recesses602aof the projection lens60, and the plurality of protrusions732aof the holding section732fit into the plurality of recesses602bof the projection lens60. The projection lens60is thus sandwiched between the holding sections731and732. The second lens holding mechanism730can therefore hold the projection lens60more reliably than in a case where the holding sections731and732each have one protrusion731aand one protrusion732a, and the projection lens60has one recess602aand one recess602b. Second Embodiment The present embodiment will be described with reference to the case where the projection-type display apparatus is a projector including three liquid crystal panels as the light modulators. The projector according to the present embodiment differs from the projector1according to the first embodiment in terms of the configuration of the projection lens, and the main body2in the present embodiment has the same configuration as that of the main body2in the first embodiment. Therefore, the same constituent portion as that in the first embodiment has the same reference character, and no redundant description thereof will be made. 2.1.
Projection Lens The configuration of the projection lens according to the present embodiment will be described with reference toFIG.10.FIG.10is a perspective view showing the exterior appearance of the projection lens according to the second embodiment.FIGS.1,4A, and5used in the first embodiment will also be referred to for ease of description. A projection lens80according to the present embodiment is a straight-type projection lens and includes an optical system extending in the direction Y, as shown inFIG.10. A cylindrical section82is provided at an end of the above-mentioned optical system of the projection lens80that is the end facing the negative side of the direction Y. A lens81is disposed at an end of the optical system described above that is the end facing the positive side of the direction Y. A base portion of the cylindrical section82of the projection lens80has the same shape as that of the base portion of the cylindrical section62of the projection lens60, which is shown inFIG.4A, in the first embodiment. The projection lens80can therefore be held by the first lens holding mechanism710and the second lens holding mechanism730shown inFIG.5. In other words, the projection lens80is mountable on the main body2shown inFIG.1. When the projection lens80is mounted on the main body2, the cylindrical section82is inserted into the main body2, and the projection lens80is held by the lens holder70. In the state in which the projection lens80is mounted on the main body2, the combined light having exited out of the cross dichroic prism50in the main body2toward the positive side of the direction Y is incident on an end surface of the cylindrical section82that is the end surface on the negative side of the direction Y. The combined light travels via the optical system of the projection lens80and is projected as the image light via the lens81toward the positive side of the direction Y. 
That is, the projection lens 80 projects the light modulated by the liquid crystal panels 40R, 40G, and 40B shown in FIG. 1 in the main body 2. The projection lens 80 includes engagement counterparts 801a, which engage with the pivotal section 713 when the pivotal section 713 in the first lens holding mechanism 710 pivots. The engagement counterparts 801a are provided at the outer circumference of a base portion of the cylindrical section 82. Four engagement counterparts 801a are provided, disposed in correspondence with the four cutouts 711a of the ring section 711 of the first lens holding mechanism 710 into which the projection lens 80 is inserted. The engagement counterparts 801a are each a projection-shaped portion and are engageable with the cutouts 716a of the base section 715 of the first lens holding mechanism 710. To cause the first lens holding mechanism 710 to hold the projection lens 80, the lever section 714 is operated to cause the cutouts 711a, 713a, and 716a to coincide with one another when viewed in the direction Y. The projection lens 80 is then inserted into the first lens holding mechanism 710. In this process, the engagement counterparts 801a pass through the cutouts 711a and 713a and engage with the cutouts 716a. Thereafter, when the lever section 714 is operated, the pivotal section 713 pivots, so that the pivotal section 713 engages with the engagement counterparts 801a. The movement of the projection lens 80 relative to the first lens holding mechanism 710 is thus restricted, whereby the projection lens 80 is held by the first lens holding mechanism 710. Operating the lever section 714 in the opposite direction releases the engagement between the pivotal section 713 and the engagement counterparts 801a, whereby the projection lens 80 can be separated from the first lens holding mechanism 710. The projection lens 80 has a plurality of recesses 802a located in correspondence with the plurality of protrusions 731a in the second lens holding mechanism 730 shown in FIG. 5.
The plurality of protrusions731afit into the plurality of recesses802a, respectively. The projection lens80further has a plurality of recesses802blocated in correspondence with the plurality of protrusions732ain the second lens holding mechanism730. The plurality of protrusions732afit into the plurality of recesses802b, respectively. To cause the second lens holding mechanism730to hold the projection lens80, the dial740is operated to widen the gap between the holding section731and the holding section732. In this state, the projection lens80is inserted into the second lens holding mechanism730. The dial740is then caused to pivot clockwise when viewed from above to narrow the gap between the holding section731and the holding section732, so that the protrusions731aengage with the recesses802aand the protrusions732aengage with the recesses802b. At the same time, wall sections803aand803binterfere with the front ends of the protrusions731aand732a. The movement of the projection lens80relative to the second lens holding mechanism730is thus restricted, whereby the projection lens80is held by the second lens holding mechanism730. On the other hand, to separate the projection lens80from the second lens holding mechanism730, the dial740is caused to pivot counterclockwise when viewed from above to widen the gap between the holding section731and the holding section732. The projection lens80is then pulled out of the second lens holding mechanism730toward the positive side of the direction Y to separate the projection lens80from the second lens holding mechanism730. As described above, the projector as the projection-type display apparatus according to the present embodiment can provide the same effects as those provided by the first embodiment. Third Embodiment The present embodiment will be described with reference to the case where the projection-type display apparatus is a projector including three liquid crystal panels as the light modulators. 
In the projector according to the present embodiment, the position adjustment mechanism (lens shift mechanism) that adjusts the position of the projection lens60is added to the main body2of the projector1according to the first embodiment. Therefore, the same constituent portion as that in the first embodiment has the same reference character, and no redundant description thereof will be made. The projection lens80according to the embodiment described above is also mountable on the main body in the present embodiment. 3.1. Lens Shift Mechanism The configuration of the lens shift mechanism according to the present embodiment will be described with reference toFIGS.11and12.FIG.11is a perspective view showing the exterior appearance of the lens shift mechanism according to the third embodiment.FIG.12is an exploded view showing the configuration of the lens shift mechanism.FIGS.11and12show also the first lens holding mechanism710and the second lens holding mechanism730. InFIG.12, only one of first and second movement mechanisms, which are paired with each other, is labeled with a reference character, and the other is not. The main body in the present embodiment includes a lens shift mechanism90as the position adjustment mechanism that moves the position of the lens holder70relative to the liquid crystal panels40R,40G, and40B shown inFIG.1. The lens shift mechanism90supports the lens holder70, as shown inFIG.11. The lens shift mechanism90relatively moves the lens holder70in a direction perpendicular to the direction Y. The position of the projection lens60shown inFIG.1and held by the lens holder70can thus be adjusted. The lens shift mechanism90includes a substrate910and a moving section920, as shown inFIG.12. The substrate910, the moving section920, and the lens holder70are disposed in the presented order in the direction toward the positive side of the direction Y. 
In other words, the substrate910is located closer to the cross dichroic prism50shown inFIG.1than the other two components. The substrate910is a substantially oblong, plate-shaped member extending along the plane XZ, and an opening911is provided at the center of the substrate910. Although not shown, the substrate910is fixed to a structural body in the main body. When the substrate910is viewed in the direction Y, the pair of long edges of the substrate910extend along the direction X, and the pair of short edges of the substrate910extend along the direction Z. The opening911serves as a clearance of the cylindrical section62when the projection lens60is mounted. To this end, the opening911has a cross section in the plane XZ larger than that of the cylindrical section62. The moving section920is a substantially oblong, plate-shaped member extending along the plane XZ, and an opening921is provided at the center of the moving section920. When the moving section920is viewed in the direction Y, the pair of long edges of the moving section920extend along the direction X, and the pair of short edges of the moving section920extend along the direction Z. The opening921serves as a clearance of the cylindrical section62when the projection lens60is mounted. To this end, the opening921has a cross section in the plane XZ larger than that of the cylindrical section62. The substrate910and the moving section920are linked to each other via a pair of first movement mechanisms950. In detail, the pair of long edges of each of the substrate910and the moving section920are provided with the first movement mechanisms950. The first movement mechanisms950each includes a slider951and an attachment plate953. The slider951and the attachment plate953are each a substantially oblong member elongated in the direction X. The slider951includes a fixed section951aand a movable section951b. 
The movable section 951b is in contact with a rail that is not shown but is part of the fixed section 951a and is supported by the fixed section 951a. The movable section 951b is therefore movable in the direction X relative to the fixed section 951a while being guided by the rail of the fixed section 951a. Out of the components that form the first movement mechanism 950, the attachment plate 953 and the fixed section 951a of the slider 951 are fixed to the substrate 910 with screws. Since the substrate 910 is fixed to a structural body in the main body, the position of the substrate 910 relative to the liquid crystal panels 40R, 40G, and 40B, which are also fixed to the main body, is thus fixed. In contrast, the movable section 951b, out of the components that form the first movement mechanism 950, is fixed to the moving section 920 with screws. The substrate 910 and the moving section 920 are therefore movable relative to each other in the direction X via the first movement mechanism 950. A first driver 960 is fixed to the substrate 910 in a position above the left short edge of the substrate 910 when viewed from the negative side of the direction Y. The first driver 960 includes an electric motor 961, which is capable of reverse rotation, a drive shaft that is not shown but is part of the electric motor 961, a plurality of gears 962, and a projection 963. The rotation of the electric motor 961 is transmitted to the projection 963 from the drive shaft via the plurality of gears 962 and other components. The projection 963 is movable relative to the first driver 960 in the direction X based on the driving force produced by the electric motor 961. The projection 963 protrudes toward the positive side of the direction Y. Therefore, the portion of the substrate 910 to which the first driver 960 is fixed is provided with a slit 913, into which the projection 963 of the first driver 960 is inserted.
The slit913is elongated in the direction X and allows the projection963of the first driver960to remain inserted but does not restrict movement of the projection963in the direction X. Although not shown, the front end of the projection963is fixed to a corresponding position of the moving section920. The driving force produced by the first driver960therefore allows the moving section920to move in the direction X relative to the substrate910. The first driver960is electrically connected to a controller that is not shown but is part of the main body. The moving section920and the lens holder70are linked to each other via a pair of second movement mechanisms970. In detail, the pair of short edges of each of the moving section920and the lens holder70are provided with the second movement mechanisms970. The second movement mechanisms970each includes a slider971and an attachment plate973. The slider971and the attachment plate973are each a substantially oblong member elongated in the direction Z. The slider971includes a fixed section971aand a movable section971b. The movable section971bis in contact with a rail that is not shown but is part of the fixed section971aand is supported by the fixed section971a. The movable section971bis therefore movable in the direction Z relative to the fixed section971awith the movable section971bguided by the rail of the fixed section971a. Out of the components that form the second movement mechanism970, the attachment plate973and the fixed section971aof the slider971are fixed to the moving section920with screws. In contrast, the movable section971b, out of the components that form the second movement mechanism970, is fixed to the lens holder70with screws. The moving section920and the lens holder70are therefore movable relative to each other in the direction Z via the second movement mechanism970. 
A second driver980is fixed to the moving section920substantially at the center of the left short edge of the moving section920in the direction Z when viewed from the negative side of the direction Y. The second driver980includes an electric motor981, which is capable of reverse rotation, a drive shaft that is not shown but is part of the electric motor981, a plurality of gears982, and a projection983. The rotation of the electric motor981is transmitted to the projection983from the drive shaft via the plurality of gears982and other components. The projection983is movable relative to the second driver980in the direction Z based on the driving force produced by the electric motor981. The projection983protrudes toward the positive side of the direction Y. Although not shown, the front end of the projection983is fixed to a corresponding position of the lens holder70. The driving force produced by the second driver980therefore allows the lens holder70to move in the direction Z relative to the moving section920. The second driver980is electrically connected to the controller, which is not shown but is part of the main body. The thus configured lens shift mechanism90can move the projection lens60held by the lens holder70in the directions perpendicular to the direction Y relative to the liquid crystal panels40R,40G, and40B. Although not shown, an electric signal that operates the lens shift mechanism90is inputted to the controller from an operation panel provided on the main body or an information instrument, such as a personal computer. The lens shift mechanism90is then operated via the controller. In the present embodiment, the configuration in which the lens holder70is relatively moved in the directions perpendicular to the direction Y has been presented as an example of the lens shift mechanism90, but not necessarily. The lens shift mechanism90may instead be configured to relatively move the lens holder70only in one direction perpendicular to the direction Y. 
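The two-stage motion described above — the first driver 960 moving the moving section 920 in the direction X, and the second driver 980 moving the lens holder 70 in the direction Z — can be sketched as a minimal shift controller. This is an illustrative sketch only; the class name, travel limits, and units are assumptions, not values taken from the embodiment.

```python
# Hypothetical sketch of the two-axis lens shift: one axis per driver,
# with each target position clamped to the finite travel of the sliders.

class LensShiftController:
    def __init__(self, x_range_mm=(-5.0, 5.0), z_range_mm=(-8.0, 8.0)):
        self.x_range = x_range_mm  # assumed travel of the first movement mechanism (X)
        self.z_range = z_range_mm  # assumed travel of the second movement mechanism (Z)
        self.x = 0.0  # current shift of the moving section in the direction X
        self.z = 0.0  # current shift of the lens holder in the direction Z

    @staticmethod
    def _clamp(value, lo, hi):
        return max(lo, min(hi, value))

    def shift(self, dx_mm, dz_mm):
        """Apply a relative shift command, clamped to the travel limits."""
        self.x = self._clamp(self.x + dx_mm, *self.x_range)
        self.z = self._clamp(self.z + dz_mm, *self.z_range)
        return self.x, self.z

controller = LensShiftController()
print(controller.shift(2.0, -3.0))    # within the travel limits
print(controller.shift(10.0, -10.0))  # clamped to the travel limits
```

Clamping models the finite travel of the sliders 951 and 971; an actual implementation would convert these target positions into commands for the electric motors 961 and 981.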
Further, in the lens shift mechanism90, the first movement mechanism950and the second movement mechanism970are electrically driven, but not necessarily. The first movement mechanism950and the second movement mechanism970may instead be manually driven. As described above, the projector as the projection-type display apparatus according to the present embodiment can provide the following effects in addition to the effects in the first embodiment. The projector allows what is called lens shifting for adjusting the position of the projection lens60or the projection lens80relative to the liquid crystal panels40R,40G, and40B. The lens shifting described above allows adjustment of the position where the projector projects an image or the like and prevents the projection range from having a trapezoidal shape with no change in the position of the projector. Contents derived from the embodiments are described below. A projection-type display apparatus includes a main body and a projection lens. The main body includes a light source, a light modulator that modulates light outputted from the light source, and a lens holder to and from which the projection lens is attachable and detachable. The projection lens projects the light modulated by the light modulator. The lens holder includes a first lens holding mechanism that holds the projection lens and a second lens holding mechanism different from the first lens holding mechanism. According to the configuration described above, even when the weight of the projection lens increases, the lens holder can handle the increased load. In detail, the lens holder includes the first lens holding mechanism and the second lens holding mechanism as two different holding mechanisms. Therefore, even when a large-weight projection lens is mounted on the projection-type display apparatus, the load caused by the projection lens is distributed to the first lens holding mechanism and the second lens holding mechanism. 
That is, a projection-type display apparatus including a holding mechanism that reduces the load on the lens holder can be provided. In the projection-type display apparatus described above, it is preferable that the second lens holding mechanism holds the projection lens independently of the first lens holding mechanism. According to the configuration described above, in detachment of the projection lens from the projection-type display apparatus, the projection-type display apparatus prevents the projection lens from falling off the projection-type display apparatus. In detail, the projection lens is held by the lens holder formed of the two independent holding mechanisms. Therefore, even when one of the first lens holding mechanism and the second lens holding mechanism releases the projection lens, the other keeps holding the projection lens. The projection-type display apparatus therefore prevents the projection lens from falling off the projection-type display apparatus due to wrong operation or carelessness in the detachment operation. In the projection-type display apparatus described above, it is preferable that the second lens holding mechanism includes a holding section capable of sandwiching the projection lens, a support section that supports the holding section, and a switching section that moves the support section to switch a state in which the holding section sandwiches the projection lens and a state in which the projection lens sandwiched by the holding section is released from one to the other. According to the configuration described above, in the second lens holding mechanism, operating the switching section causes the holding section to sandwich the projection lens, whereby the projection lens is held by the lens holder. 
For example, a bayonet-based lens holding mechanism is required to cause the projection lens itself to pivot, whereas the second lens holding mechanism can hold the projection lens with no pivotal movement of the projection lens. Similarly, the projection lens can be detached from the lens holder by operating the switching section to release the projection lens sandwiched by the holding section. A large-weight projection lens can therefore be readily attached to and detached from the projection-type display apparatus. Further, the attachment and detachment of the projection lens can be readily performed as compared with a configuration in which the projection lens is fixed to the lens holder, for example, with screws. In the projection-type display apparatus described above, it is preferable that the switching section is a dial, and that the dial is caused to pivot to move the support section so that the state in which the holding section sandwiches the projection lens and the state in which the sandwiched projection lens is released are switched from one to the other. According to the configuration described above, the pivotal movement of the dial readily allows the state in which the projection lens is sandwiched and the state in which the sandwiched projection lens is released to be switched from one to the other. The attachment and detachment of the projection lens can therefore be further readily performed. In the projection-type display apparatus described above, it is preferable that the holding section has a plurality of protrusions, and that the projection lens has a plurality of recesses which are located in correspondence with the plurality of protrusions and into which the plurality of protrusions fit. According to the configuration described above, the plurality of protrusions of the holding section fit into the plurality of recesses of the projection lens, whereby the projection lens is sandwiched by the holding section. 
The second lens holding mechanism can therefore hold the projection lens more reliably than in a case where the holding section has one protrusion and the projection lens has one recess. In the projection-type display apparatus described above, it is preferable that the first lens holding mechanism includes a pivotal section capable of pivoting along the outer circumference of the projection lens and a lever section that causes the pivotal section to pivot, and that the projection lens includes an engagement counterpart that engages with the pivotal section when the pivotal section is caused to pivot. According to the configuration described above, in the first lens holding mechanism, the pivotal section pivots and engages with the engagement counterpart of the projection lens. For example, a bayonet-based lens holding mechanism is required to cause the projection lens itself to pivot, whereas the first lens holding mechanism can hold the projection lens with no pivotal movement of the projection lens. A large-weight projection lens can therefore be readily mounted on the projection-type display apparatus. Further, the attachment and detachment of the projection lens can be readily performed as compared with a configuration in which the projection lens is fixed to the lens holder, for example, with screws. It is preferable that the projection-type display apparatus described above includes a position adjustment mechanism that moves the position of the lens holder relative to the light modulator. According to the configuration described above, the projection-type display apparatus allows what is called lens shifting for adjusting the position of the projection lens relative to the light modulator. 
The lens shifting described above allows adjustment of the position where the projection-type display apparatus projects an image or the like and prevents the projection range from having a trapezoidal shape with no change in the position of the projection-type display apparatus. A lens holding mechanism that holds a projection lens includes a first lens holding mechanism and a second lens holding mechanism different from the first lens holding mechanism. According to the configuration described above, even when the weight of the projection lens increases, the lens holding mechanism can handle the increased load. In detail, the lens holding mechanism includes a first lens holding mechanism and a second lens holding mechanism as two different holding mechanisms. Therefore, even when a large-weight projection lens is mounted, a small load acts on each of the first lens holding mechanism and the second lens holding mechanism. That is, a lens holding mechanism on which a reduced load acts can be provided.
11860521

DETAILED DESCRIPTION

FIG. 1 schematically shows the spectrum of diffuse reflection of an Eu3+-doped ceramic converter that is excited by a blue light-emitting diode. The illustrated part of the spectrum is in the blue spectral range and essentially includes the blue excitation light of the light-emitting diode, which is scattered back at the converter. The spectrum of the light-emitting diode is represented as a broad peak in the spectrum. The narrow dip at about 465 nm, marked "A", which can be seen in the spectrum of FIG. 1 and which almost reaches the ground level, reveals that, for the absorption line at 465 nm, absorption is 90% or more of the irradiated power. Thus, in principle, absorption of blue light is not poor in an Eu3+-doped material. Rather, the graph shows that the narrow absorption band of the Eu3+ ions, when compared to the rather broad spectrum of the LED, implies only low total absorption of the LED light, hence poorly effective photoluminescence excitation, and thus low optical efficiency. The invention avoids this drawback by matching the intrinsically narrow-band laser with the narrow absorption spectrum of the Eu3+ ions. The surrounding medium has hardly any impact on the spectral position of absorption of the Eu3+ ions. Typically, absorption occurs at approximately 465 nm, as can also be seen from FIG. 1. Therefore, according to a further embodiment of the invention, a laser is used which emits laser light with a wavelength in the wavelength range between 460 nm and 470 nm. Preferably, the laser wavelength is 465 nm, or 465 nm ± 2 nm. Another drawback of a blue LED as an excitation light source is its radiance or radiant flux, which is orders of magnitude lower than that of a laser. Accordingly, a drawback of LED-based phosphor-converting light sources is their low luminance, which is lower than the luminance that can be achieved with laser-excited ceramic converters by approximately a factor of 10.
Accordingly, a phosphor converter light source with high luminance is provided. In this way, high luminous flux can be achieved with the same emitter area as with a phosphor converter LED. For the highest and most consistent possible luminous efficiency, luminance, and luminous flux, it is advantageous to stabilize the wavelength of the laser and to adjust it to the absorption frequency of the Eu3+ ions. In fact, it is not at all trivial from a technical standpoint to tune the blue laser as precisely as possible to the absorption wavelength of 465 nm of the Eu3+ transition 7F0 → 5D2. The emission wavelength of diode lasers which are typically used in applications varies due to the manufacturing process and may also vary as a function of electrical current density and temperature. Nevertheless, diode lasers with an emission wavelength of 465 nm are commercially available and may be used for the excitation of Eu3+-doped red emitting photoluminescence converters without further wavelength stabilization. This is in particular true since, in practice, the diodes are often operated with previously known currents and within a limited temperature range. Optionally, diode lasers with the desired laser wavelength are selected from a production batch in order to meet the absorption wavelength of the Eu3+ transition 7F0 → 5D2. Alternatively, according to a further embodiment of the invention, means for stabilizing the laser wavelength are provided. To prevent the laser from shifting or changing its lasing wavelength during operation, it is possible according to one embodiment to use a laser that is stabilized by an external grating. FIG. 2 schematically shows an embodiment of the invention comprising a stabilizer 9, with a diffraction grating 91 being used as the stabilizer 9 or as a component thereof. The laser beam 30 emitted by the laser 3 is directed onto the luminescent inorganic converter element 7 of converter assembly 5.
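For such an external-grating stabilizer, the wavelength that is fed back into the cavity is set by the grating orientation; in the common Littrow arrangement the retro-reflected wavelength obeys m·λ = 2d·sin θ. The short check below computes the grating angles corresponding to 465 nm ± 2 nm; the groove density of 1800 lines/mm is an assumed example value, not taken from the embodiment.

```python
import math

def littrow_angle_deg(wavelength_nm, grooves_per_mm, order=1):
    """Grating angle at which the given wavelength is retro-reflected (Littrow)."""
    d_nm = 1e6 / grooves_per_mm          # groove spacing in nm
    s = order * wavelength_nm / (2.0 * d_nm)
    if not -1.0 <= s <= 1.0:
        raise ValueError("no Littrow solution for these parameters")
    return math.degrees(math.asin(s))

# Angles spanning the preferred 465 nm +/- 2 nm range for an assumed 1800 lines/mm grating:
for wl in (463.0, 465.0, 467.0):
    print(f"{wl} nm -> {littrow_angle_deg(wl, 1800):.2f} deg")
```

The small angular change per nanometre illustrates why a fixed, precisely oriented grating suffices to pin the feedback wavelength onto the Eu3+ absorption line.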
Converter assembly 5 emits light 50 which, depending on the configuration, includes components of the primary light, i.e., the blue laser light, in addition to red photoluminescent light. The admixed laser light is produced in particular through scattering of the light of the primary laser beam 30. A diffraction grating 91 is arranged in the beam path of the laser beam 30 such that laser light of a higher diffraction order is fed back into the laser cavity. For this purpose, the grating 91 is arranged obliquely relative to the laser beam 30 in the example shown. By orienting the grating, i.e., by adjusting its angle relative to the laser beam 30, the feedback wavelength can be selected and the laser emission can be stabilized to this wavelength. According to one embodiment, the grating 91 is positioned such that light with a wavelength in the range of 465 nm ± 2 nm is fed back. According to a further embodiment, active loop control of the laser wavelength is suggested, which stabilizes the laser to the absorption wavelength of the Eu3+ ions using a loop control circuit of the light source. Loop control may be accomplished by adjusting a parameter which has an impact on the wavelength of the light. One such parameter is the temperature of the laser. By adjusting the laser's temperature, for example, optimization for maximum absorption is achieved. FIG. 3 schematically shows an exemplary configuration for this purpose. The means 9 for stabilizing the wavelength comprise a loop control circuit 11 and, connected thereto, a heating element 13 and at least one light detector 15. Light detector 15 may comprise a photodiode or a phototransistor, for example, as illustrated. A simple way of controlling the loop control circuit is to measure the red light emitted by the converter 5 or the blue light scattered back by the converter 5. The loop control circuit then drives the laser 3 such that the emission of red light becomes maximal or the blue backscattering becomes minimal.
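The loop control strategy described above — drive the laser so that the blue backscattering becomes minimal — can be sketched as a simple hill-climbing loop acting on the laser temperature. The detector model, the optimal temperature of 35 °C, and the tuning coefficient of roughly 0.05 nm/K are all illustrative assumptions.

```python
# Minimal sketch: step the laser temperature and keep a step only if the
# blue backscatter measured by the photodetector decreases, i.e., the
# Eu3+ absorption of the blue light increases.

def backscatter(temp_c, t_opt_c=35.0):
    """Simulated photodiode signal: minimal when the laser hits the Eu3+ line."""
    detuning_nm = 0.052 * (temp_c - t_opt_c)  # assumed diode tuning rate
    return 0.1 + detuning_nm ** 2             # narrow absorption dip around t_opt_c

def stabilize(temp_c, step_c=0.5, iterations=200):
    """Hill-climb on temperature until the backscatter signal stops improving."""
    for _ in range(iterations):
        best = temp_c
        for cand in (temp_c - step_c, temp_c + step_c):
            if backscatter(cand) < backscatter(best):
                best = cand
        temp_c = best
    return temp_c

print(round(stabilize(25.0), 1))  # -> 35.0, the simulated optimum
```

Maximizing the red emission instead of minimizing the blue backscatter uses the same loop with the comparison reversed.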
For this purpose, the temperature of the laser3is adjusted using heating element13in the illustrated embodiment, in order to modify the wavelength to the value of maximum absorption, typically 465 nm±2 nm. More generally, without being limited to the specific illustrated example, it is contemplated according to one embodiment of the invention that the means9comprise a heating element13for regulating the temperature of the laser3. The Eu3+-doped red emitting converter element7which is typically excited at 465 nm can advantageously be used for digital projection, since the color coordinates of its emission are very well suited to define the red vertex of a color space for projection. This is exemplified inFIG.4for a fictitious projector, whose RGB color channels are provided by the Eu3+-doped phosphor, a LuAG phosphor, and a laser at 465 nm (FIG.4, color range “B”). In particular the red and blue vertices nearly ideally include the widely used Rec 709 standard color space of digital projection (FIG.4, color range “A”). The green vertex may be adjusted to the standard's color space specifications using suitable color filters. The excitation at 465 nm is also advantageous in this respect, since it allows to encompass the standard color space with a comparatively high lumen equivalent of the blue light of 50 lm/W. In comparison, a 450 nm laser only has a lumen equivalent of blue light of 25 lm/W. In the arrangements illustrated by examples ofFIGS.2and3, the converted light is emitted from the same surface of the converter element7which is irradiated by the laser light. Thus, these are remission light sources. Generally, an embodiment of the light source is preferred which comprises a luminescent inorganic converter element that is operated in remission. Accordingly, in a further embodiment of the invention, the light source is configured to emit light that is emitted by the luminescent inorganic converter element in remission. 
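The lumen equivalents quoted above (about 50 lm/W at 465 nm versus 25 lm/W at 450 nm) follow from the CIE 1931 photopic luminosity function V(λ): for quasi-monochromatic light, the luminous efficacy is 683 lm/W · V(λ). A quick check using standard tabulated V(λ) values — note the 450 nm result rounds to 26 lm/W, close to the 25 lm/W stated in the text, the small difference depending on the exact V(λ) table used:

```python
# Standard CIE 1931 photopic luminosity function values at the two wavelengths.
V = {450: 0.038, 465: 0.0739}

def lumen_equivalent(wavelength_nm):
    """Luminous efficacy of quasi-monochromatic light, in lm/W."""
    return 683.0 * V[wavelength_nm]

print(f"465 nm: {lumen_equivalent(465):.0f} lm/W")  # ~50 lm/W, as stated above
print(f"450 nm: {lumen_equivalent(450):.0f} lm/W")  # ~26 lm/W
```

This factor-of-two difference is why exciting at 465 nm rather than 450 nm yields a brighter blue channel for the same radiant power.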
However, a transmission configuration is also possible. Various red emitting ceramic converter materials have been experimentally investigated. The following table provides an overview of the composition of the examined materials.

Sample #   Material (main phase)   Detected secondary phase   Components prior to sintering   Sinter density ca.
202003     (Y0.5Eu0.5)2Mo3O12      (Y0.5Eu0.5)2Mo4O15         Y2O3, Eu2O3, MoO3               93%
202007     (Y0.5Eu0.5)2Mo3O12      (Y0.5Eu0.5)2Mo4O15         Y2O3, Eu2O3, MoO3               95%
202009     (Y0.5Eu0.5)2Mo3O12      (Y0.5Eu0.5)2Mo4O15         Y2O3, Eu2O3, MoO3               94%
202201     (Y0.5Eu0.5)2Mo4O15      MoO3                       Y2O3, Eu2O3, MoO3               95%

In particular, the ceramic materials preferably used for the converter contain Eu3+ as an active element, and the host lattice of the Eu3+ is of the scheelite type. Without being limited to the examples in the table, the Eu-containing compounds which are suitable for the converter may more generally comprise molybdates, vanadates, tungstates, or tantalates, preferably in ceramic form, and preferably these compounds additionally contain at least one of the following elements: lanthanum, terbium, gadolinium, yttrium, and lutetium. Here, according to one refinement of this embodiment, Eu replaces at least one of the mentioned elements in a percentage between 10% and 100%, preferably between 50% and 100%, more preferably between 70% and 95%. According to yet another embodiment, the luminescent Eu3+-containing material may be a molybdate, vanadate, tungstate, or tantalate, while this compound additionally contains at least one of the following elements: lithium, sodium, potassium, magnesium, calcium, or strontium. The converter may be made of a single-phase ceramic. If the converter includes more than one phase, these phases may comprise the compounds mentioned above. The converter in particular does not contain any residues of MoO3, VO3, WO3, or TaO3. From among the above-mentioned samples, sample #202009 listed in the table above was used to experimentally verify the approach of the invention.
The measurements were performed on a converter of 200 μm thickness made from the sample. The converter was placed on a highly reflective mirror plate. The blue light of a 465 nm laser was irradiated onto the converter at an angle of 30°. The emitted light intensity was resolved spectrally. The spectrometer used was a CAS 140+ with a measuring head coupled via a glass fiber bundle. The color coordinates shown in the table below were determined from the measured spectrum for the specified spectral ranges. These measurement results confirm that a material with purple color coordinates can be achieved.

                   cx      cy
Blue               0.135   0.041
Blue and red       0.287   0.123
Red (λ > 600 nm)   0.684   0.316

A spectrum emitted by such a sample is shown inFIG.5. This spectrum also includes the scattered primary light with a wavelength of approximately 465 nm. The lumen equivalent of the red spectrum is 292 lm/W. This means that a radiant flux of 1 W of the red emission corresponds to a luminous flux of 292 lm. The conversion efficiency, i.e. the ratio of emitted red light to the input power, was estimated to be 33 lm/W. FIG.6shows a power spectral density versus wavelength graph for sample OC-202007 listed in the table above. In this case, the sample was excited by a blue LED. As in the example shown inFIG.1, the narrow-band absorption of the Eu3+can be seen as a sharp local minimum in the blue spectral range. This measurement confirmed the negligible temperature dependence of the absorption wavelength. For this purpose, the converter was placed on a hot plate and irradiated with the light from the blue LED. When heated to 170° C., the absorption wavelength of the converter did not change measurably. Thus, if the wavelength of the employed laser diodes is not stabilized, the wavelength can be adjusted solely through the temperature of the laser. The temperature of the converter need not be taken into account. For a commercially available 465 nm laser diode, a shift of 0.052 nm/K was measured.
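The measured tuning rate of 0.052 nm/K directly gives the temperature change needed to pull a diode onto the absorption line. A minimal sketch (the starting wavelength of 463 nm is a hypothetical value for illustration):

```python
# Temperature adjustment needed to shift a laser diode's emission onto the
# Eu3+ absorption maximum at 465 nm, using the measured rate of 0.052 nm/K.
TUNING_RATE_NM_PER_K = 0.052
TARGET_NM = 465.0

def delta_temperature(current_nm: float) -> float:
    """Temperature change in K required to reach the 465 nm target."""
    return (TARGET_NM - current_nm) / TUNING_RATE_NM_PER_K

# A diode emitting at 463 nm (hypothetical) must be heated by roughly 38 K;
# a diode already at 465 nm needs no adjustment.
print(round(delta_temperature(463.0), 1))  # ~38.5
```

This is consistent with the role of the heating element13: a shift of the order of a nanometer corresponds to a temperature change of roughly 20 K.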
Generally, therefore, a digital projector is provided according to one embodiment of the invention, which comprises a light source1according to the invention. According to another embodiment, as in the example explained above, the converter5of light source1may advantageously furthermore comprise an element which emits green light through photoluminescence when irradiated by the laser3, in addition to the luminescent inorganic converter element7comprising the Eu3+-doped ceramic. As mentioned before, LuAG, i.e. a lutetium aluminum garnet, is particularly suitable as the luminescent material of such an element. The green emitting element may in particular also be provided in the form of a ceramic material. However, green light, in particular for the projector mentioned, may also be generated in other ways than by photoluminescence. More generally, it is therefore contemplated according to one embodiment of the invention that the light source1comprises a green light emitter, preferably in the form of the element as mentioned which emits green light through photoluminescence when irradiated by the laser3. In principle, a converter can be operated in transmission or in reflection. In a reflection configuration, it may generally be advantageous to design the phosphor ceramic so as to be highly scattering in order to minimize a lateral enlargement of the emission spot for a given blue excitation spot. However, particularly in the case of materials with limited absorption of the blue light, this is typically also associated with strong backscattering of the blue light and thus with strong diffuse blue reflection. If the Eu3+-doped phosphor ceramic is designed to be highly scattering, e.g. by introducing pores or other scattering centers, the combination of the red emission color coordinates with the blue remission can give purple emission color coordinates for the overall assembly.
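An additive mixture of two color stimuli has chromaticity coordinates on the straight line between the two component points, which is why blue remission plus red emission yields purple coordinates. A minimal sketch using the blue and red coordinates measured above, with the interpolation weight chosen to reproduce the reported combined cx of 0.287:

```python
# Additive chromaticity mixing: the mixed point lies on the line between the
# component points. Blue and red coordinates are the measured values from the
# table above; the weight w is chosen to match the reported combined cx.

def mix(c1, c2, w):
    """Linear interpolation between two chromaticity points (0 <= w <= 1)."""
    return (w * c1[0] + (1 - w) * c2[0], w * c1[1] + (1 - w) * c2[1])

blue = (0.135, 0.041)   # scattered 465 nm laser light
red = (0.684, 0.316)    # Eu3+ red emission (lambda > 600 nm)

w = (red[0] - 0.287) / (red[0] - blue[0])  # blue weight giving cx = 0.287
cx, cy = mix(blue, red, w)
# cx is 0.287 by construction; cy comes out near 0.117, close to the
# measured 0.123 (the real spectrum is not an ideal two-point mixture).
```

The small cy discrepancy simply reflects that the measured combined spectrum is not a perfect superposition of the two tabulated band averages.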
More generally, according to one embodiment of the invention, a converter assembly5is accordingly provided which comprises a luminescent inorganic converter element7comprising ceramic that is Eu3+-doped such that the converter5emits photoluminescent light in the red spectral range when exposed to the laser light, and wherein the luminescent inorganic converter element7is designed to be light-scattering so that scattered blue laser light combines with the red photoluminescent light to give emitted purple-colored light. FIG.7shows spectra of power spectral density (PSD) as a function of wavelength. The spectrum16of a purple-colored Eu3+photoluminescence converter is shown in comparison with the white-yellow emission spectrum17of a cerium-doped YAG phosphor inFIG.7. In many projectors, the red fraction of this phosphor is used for the red channel of a projector. The part of the spectrum used for this purpose is illustrated by an idealized edge filter18with a cut-off wavelength of 600 nm. Both spectra were captured at the same excitation light power. By integrating the spectrum weighted with the eye sensitivity curve, a parameter which is proportional to the photometric luminous flux, measured in lumens (lm), is calculated for both spectra. The proportionality constant is determined from the known efficacy of the yellow reference sample of 317 lm/W. This makes it possible to calculate the efficacy for the red emitting sample as well. The efficacy of a converter is the photometric luminous flux emitted by the sample, normalized to the incident light power and measured in lumen per watt (lm/W). The efficacy can be calculated either for the red (600 nm-780 nm), the green (475 nm-600 nm), or the entire “yellow” (475 nm-780 nm) spectral range, by adjusting the integration limits.
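The efficacy bookkeeping described above can be sketched as follows: integrate the spectrum with the eye sensitivity curve V(λ) as a weight, calibrate the proportionality constant against the reference sample's known "yellow" efficacy of 317 lm/W, and obtain individual bands by changing the integration limits. The coarse V(λ) stub and the toy line spectra below are illustrative assumptions only:

```python
def v(lam):
    """Very coarse photopic sensitivity stub (illustrative values only)."""
    table = {550: 0.99, 620: 0.38, 700: 0.004}
    return table.get(lam, 0.0)

def band_integral(spectrum, lo_nm, hi_nm):
    """Sum PSD * V(lambda) over the band [lo_nm, hi_nm)."""
    return sum(psd * v(lam) for lam, psd in spectrum.items() if lo_nm <= lam < hi_nm)

# Toy line spectra: wavelength (nm) -> power spectral density (arb. units).
reference = {550: 1.0, 620: 0.3}   # yellow-emitting Ce:YAG-like reference
red_sample = {620: 0.8, 700: 0.2}  # red emitter

# Calibrate so the reference's full 475-780 nm band reads 317 lm/W.
calib = 317.0 / band_integral(reference, 475, 780)

def efficacy(spectrum, lo_nm, hi_nm):
    """Band efficacy in lm/W under the calibration above."""
    return calib * band_integral(spectrum, lo_nm, hi_nm)

yellow_ref = efficacy(reference, 475, 780)   # 317.0 lm/W by construction
red_smp = efficacy(red_sample, 600, 780)     # red-band efficacy of the sample
```

With real measured spectra in place of the toy data, the same three calls with limits 475-780, 475-600, and 600-780 nm reproduce the yellow, green, and red efficacy columns discussed below.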
Since the efficiency of converter materials typically shows a dependence on sample temperature and power or spot size of the excitation light, the red efficacy in the context of the present document is defined for a measurement with a light spot size of approximately 1 mm and a power of 1-10 mW at room temperature. For the example shown inFIG.7, the following efficacy values were determined at an excitation power of 3.5 mW of the 465 nm laser:

                          Yellow efficacy   Green efficacy   Red efficacy
                          [lm/W]            [lm/W]           [lm/W]
Wavelength range          475 nm-780 nm     475 nm-600 nm    600 nm-780 nm
Red emitter OC-202009c    57.2              8.5              48.7
Reference sample Ce:YAG   317.0             282.9            34.1

Thus, with 48.7 lm/W the red sample exhibits a red efficacy that is significantly better than that of a typical cerium-doped YAG converter which has a red efficacy of 34.1 lm/W. More generally, without being limited to particular exemplary embodiments described herein, it is therefore contemplated according to a further embodiment of the invention that the red efficacy of the converter material is greater than 35 lm/W. Emission of purple light may also arise if the phosphor is excited over an excessively broad band or in a manner not matched in terms of the spectrum. The reason for this may be the use of a large number of blue excitation lasers in order to achieve the laser output power required in high-performance projectors. These lasers usually do not emit exactly identically in terms of their spectrum and, overall, define a rather broadband excitation light source. However, this may even be advantageous, since it makes it possible to dispense with the laser wavelength stabilization described above, although it then causes an admixture of blue to the useful light and thus results in a purple light source. Irrespective of whether the lasers are all precisely matched to the Eu3+absorption, a light source1may be provided comprising a plurality of lasers, in particular such that these lasers simultaneously irradiate the same spot of the converter.
FIG.8shows emission spectra19,20of converters that include different host materials. The power spectral densities illustrated inFIG.8show that the Eu3+emission may vary depending on the host material. In order to be able to compare the spectra, the power spectral densities were normalized to the maxima of the spectra. For the two ceramics, the following color coordinates, lying within the red color range, result:

           cx       cy
Curve 19   0.6823   0.3176
Curve 20   0.6843   0.3156

In particular the weighting of the spectral components may vary, which may have an impact on the color coordinates of the emission, without however restricting the suitability of the material for use in projection. Materials are preferred in which the emission at 700 nm is not very pronounced, since the eye's sensitivity for light of this wavelength is only very low and thus it hardly contributes to the perceived brightness of the light. FIG.9shows three different configurations (a), (b), and (c) of static converter assemblies each comprising a heat spreader21, an anti-reflective coating23, and a reflector22for increasing the output of useful light. In the embodiment according to panel (a), the luminescent inorganic converter element7is applied to a heat spreader21. A reflector22may be applied to the heat spreader below the converter element7, for example in the form of a dielectric or metallic reflection layer. The outwardly facing surface of the luminescent converter element7may be provided with an anti-reflective layer23in order to improve the emission of the light. The embodiments according to panels (b) and (c) also comprise a reflector22arranged between the heat spreader and the luminescent converter element7. Here, the luminescent converter element7is arranged so as to be integrated in the heat spreader21. For example, the heat spreader21may have an appropriate recess for this purpose.
In the embodiment according to panel (c), the luminescent converter element7is integrated in a through-opening of the heat spreader21, so that photoluminescent light can be emitted to both sides of the heat spreader and from both mouths of the through opening. In this embodiment, the inner surface of the through-opening may be provided with a reflector22. In the case of transmissive operation, a dichroic reflector may be applied on the side of the excitation light, which transmits the blue excitation light and reflects the red emitted light. The purple emission as suggested according to the invention can be used for the projection. The red and blue color channels are generated from the purple phosphor by color wheel filtering. If emitted light that includes blue and red components, that is to say purple light, is used to produce different colors, in particular for a projector, it is furthermore generally favorable if the purple phosphor or the purple emission is designed such that the color coordinates on the purple line between the blue and red color coordinates are such that a connecting line to the green vertex of the color space passes through the white point. This embodiment is illustrated byFIG.10. Similarly toFIG.4, this graph shows the Rec 709 color space which is delimited by curve “A”. Curve “B” again delimits the color range that can be spanned by a Eu3+-doped phosphor, a LuAG phosphor, and a laser at 465 nm. The white point25with color coordinates cx=cy=0.33 is also shown. In a converter assembly5such as a color wheel for a 1-chip projector, the blue channel then does not have to be defined by an opening in the phosphor wheel but is generated from the purple phosphor by color wheel filtering. 
For this purpose, the purple phosphor and/or the excitation by the blue laser light are advantageously designed such that a connecting line from the color coordinates on the purple line between the blue and red color coordinates to the green vertex of the color space preferably passes through the white point25. Regardless of the configuration of the converter, that is also regardless of whether the converter comprises a color wheel or not, a light source is provided according to one embodiment of the invention, which comprises, in addition to the luminescent inorganic Eu3+ions containing converter element7, a further photoluminescent emitter for emitting green photoluminescent light, and wherein the laser3and the luminescent inorganic converter element7are matched to one another such that the ceramic element emits purple light including red photoluminescent light and scattered light from the laser3, and wherein a ratio of the intensities of the red photoluminescent light and of the light from the laser3in the emitted light is such that the color coordinates26of the emitted purple light lie on a line29which starting from the color coordinates of the photoluminescent light of the further photoluminescent emitter passes through an area27around the white point at color coordinates cx=cy=0.33, which area27is defined by color coordinate ranges of 0.31≤cx≤0.35 and 0.31≤cy≤0.35. The area27around white point25is shown inFIG.10. Accordingly, to achieve still very good color reproduction, it is not necessary for the line29to pass exactly through the white point25. In the illustrated example, line29runs slightly past white point25, but still passes through area27. The purple light can then be divided into blue and red components, by spectral filtering, so as to then span the entire color space. In projector applications, the blue light is typically directed onto the converter material through a dichroic beam splitter to separate the blue from the yellow light path. 
As a result, the blue light cannot be mixed with the yellow light in such an optical configuration. This is at least the case for light that has the same polarization as the incident laser light. But even when using a polarization-dependent beam splitter, not more than 50% of the incident light can be reused. For this reason, a tilted beam configuration may be employed in white light applications for mixing yellow and blue light, which makes it possible to irradiate the excitation light from the laser laterally and to collect the generated light including its blue component from the vertical direction. A problem with this approach is that the coupling efficiency is limited since the space required for emission of blue light cannot be used to combine the light beams. For example, if a lens33is used to collimate the light, the numerical aperture (NA) of the lens33is limited. An exemplary arrangement for this is shown in panel (a) ofFIG.11. Usually, a lens33with high NA as shown in panel (b) is used for collimation. To solve this problem, the lateral side of the lens33may be designed specifically so as to direct the incident laser beam30through the converging lens33onto the converter5(FIG.11, embodiment (c)). Here, the large differential between the etendue of the laser light30and of the emitted light50is advantageously exploited to efficiently direct the incident laser beam30onto the converter5. For a purple light source, it is crucial to proceed in this way since the material of the converter5can only be used efficiently if the backscattered blue light can also be exploited. In particular if the blue laser beam30is directed onto the converter via one or more optical fibers31, one or more channels or passages32for the fibers31may be provided in the respective collimation lens33in order to bring a fiber31as close as possible to the surface of the converter5.
Panel (d) ofFIG.11shows such a configuration in which an optical fiber31is introduced into a passage32of the lens33for feeding the laser light. The light sources according to panels (c) and (d) represent exemplary embodiments of the invention in which the light source comprises a lens33for collimating the light emitted by the converter5, with an arrangement such that the laser light is incident on the converter after passing through the lens or the laser light is directed through the lens33and onto the converter5. For this purpose, the lens33may have a special shape, as in the example of panel (c). The converter assemblies shown inFIGS.9and11and known as “static” are of particular interest for the photoluminescence converter of the invention, since the decay time of the converter is comparatively long, so that the light-emitting area is not enlarged in statically operated configurations. As described above, the converter can be applied to a heat spreader and may optionally be operated in a pulsed mode in order to generate light—and hence waste heat—only when the red or optionally the blue color channel is required. A light source1according to the invention may also be employed with particular advantage in a 3-chip projector. In principle, it is again possible here for the red and blue channels to be fed from the purple phosphor, i.e. the luminescent inorganic converter element7. However, since the color channels are not superimposed sequentially in time in this case, but spatially, by a dichroic cross prism known as X-cube, the entire luminous flux emitted can be used in the projector, in principle without any filter loss. Such a projector is shown inFIG.12. 
The principle of this projector is based on the fact that the luminescent inorganic converter element7and the laser3are adapted such that the converter element7emits purple light including a blue component from scattered laser radiation and a red component from photoluminescence excited by the laser beam, and the blue and red components are spatially split into a blue and a red light beam and the two light beams are fed to two different chips of the projector to generate colored sub-images. As shown inFIG.12, the purple light50emitted by the luminescent inorganic converter element7can be collimated by a lens33. The light50is directed onto a dichroic beam splitter35which splits the blue and red components into two light beams51,52. In the example shown, the red sub-beam51is transmitted straight through the beam splitter35and onto a first chip38, while the blue sub-beam52is reflected out laterally. Via a mirror34, the blue sub-beam52is directed onto a second chip39. Furthermore, a green light emitter is provided. Again, the further luminescent inorganic converter element8as mentioned above can be used for this purpose, for example in the form of a LuAG ceramic element. The green sub-beam53is directed onto a third chip40. More generally, without being limited to the illustrated example, the chips38,39,40for generating colored sub-images may be in the form of LCD chips. The three light beams transmitted through the chips38,39,40are then combined in a dichroic cross prism43to form an image beam54which carries the image information and can then be projected. The invention may also be used generally for lighting purposes. Uses that are particularly considered include signaling lights, such as airport lighting, maritime signaling lights, and warning lights on wind turbines and radio masts, as well as special lighting, such as stage lighting, effect lighting, and architectural lighting.
In order to be able to produce white light for general lighting purposes, the purple light may be combined with green light. With a suitably adapted light source1, the combination may in particular be made such that color coordinates in the vicinity of the white point are achieved, as can be seen fromFIG.10. Accordingly, for generating white light, the laser and the luminescent inorganic converter element7are preferably adapted so that the color coordinates of the emitted light lie on the line29starting from the color coordinates of the green light emitter. The green emitter may be a green photoluminescent emitter, as in the example described above. In particular, as shown, the color coordinates26are advantageously given by the intersection of line29with the purple line of the color space. Depending on the desired effect, the color coordinates of the light source may as well lie close to the white point25, for example in order to achieve warmer or colder color tones. More generally, it is therefore suggested according to a further embodiment that the light source comprises a green emitter and generates white light by combining the green light with the blue light from the laser and the red photoluminescent light emitted by the luminescent inorganic converter element7. Preferably, again, a ratio of the intensities of the red photoluminescent light and the light from the laser3in the emitted light is such that the color coordinates26of the emitted purple light lie on the line29emanating from the color coordinates of the photoluminescent light of the further photoluminescent emitter and passing through an area27within a range of color coordinates of 0.31≤cx≤0.35 and 0.31≤cy≤0.35. The use of the purple light produced by combining blue remission and red emission as described herein does not imply that the converter excited by laser light of 465 nm can only be used in this way.
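The target purple coordinates26described above can be computed as the intersection of line29(from the green emitter through the white point) with the purple line joining the blue and red vertices. A minimal sketch: the blue and red points are the measured values quoted earlier, while the green vertex is a hypothetical LuAG-like value assumed for illustration.

```python
# Intersection of line 29 (green vertex -> white point) with the purple line
# (blue vertex -> red vertex), giving the target purple color coordinates.

def intersect(p1, p2, p3, p4):
    """Intersection of the infinite lines p1-p2 and p3-p4 (non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / denom,
            (a * (y3 - y4) - (y1 - y2) * b) / denom)

green = (0.17, 0.70)    # hypothetical green vertex (illustrative)
white = (0.33, 0.33)    # white point cx = cy = 0.33
blue = (0.135, 0.041)   # scattered 465 nm laser light (measured above)
red = (0.684, 0.316)    # Eu3+ red emission (measured above)

cx, cy = intersect(green, white, blue, red)
# With these inputs the target purple point comes out near (0.398, 0.173),
# between the blue and red vertices on the purple line.
```

By construction, the line from the green vertex through this purple point passes exactly through the white point, so it also satisfies the area27condition of 0.31≤cx≤0.35 and 0.31≤cy≤0.35.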
The red efficacy of more than 34 lm/W as already proven shows that a light source comprising a converter assembly which includes such a red emitting inorganic converter element is particularly advantageous also when not using the blue excitation light, depending on the application.

LIST OF REFERENCE NUMERALS

1 Light source
2 Projector
3 Laser
5 Converter assembly
7, 8 Converter element
9 Means for stabilizing laser wavelength
11 Loop control circuit
13 Heating element
15 Light detector
16, 17, 19, 20 Emission spectrum
18 Edge filter
21 Heat spreader
22 Reflector
23 Anti-reflective coating
25 White point
26 Color coordinates of purple light
27 Area around 25
29 Line through 27
30 Laser beam
31 Optical fiber
32 Passage through 33
33 Lens
34 Mirror
35 Dichroic beam splitter
38, 39, 40 Chip
43 Dichroic cross prism
50 Light emitted by converter 5
51 Red sub-beam
52 Blue sub-beam
53 Green sub-beam
54 Image beam
91 Diffraction grating
11860522 | DESCRIPTION OF EXEMPLARY EMBODIMENTS First Embodiment A first embodiment of the present disclosure will be described below with reference to the drawings. Schematic Configuration of Projector FIG.1is a diagrammatic view showing a schematic configuration of a projector1according to the present embodiment. The projector1according to the present embodiment is a projection-type display apparatus that modulates light outputted from a light source apparatus4A to form an image according to image information and enlarges and projects the formed image on a projection receiving surface PS, such as a screen. The projector1includes an exterior enclosure2and an image projection apparatus3, as shown inFIG.1. In addition to the components described above, the projector1includes, although not illustrated, a cooler that cools a cooling target in the projector1, a controller that controls the action of the projector1, and a power supply that supplies electronic parts that form the projector1with electric power. Configuration of Exterior Enclosure The exterior enclosure2forms the exterior of the projector1. The exterior enclosure2accommodates the image projection apparatus3, the cooler, the controller, and the power supply. The exterior enclosure2has a front surface21, a rear surface22, a right side surface23, and a left side surface24, further has a top surface and a bottom surface that are not illustrated, and is formed in a substantially box-like shape. Although not illustrated, the front surface21has an opening through which an image projected by a projection optical apparatus36, which will be described later, passes. In the following description, three directions perpendicular to one another are called directions +X, +Y, and +Z. 
The direction +Z is the direction from the rear surface22toward the front surface21, the direction +X is the direction from the right side surface23toward the left side surface24, and the direction +Y is the direction from the bottom surface toward the top surface. Although not illustrated, the direction opposite the direction +X is called a direction −X, the direction opposite the direction +Y is called a direction −Y, and the direction opposite the direction +Z is called a direction −Z. Configuration of Image Projection Apparatus The image projection apparatus3generates an image according to image information and projects the generated image. The image projection apparatus3includes the light source apparatus4A, a homogenizing apparatus31, a color separation apparatus32, parallelizing lenses33, light modulation apparatuses34, a light combining apparatus35, and a projection optical apparatus36. The light source apparatus4A outputs illumination light WL to the homogenizing apparatus31. The configuration of the light source apparatus4A will be described later in detail. The homogenizing apparatus31homogenizes the illumination light WL outputted from the light source apparatus4A. Although not illustrated, the homogenizing apparatus31includes a pair of lens arrays, a polarization converter, and a superimposing lens. The color separation apparatus32separates the illumination light WL incident from the homogenizing apparatus31into blue light LB, green light LG, and red light LR. The color separation apparatus32includes dichroic mirrors321and322, reflection mirrors323,324, and325, relay lenses326and327, and an optical component enclosure328, which accommodates the components described above. The dichroic mirror321transmits the blue light LB contained in the illumination light WL and reflects the green light LG and the red light LR contained therein. 
The blue light LB having passed through the dichroic mirror321is reflected off the reflection mirror323and guided to one of the parallelizing lenses33(33B). Out of the green light LG and the red light LR reflected off the dichroic mirror321, the dichroic mirror322reflects the green light LG to guide the reflected green light LG to one of the parallelizing lenses33(33G), and transmits the red light LR. The red light LR is guided to one of the parallelizing lenses33(33R) via the relay lens326, the reflection mirror324, the relay lens327, and the reflection mirror325. The parallelizing lenses33each parallelize the light incident thereon. The parallelizing lenses33include the parallelizing lens33R for red light, the parallelizing lens33G for green light, and the parallelizing lens33B for blue light. The light modulation apparatuses34modulate the light outputted from the light source apparatus4A in accordance with image information. The light modulation apparatuses34include a red light modulator34R, which modulates red light, a green light modulator34G, which modulates green light, and a blue light modulator34B, which modulates blue light. The light modulators34R,34G, and34B each include, for example, a liquid crystal panel that modulates light incident thereon and a pair of polarizers disposed on the light incident side and the light exiting side of the liquid crystal panel. The light combining apparatus35combines the red light LR, the green light LG, and the blue light LB modulated by the light modulation apparatuses34with one another to form an image based on the image information. In the present embodiment, the light combining apparatus35is formed of a cross dichroic prism and can instead be formed of a plurality of dichroic mirrors. The projection optical apparatus36projects the combined image from the light combining apparatus35onto the projection receiving surface PS and enlarges it to display an enlarged image on the projection receiving surface PS.
The projection optical apparatus36can, for example, be a unit lens formed of a lens barrel and a plurality of lenses disposed in the lens barrel. Configuration of Light Source Apparatus FIG.2is a diagrammatic view showing the configuration of the light source apparatus4A. The light source apparatus4A outputs the illumination light WL to the homogenizing apparatus31along the direction +Z. The light source apparatus4A includes a light source enclosure41, a light source section42, an afocal optical element43, a first phase retarder44, a diffusive transmitter45, a light separator46, a second phase retarder47, a light collector48, a diffuser49, a third phase retarder50, and a wavelength conversion apparatus6, as shown inFIG.2. Configuration of Light Source Enclosure The light source enclosure41is an enclosure into which dust is unlikely to enter and is formed in a substantially box-like shape. The light source enclosure41has a front surface411, a rear surface412, a right side surface413, and a left side surface414. In addition to the above, the light source enclosure41has, although not illustrated, a top surface coupled to the +Y-direction ends of the front surface411, the rear surface412, the right side surface413, and the left side surface414, and a bottom surface that is coupled to the −Y-direction ends of the four surfaces. The front surface411is a surface of the light source enclosure41via which the illumination light WL exits, and the front surface411is disposed on one side of the light source enclosure41, the side facing in the direction +Z. The front surface411has an exit port415, via which the illumination light WL exits. The rear surface412is a surface opposite from the front surface411and is disposed in a position shifted in the direction −Z from the front surface411. A substrate64, which will be described later, of the wavelength conversion apparatus6is thermally coupled to the rear surface412.
In the light source enclosure41, the following axes are set: an illumination optical axis Ax1along the direction +X; and an illumination optical axis Ax2along the direction +Z. That is, the illumination optical axis Ax1and the illumination optical axis Ax2intersect with each other. The optical components of the light source apparatus4A are disposed on the illumination optical axis Ax1or the illumination optical axis Ax2. Specifically, the light source section42, the afocal optical element43, the first phase retarder44, the diffusive transmitter45, the light separator46, the second phase retarder47, the light collector48, and the diffuser49are arranged on the illumination axis Ax1. The wavelength conversion apparatus6, the light separator46, and the third phase retarder50are arranged on the illumination optical axis Ax2. That is, the light separator46is disposed at the intersection of the illumination optical axis Ax1and the illumination optical axis Ax2. Configuration of Light Source Section The light source section42is fixed to the right side surface413and outputs light in the direction +X along the illumination optical axis Ax1. The light source section42includes light sources421, a light source support substrate422, and lenses423. The light sources421each output s-polarized blue light BL1in the direction +X. The light sources421are formed of at least one solid-state light emitter. Specifically, the light sources421are each a semiconductor laser, and the blue light BL1outputted by each of the light sources421is, for example, laser light having a peak wavelength of 440 nm. The light source support substrate422supports the light sources421and is fixed to the right side surface413. The light source support substrate422is made, for example, of metal so that the heat of the light sources421can be readily transmitted to the light source enclosure41. 
The lenses423are provided in accordance with the light sources421, parallelize the blue light BL1incident from each of the light sources421, and cause the parallelized blue light BL1to enter the afocal optical element43. Configuration of Afocal Optical Element The afocal optical element43is disposed in a position shifted in the direction +X from the light source section42and reduces the luminous flux diameter of the blue light BL1incident from the light source section42. The afocal optical element43is formed of a first lens431, which collects a luminous flux incident thereon, and a second lens432, which parallelizes the luminous flux collected by the first lens431. The afocal optical element43may be omitted. Configuration of First Phase Retarder The first phase retarder44converts part of the blue light BL1incident from the afocal optical element43into p-polarized blue light BLp. That is, the first phase retarder44converts the blue light BL1incident thereon into light that is a mixture of s-polarized blue light BLs and the p-polarized blue light BLp. A pivot apparatus may be provided to cause the first phase retarder44to pivot around a pivotal axis extending along the illumination optical axis Ax1. In this case, the ratio between the s-polarized blue light BLs and the p-polarized blue light BLp in the luminous flux having exited out of the first phase retarder44can be adjusted in accordance with the angle of the pivotal movement of the first phase retarder44. The s-polarized light is s-polarized light with respect to the light separator46, and the p-polarized light is p-polarized light with respect to the light separator46. Configuration of Diffusive Transmitter The diffusive transmitter45is disposed in a position shifted in the direction +X from the first phase retarder44and homogenizes the illuminance distributions of the blue light BLs and the blue light BLp incident from the first phase retarder44.
The diffusive transmitter 45 can, for example, have a configuration including a hologram, a configuration in which a plurality of lenslets are arranged in a plane perpendicular to the optical axis, or a configuration in which a light passage surface is a rough surface. The diffusive transmitter 45 may be replaced with a homogenizer optical element including a pair of multi-lenses.

Configuration of Light Separator

The blue light BLs and the blue light BLp are incident on the light separator 46 from the diffusive transmitter 45. The light separator 46 corresponds to a reflector. The light separator 46 causes a first portion of the light outputted from the light sources 421 to exit toward the wavelength conversion apparatus 6 and a second portion of the light to exit toward the diffuser 49. In detail, the light separator 46 is a polarizing beam splitter that separates the s-polarized light component and the p-polarized light component contained in the light incident on the light separator 46 from each other, reflects the s-polarized light component, and transmits the p-polarized light component. The light separator 46 also has color separation characteristics that cause it to transmit light having a predetermined wavelength and longer wavelengths irrespective of whether the light is the s-polarized or the p-polarized component. The light separator 46 therefore transmits the p-polarized blue light BLp out of the blue light BLs and the blue light BLp incident from the diffusive transmitter 45 to cause the blue light BLp to enter the second phase retarder 47, and reflects the s-polarized blue light BLs toward the wavelength conversion apparatus 6.
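The adjustable s/p power split described above can be modeled with Malus-type fractions, on the assumption (the text only calls element 44 a phase retarder) that the first phase retarder 44 is a half-wave plate whose fast axis is pivoted by an angle theta.

```python
import numpy as np

# If the first phase retarder 44 is a half-wave plate (an assumption), pivoting
# its fast axis by theta rotates the incoming s-polarization by 2*theta, and
# the polarizing beam splitter 46 then splits the power accordingly.
def pbs_split_fractions(theta_rad):
    s_reflected = np.cos(2.0 * theta_rad) ** 2    # BLs, toward the converter
    p_transmitted = np.sin(2.0 * theta_rad) ** 2  # BLp, toward the diffuser
    return s_reflected, p_transmitted
```

At theta = 0 all of the light stays s-polarized and goes to the wavelength conversion apparatus; at theta = 22.5 degrees the split is 50/50.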
The light separator 46 may instead have the function of a half-silvered mirror that transmits part of the light incident from the diffusive transmitter 45 and reflects the remaining light, combined with the function of a dichroic mirror that reflects the blue light BLs incident from the second phase retarder 47 and transmits the fluorescence YL incident from the wavelength conversion apparatus 6. In this case, the first phase retarder 44 and the second phase retarder 47 can be omitted. In the present specification, the s-polarized blue light BLs separated by the light separator 46 is an example of first light having a first wavelength band outputted from the light sources 421. That is, the light separator 46 reflects the first light outputted from the light sources 421 to guide the reflected first light to a wavelength converter 63.

Configuration of Second Phase Retarder

The second phase retarder 47 is disposed in a position shifted in the direction +X from the light separator 46. The second phase retarder 47 converts the blue light BLp incident in the direction +X from the light separator 46 into circularly polarized blue light BLc. The second phase retarder 47 also converts the circularly polarized blue light BLc incident in the direction −X from the light collector 48 into s-polarized blue light BL2.

Configuration of Light Collector

The light collector 48 is disposed in a position shifted in the direction +X from the second phase retarder 47 and collects the blue light BLc incident from the second phase retarder 47 at the diffuser 49. The light collector 48 also parallelizes the blue light BLc incident from the diffuser 49 and causes the parallelized blue light BLc to exit to the second phase retarder 47. The light collector 48 is formed of two lenses 481 and 482, but the number of lenses that form the light collector 48 can be changed as appropriate.
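The double pass through the second phase retarder 47, which behaves as a quarter-wave plate, can be checked with Jones calculus. This sketch takes p-polarization as the horizontal basis vector, puts the fast axis at 45 degrees, and, as a simplification, models the retroreflection off the diffuser as an identity in the fixed lab frame; these conventions are assumptions, since the text does not spell them out.

```python
import numpy as np

# Jones matrix of a quarter-wave plate with its fast axis at angle theta
# (global phase factor dropped).
def qwp(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c*c + 1j*s*s, (1 - 1j)*s*c],
                     [(1 - 1j)*s*c, s*s + 1j*c*c]])

p_in = np.array([1.0, 0.0])   # p-polarized BLp entering the retarder
q = qwp(np.pi / 4)            # fast axis at 45 deg to the p direction

circ = q @ p_in               # first pass: circularly polarized BLc
back = q @ circ               # return pass after the diffuser: BL2
```

The first pass yields equal amplitudes with a 90-degree phase difference (circular polarization), and the double-passed beam carries only the orthogonal component, i.e. it exits s-polarized, as the text states.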
Configuration of Diffuser

The diffuser 49 reflects in the direction −X the blue light BLc incident from the light collector 48 in such a way that the reflected blue light BLc diffuses at an angle of diffusion equal to the angle of diffusion of the light from the wavelength converter 63, which will be described later. The diffuser 49, for example, reflects the blue light BLc incident thereon in the Lambertian reflection scheme. The blue light BLc reflected off the diffuser 49 passes through the light collector 48 along the direction −X and then enters the second phase retarder 47. When reflected off the diffuser 49, the blue light BLc is converted into circularly polarized light having a direction of polarization rotation opposite from that of the blue light BLc having passed in the direction +X through the second phase retarder 47. The blue light BLc having entered the second phase retarder 47 via the light collector 48 is therefore converted into the s-polarized blue light BL2 by the second phase retarder 47. The blue light BL2 incident on the light separator 46 from the second phase retarder 47 is reflected in the direction +Z off the light separator 46 and enters the third phase retarder 50.

Configuration of Third Phase Retarder

The third phase retarder 50 is disposed in a position shifted in the direction +Z from the light separator 46 and converts the blue light BL2 and the fluorescence YL incident from the light separator 46 into white light that is a mixture of s-polarized light and p-polarized light. The thus converted white light exits as the illumination light WL to the homogenizing apparatus 31. That is, the light outputted from the light source apparatus 4A to the homogenizing apparatus 31 is the illumination light WL, which is the mixture of the blue light BL2 and the fluorescence YL.

Configuration of Wavelength Conversion Apparatus

The wavelength conversion apparatus 6 converts the wavelength of light incident thereon and emits the converted light.
That is, the wavelength conversion apparatus 6 outputs the fluorescence YL, which is the result of the conversion of the wavelength of the blue light BLs incident on the light separator 46. The wavelength conversion apparatus 6 includes the following components sequentially from the side facing the light separator 46: an optical element 61; a light collector 62; a wavelength converter 63; a substrate 64; a heat dissipation member 65; and a driver 66, which rotates the optical element 61 and the light collector 62 around an axis of rotation Rx.

Configuration of Optical Element

The optical element 61 refracts the blue light BLs incident from the light separator 46 and causes the refracted blue light BLs to exit toward the light collector 62. The optical element 61 also refracts the fluorescence YL incident from the light collector 62 and causes the refracted fluorescence YL to exit toward the light separator 46. The optical element 61 is a plate-shaped light-transmissive member and is made of glass in the present embodiment. The optical element 61 has a first surface 611 and a second surface 612 disposed on the side opposite from the first surface 611. The first surface 611 and the second surface 612 each incline with respect to a plane perpendicular to the direction +Z. The first surface 611 and the second surface 612 are parallel to each other. The state in which the first surface 611 and the second surface 612 are parallel to each other includes a state in which the two surfaces are substantially parallel to each other. The first surface 611 is disposed on the side facing the direction +Z. The first surface 611 is a surface on which the blue light BLs is incident from the light separator 46. The first surface 611 is also a surface via which the fluorescence YL incident from the light collector 62 exits to the light separator 46. The blue light BLs having exited out of the light separator 46 in the direction −Z along the illumination optical axis Ax2 is incident on the first surface 611.
The blue light BLs incident on the first surface 611 is refracted when it enters the optical element 61. The optical axis of the blue light BLs incident on the first surface 611 of the optical element 61 is hereinafter referred to as a first optical axis X1. The second surface 612 is disposed on the side facing in the direction −Z. The second surface 612 is a surface via which the blue light BLs having traveled in the optical element 61 exits toward the light collector 62. That is, the second surface 612 is a surface via which the blue light BLs incident on the first surface 611 and refracted thereby exits in the direction −Z toward the light collector 62. The optical axis of the blue light BLs having exited via the second surface 612 is hereinafter referred to as a second optical axis X2. That is, a light exiting position on the second surface 612, the position from which the blue light BLs as the first light exits, is located on the second optical axis X2. The second surface 612 is also a surface on which the fluorescence YL is incident from the wavelength converter 63. That is, the fluorescence YL having exited out of the light collector 62 in the direction +Z is incident on the second surface 612. The fluorescence YL incident on the second surface 612 is refracted when it enters the optical element 61. The fluorescence YL having traveled in the optical element 61 exits via the first surface 611 in the direction +Z toward the light separator 46. The fluorescence YL having exited via the first surface 611 in the direction +Z travels along the optical path of the blue light BLs incident on the first surface 611, in the direction opposite from the direction in which the blue light BLs travels.

Configuration of Light Collector

The light collector 62 is disposed in a position shifted in the direction −Z from the optical element 61. That is, the light collector 62 is disposed between the optical element 61 and the wavelength converter 63 in the direction +Z.
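The lateral separation between the first optical axis X1 and the second optical axis X2 follows from the standard displacement formula for a tilted plane-parallel plate. The thickness, tilt angle, and refractive index below are illustrative assumptions; the text says only that the optical element 61 is an inclined glass plate with parallel surfaces.

```python
import math

# Lateral displacement of a beam passing through a tilted plane-parallel
# plate of thickness t and refractive index n at incidence angle theta:
#   d = t * sin(theta) * (1 - cos(theta) / sqrt(n**2 - sin(theta)**2))
# This d is the offset between the axes X1 and X2 at the second surface.
def plate_displacement(t_mm, theta_rad, n):
    s = math.sin(theta_rad)
    return t_mm * s * (1.0 - math.cos(theta_rad) / math.sqrt(n * n - s * s))

d = plate_displacement(3.0, math.radians(10.0), 1.52)  # roughly 0.18 mm
```

Because the surfaces are parallel, the exiting beam stays parallel to X1; only this fixed offset d rotates with the plate, which is what makes the exit position easy to predict.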
The light collector 62 collects the blue light BLs incident from the optical element 61 at the wavelength converter 63. The light collector 62 also parallelizes the fluorescence YL incident from the wavelength converter 63 and causes the parallelized fluorescence YL to exit to the optical element 61. The light collector 62 includes a first lens 621 disposed on the side facing in the direction +Z and a second lens 622 disposed on the side facing in the direction −Z. The number of lenses that form the light collector 62 may, however, be one, or three or more. The thus configured light collector 62 is so disposed that the optical axis of the light collector 62 coincides with the second optical axis X2. The focal point of the light collector 62 is therefore present on the second optical axis X2. The state in which the optical axis of the light collector 62 coincides with the second optical axis X2 includes a state in which the optical axes substantially coincide with each other.

Configuration of Wavelength Converter

The wavelength converter 63 converts the first light having the first wavelength band into second light having a second wavelength band different from the first wavelength band. That is, the wavelength converter 63 converts the wavelength of the blue light BLs incident from the light collector 62 and emits the fluorescence YL, which is converted light resulting from the conversion. In the present embodiment, the wavelength converter 63 is a reflective wavelength converter that emits the fluorescence YL toward the side on which the blue light BLs is incident. The wavelength converter 63 includes a wavelength conversion layer 631 and a reflection layer 633. The wavelength conversion layer 631 contains a phosphor that generates the fluorescence YL having wavelengths longer than the wavelength of the blue light BLs. The fluorescence YL is, for example, light having a peak wavelength that falls within a range from 500 to 700 nm and contains green light and red light.
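The wavelength gap between the 440 nm pump and the longer-wavelength fluorescence sets a lower bound on the heat deposited in the wavelength conversion layer 631, which is one reason the local heating addressed later matters. The 550 nm emission wavelength below is an assumed representative value inside the 500 to 700 nm range given above.

```python
# Quantum-defect (Stokes-shift) estimate: even at unit quantum yield, each
# converted photon releases the pump/emission energy difference as heat
# in the phosphor layer.
pump_nm = 440.0        # blue laser peak wavelength (from the text)
emission_nm = 550.0    # assumed representative fluorescence wavelength

stokes_efficiency = pump_nm / emission_nm  # photon energy ratio E_em / E_pump
heat_fraction = 1.0 - stokes_efficiency    # fraction of pump power left as heat
```

Under these assumptions, at least about 20 percent of the absorbed pump power becomes heat in the converter, independent of any additional absorption or scattering losses.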
The fluorescence YL is an example of the second light having the second wavelength band different from the first wavelength band. A surface of the wavelength conversion layer 631, the surface facing in the direction +Z, is a light incident surface 632, on which the blue light BLs is incident. That is, the wavelength converter 63 has the light incident surface 632, on which the blue light BLs is incident. The light incident surface 632 intersects with the first optical axis X1 at the center of the light incident surface 632 when viewed in the direction +Z. That is, the axis of rotation Rx, which coincides with the first optical axis X1, intersects with the light incident surface 632 at the center thereof when viewed in the direction +Z. The state in which the first optical axis X1 and the axis of rotation Rx intersect with the light incident surface 632 at the center thereof includes a state in which the two axes intersect with the light incident surface 632 at a point close to the center thereof. On the other hand, the position where the second optical axis X2 intersects with the light incident surface 632 is separate from the position where the first optical axis X1 intersects with the light incident surface 632. That is, the first optical axis X1 and the second optical axis X2 are separate from each other at the light incident surface 632. The position where the second optical axis X2 intersects with the light incident surface 632 is the position where the blue light BLs is incident on the light incident surface 632. In the present embodiment, the wavelength conversion layer 631 is formed substantially in a circular shape when viewed in the direction +Z, but not necessarily. The wavelength conversion layer 631 may instead be formed substantially in a rectangular shape or an annular shape when viewed in the direction +Z. The reflection layer 633 is provided on the opposite side of the wavelength conversion layer 631 from the side on which the blue light BLs is incident.
That is, the reflection layer 633 is provided in a position shifted in the direction −Z from the wavelength conversion layer 631. The reflection layer 633 reflects in the direction +Z the light incident thereon from the wavelength conversion layer 631. The reflection layer 633 is also the portion of the wavelength converter 63 where the wavelength converter 63 is coupled to the substrate 64. The fluorescence YL emitted from the wavelength converter 63 in the direction +Z enters the light collector 62. The fluorescence YL having entered the light collector 62 is parallelized by the light collector 62 and is incident on the second surface 612 of the optical element 61. The fluorescence YL incident on the second surface 612 is refracted by the optical element 61 and exits in the direction +Z via the first surface 611. The fluorescence YL having exited via the first surface 611 travels along the first optical axis X1, is incident on the light separator 46, passes through the light separator 46, and enters the third phase retarder 50.

Configurations of Substrate and Heat Dissipation Member

The substrate 64 supports the wavelength converter 63. The substrate 64 has a support surface 641, which supports the wavelength converter 63, and the support surface 641 is thermally coupled to the outer surface of the rear surface 412. The substrate 64 is made of a material, such as metal, that readily transmits the heat of the wavelength converter 63. The heat dissipation member 65 is provided at the opposite surface of the substrate 64 from the wavelength converter 63. The heat dissipation member 65 dissipates the heat of the wavelength converter 63 transmitted from the substrate 64 to the space outside the light source enclosure 41. The heat dissipation member 65 has a plurality of fins 651, and a cooling gas circulated by the cooler flows through the gaps between the plurality of fins 651.
The plurality of fins 651 dissipate the heat of the wavelength converter 63 by transmitting it to the cooling gas.

Configuration of Driver

The driver 66 rotates the optical element 61 and the light collector 62 around the axis of rotation Rx. In detail, the driver 66 rotates the optical element 61 and the light collector 62 as a unit around the axis of rotation Rx. The frequency of the rotation of the optical element 61 and the light collector 62 rotated by the driver 66 can be set at any value; when the frequency is 60 Hz or higher, a user is unlikely to recognize flickers in an image. The axis of rotation Rx coincides with the first optical axis X1, as described above. The state in which the axis of rotation Rx coincides with the first optical axis X1 includes a state in which the axis of rotation Rx substantially coincides with the first optical axis X1. The driver 66 can be formed, for example, of a hollow motor that accommodates the optical element 61 and the light collector 62. A situation in which the blue light BLs and the fluorescence YL passing through the optical element 61 and the light collector 62 are blocked by the driver 66 can therefore be avoided. The configuration described above is, however, not necessarily employed, and the driver 66 may instead include a holding member that holds the optical element 61 and the light collector 62 and rotate the holding member around the axis of rotation Rx.

Position where Blue Light is Incident on Wavelength Converter

FIG. 3 is a plan view, viewed in the direction +Z, showing a light incident position SP, where the blue light BLs is incident on the light incident surface 632 of the wavelength converter 63. The first optical axis X1 is the optical axis of the blue light BLs incident on the optical element 61. The second optical axis X2 is the optical axis of the blue light BLs that is refracted by the optical element 61 and exits via the second surface 612.
The optical axis of the light collector 62 coincides with the second optical axis X2, and the focal point of the light collector 62 is present on the second optical axis X2. The first optical axis X1, which coincides with the axis of rotation Rx, and the second optical axis X2 are separate from each other at the light incident surface 632, as shown in FIG. 3. The light incident position SP, where the blue light BLs is incident on the light incident surface 632, corresponds to the intersection where the second optical axis X2 intersects with the light incident surface 632. When the driver 66 rotates the optical element 61 and the light collector 62 around the axis of rotation Rx, the light incident position SP is located on a circular trajectory around the first optical axis X1. That is, the light incident position SP continuously moves with time on the light incident surface 632 along the circumferential direction around the intersection where the first optical axis X1 intersects with the light incident surface 632. As described above, the configuration in which the light incident position SP, where the blue light BLs as the excitation light is incident, continuously changes with time on the light incident surface 632 can avoid the continuous local incidence of the blue light BLs on the light incident surface 632. The situation in which the temperature of the light incident surface 632 locally rises can therefore be avoided, whereby a decrease in the conversion efficiency at which the wavelength converter 63 converts the blue light BLs into the fluorescence YL can be suppressed. Furthermore, since the focal point of the light collector 62 is located on the second optical axis X2, the blue light BLs is focused at the intersection of the second optical axis X2 and the light incident surface 632, so that the range over which the blue light BLs is incident on the light incident surface 632 can be reduced.
The range over which the fluorescence YL is emitted from the wavelength converter 63 can therefore be reduced, whereby the amount of the fluorescence YL that can be used by an optical system downstream of the wavelength converter 63 can be increased.

Effects of First Embodiment

The projector 1 according to the present embodiment described above provides the following effects. The projector 1 includes the light source apparatus 4A, the light modulation apparatuses 34, which modulate the illumination light WL outputted from the light source apparatus 4A to form an image, and the projection optical apparatus 36, which projects the image formed by the light modulation apparatuses 34. The light source apparatus 4A includes the light sources 421, the optical element 61, the light collector 62, the wavelength converter 63, the substrate 64, and the driver 66. The light sources 421 output the blue light BLs. The blue light BLs corresponds to the first light having the first wavelength band. The wavelength converter 63 converts the blue light BLs into the fluorescence YL. The wavelength converter 63 has the light incident surface 632, on which the blue light BLs is incident. The fluorescence YL corresponds to the second light having the second wavelength band different from the first wavelength band. The substrate 64 supports the wavelength converter 63. The optical element 61 is provided in the optical path of the blue light BLs outputted from the light sources 421 and incident on the wavelength converter 63. The light collector 62 collects the blue light BLs having exited out of the optical element 61 at the wavelength converter 63. The driver 66 rotates the optical element 61 and the light collector 62 around the axis of rotation Rx parallel to the first optical axis X1 of the blue light BLs incident on the optical element 61. The optical element 61 has the first surface 611, on which the blue light BLs is incident, and the second surface 612, via which the blue light BLs exits toward the light collector 62.
The light incident position where the blue light BLs is incident on the first surface 611 is present on the first optical axis X1. The light exiting position on the second surface 612, the position from which the blue light BLs exits, and the focal point of the light collector 62 are present on the second optical axis X2 parallel to the first optical axis X1. The first optical axis X1 and the second optical axis X2 are separate from each other at the light incident surface 632. The configuration described above, in which the driver 66 rotates the optical element 61 and the light collector 62 around the axis of rotation Rx, allows movement of the light incident position SP, where the blue light BLs is incident, that is, the intersection where the second optical axis X2 intersects with the light incident surface 632 of the wavelength converter 63. In this process, the focal point of the light collector 62 is present on the second optical axis X2, so that the blue light BLs is focused at the intersection where the second optical axis X2 intersects with the light incident surface 632. The range over which the blue light BLs is incident on the light incident surface 632 can therefore be reduced, whereby the range over which the fluorescence YL is emitted from the wavelength converter 63 can be reduced. As a result, the amount of the fluorescence YL that can be used by an optical system downstream of the wavelength converter 63 can be increased. In addition to the above, since the continuous local incidence of the blue light BLs on the wavelength converter 63 can be avoided, the situation in which the temperature of the wavelength converter 63 locally rises can be suppressed. A decrease in the conversion efficiency at which the wavelength converter 63 converts the blue light BLs into the fluorescence YL can therefore be suppressed, whereby the efficiency at which the fluorescence YL is extracted from the wavelength converter 63 can be increased.
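For the case where the axis of rotation Rx lies on the first optical axis X1, the motion of the light incident position SP can be sketched as a circle whose radius equals the plate's lateral beam displacement. The radius and the 60 Hz rotation rate below are illustrative values (60 Hz being the flicker threshold mentioned earlier), not figures given in the text.

```python
import math

# Position of the spot SP on the light incident surface while the driver 66
# spins the plate/collector assembly about Rx (taken here to coincide with
# X1). The spot traces a circle of radius d, the plate's lateral beam
# displacement, centered on the intersection of X1 with the surface.
def sp_position(d_mm, freq_hz, t_s):
    angle = 2.0 * math.pi * freq_hz * t_s
    return (d_mm * math.cos(angle), d_mm * math.sin(angle))

x0, y0 = sp_position(0.18, 60.0, 0.0)          # start of one revolution
xh, yh = sp_position(0.18, 60.0, 1.0 / 120.0)  # half a revolution later
```

The spot is always a distance d from the center, so the pump power is spread over the whole ring once per revolution instead of dwelling on one point of the phosphor.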
In the light source apparatus 4A, the axis of rotation Rx coincides with the first optical axis X1. Consider now a case where the axis of rotation Rx around which the optical element 61 and the light collector 62 are rotated does not coincide with the first optical axis X1. When the optical element 61 and the light collector 62 are rotated by the driver 66, the trajectory of the light incident position SP, where the blue light BLs is incident on the light incident surface 632, is located outside the trajectory, around the axis of rotation Rx, of the portion where the first optical axis X1 intersects with the light incident surface 632. That is, the trajectory of the light incident position SP is concentric with, and disposed outside, the trajectory of the portion where the first optical axis X1 intersects with the light incident surface 632. In this case, since the trajectory of the light incident position SP, where the blue light BLs is incident, has a relatively large diameter, the wavelength converter 63 tends to be large in order to avoid loss of the blue light BLs incident on the wavelength converter 63. In contrast, when the axis of rotation Rx coincides with the first optical axis X1, the diameter of the trajectory of the light incident position SP, where the blue light BLs is incident on the light incident surface 632, can be smaller than in the case where the axis of rotation Rx does not coincide with the first optical axis X1. The size of the wavelength converter 63 can therefore be reduced. In the light source apparatus 4A, the first surface 611 inclines with respect to a plane perpendicular to the first optical axis X1, and the first surface 611 and the second surface 612 are parallel to each other. According to the configuration described above, the blue light BLs incident on the first surface 611 can be refracted by the optical element 61.
The light incident position SP, where the blue light BLs is incident on the light incident surface 632, can therefore be changed when the driver 66 rotates the optical element 61 and the light collector 62. Furthermore, since the first surface 611 and the second surface 612 are parallel to each other, the light exiting position from which the blue light BLs exits via the second surface 612 can be readily grasped. The light collector 62, the focal point of which is located on the second optical axis X2, can therefore be readily disposed.

Second Embodiment

A second embodiment of the present disclosure will next be described. The projector according to the present embodiment has the same configuration as that of the projector 1 according to the first embodiment but differs therefrom in terms of the configuration of the light source apparatus. Specifically, the light source apparatus provided in the projector according to the present embodiment further includes an afocal optical element in addition to the configuration of the light source apparatus 4A according to the first embodiment. In the following description, portions that are the same as or substantially the same as the portions having already been described have the same reference characters and will not be described again.

Configuration of Light Source Apparatus

FIG. 4 is a diagrammatic view showing a light source apparatus 4B according to the present embodiment. The projector according to the present embodiment has the same configuration and function as those of the projector 1 according to the first embodiment except that the light source apparatus 4A according to the first embodiment is replaced with the light source apparatus 4B shown in FIG. 4. The light source apparatus 4B has the same configuration and function as the light source apparatus 4A except that an afocal optical element 51 is further provided.
The afocal optical element 51 is disposed on the illumination optical axis Ax2 between the light separator 46 and the wavelength conversion apparatus 6. That is, the afocal optical element 51 is provided on the side of the optical element 61 on which the blue light BLs is incident, in the optical path of the blue light BLs to be incident on the wavelength converter 63. The afocal optical element 51 reduces the luminous flux diameter of the blue light BLs having exited out of the light separator 46 and to be incident on the optical element 61, and parallelizes the blue light BLs. The afocal optical element 51 also increases the luminous flux diameter of the fluorescence YL incident from the optical element 61 and parallelizes the fluorescence YL. The afocal optical element 51 has a first lens 511 provided on the side facing in the direction +Z and a second lens 512 provided on the side facing in the direction −Z. The first lens 511 collects the blue light BLs incident from the light separator 46. The second lens 512 parallelizes the blue light BLs collected by the first lens 511 and causes the parallelized blue light BLs to exit in the direction −Z. The blue light BLs having exited out of the second lens 512 enters the optical element 61 of the wavelength conversion apparatus 6. The second lens 512 also increases the luminous flux diameter of the fluorescence YL incident from the optical element 61. The first lens 511 parallelizes the fluorescence YL incident from the second lens 512. The fluorescence YL having exited out of the first lens 511 in the direction +Z is incident on the light separator 46.

Effects of Second Embodiment

The projector according to the present embodiment described above provides the effects below as well as the same effects as those provided by the projector 1 according to the first embodiment.
The light source apparatus 4B includes the afocal optical element 51, which is provided on the side of the optical element 61 on which the blue light BLs is incident, in the optical path of the blue light BLs to be incident on the wavelength converter 63, and which reduces the luminous flux diameter of the blue light BLs and parallelizes the blue light BLs. According to the configuration described above, the afocal optical element 51 can reduce the luminous flux diameter of the blue light BLs to be incident on the optical element 61. The sizes of the optical element 61, the light collector 62, and the wavelength converter 63 can thus be reduced. The size of the light source apparatus 4B can therefore be reduced.

Third Embodiment

A third embodiment of the present disclosure will next be described. The projector according to the present embodiment has the same configuration as that of the projector according to the first embodiment but differs therefrom in terms of the configuration of the light source apparatus. Specifically, the light source apparatus provided in the projector according to the present embodiment further includes an afocal optical element in addition to the configuration of the light source apparatus 4A according to the first embodiment. Furthermore, the position of the afocal optical element in the present embodiment differs from the position of the afocal optical element in the light source apparatus 4B according to the second embodiment. In the following description, portions that are the same as or substantially the same as the portions having already been described have the same reference characters and will not be described again. FIG. 5 is a diagrammatic view showing the configuration of a light source apparatus 4C according to the present embodiment.
The projector according to the present embodiment has the same configuration and function as those of the projector 1 according to the first embodiment except that the light source apparatus 4A according to the first embodiment is replaced with the light source apparatus 4C shown in FIG. 5. The light source apparatus 4C has the same configuration and function as the light source apparatus 4A except that an afocal optical element 52 is further provided. The afocal optical element 52 is disposed in the light source enclosure 41. Specifically, the afocal optical element 52 is disposed on the illumination optical axis Ax2 on the side of the third phase retarder 50 toward which the illumination light WL exits. That is, the afocal optical element 52 is disposed between the third phase retarder 50 and the homogenizing apparatus 31. The afocal optical element 52 increases the luminous flux diameter of the illumination light WL incident from the third phase retarder 50, parallelizes the illumination light WL having the increased diameter, and causes the illumination light WL to exit. That is, the afocal optical element 52 increases the luminous flux diameter of the fluorescence YL emitted from the wavelength converter 63 and parallelizes the fluorescence YL having the increased diameter. The afocal optical element 52 includes a first lens 521 and a second lens 522. The first lens 521 is disposed on the side facing in the direction −Z and increases the diameter of the light incident from the third phase retarder 50. The second lens 522 parallelizes the light incident in the direction +Z from the first lens 521. The afocal optical element 52 may instead be provided on the illumination optical axis Ax2 between the light separator 46 and the third phase retarder 50.

Effects of Third Embodiment

The projector according to the present embodiment described above provides the effects below as well as the same effects as those provided by the projector 1 according to the first embodiment.
The light source apparatus 4C includes the afocal optical element 52, which increases the luminous flux diameter of the fluorescence YL emitted from the wavelength converter 63 and parallelizes the light having the increased diameter. According to the configuration described above, the afocal optical element 52 can increase the luminous flux diameter of the fluorescence YL to be outputted from the light source apparatus 4C. Therefore, when the light source apparatus 4C is so configured that the luminous flux diameter of the illumination light WL outputted from the light source apparatus 4C is equal to the luminous flux diameter of the illumination light WL outputted from a light source apparatus without the afocal optical element 52, the afocal optical element 52 allows reduction in the sizes of the parts upstream thereof, in the optical paths of the blue light and the fluorescence, out of the parts that form the light source apparatus 4C. The size of the light source apparatus 4C can therefore be reduced.

Variations of Embodiments

The present disclosure is not limited to the embodiments described above, and variations, improvements, and other modifications to the extent that the advantage of the present disclosure is achieved fall within the scope of the present disclosure. In the embodiments described above, it is assumed that the light source apparatuses 4A, 4B, and 4C each include the light separator 46, which guides part of the blue light outputted from the light sources 421 to the wavelength converter 63 and guides the other part of the blue light to the diffuser 49, but not necessarily. The light source apparatuses according to the present disclosure may each be configured to cause the entire blue light outputted from the light sources 421 to enter the wavelength converter 63.
In this case, for example, the light source apparatus may be configured to output white light by combining blue light outputted from another light source with the fluorescence YL generated by the wavelength converter 63. In the embodiments described above, it is assumed that the axis of rotation Rx, around which the driver 66 rotates the optical element 61 and the light collector 62, coincides with the first optical axis X1, which is the optical axis of the blue light BLs incident on the optical element 61, but not necessarily. The axis of rotation Rx may not coincide with the first optical axis X1. In the embodiments described above, it is assumed that the first surface 611 of the optical element 61 inclines with respect to a plane perpendicular to the first optical axis X1, but not necessarily. The first surface of the optical element, the surface on which the first light is incident, may not incline with respect to a plane perpendicular to the first optical axis. That is, the optical element only needs to be capable of changing the traveling direction of the first light incident on the first surface and causing the first light to exit in parallel to the first optical axis via the second surface. Furthermore, it is assumed that the first surface 611 and the second surface 612 are parallel to each other, but not necessarily. The first surface and the second surface of the optical element may not be parallel to each other. In the embodiments described above, it is assumed that the axis of rotation Rx, around which the optical element 61 and the light collector 62 are rotated, intersects with the light incident surface 632, at the center thereof, of the wavelength conversion layer 631 provided in the wavelength converter 63, but not necessarily.
The axis of rotation around which the optical element and the light collector are rotated may intersect with a portion of the light incident surface, on which the first light is incident, of the wavelength converter, the portion excluding the center of the light incident surface. In the embodiments described above, it is assumed that the light source apparatuses 4A, 4B, and 4C each include the light separator 46, which reflects the blue light BLs outputted from the light sources 421 to guide the reflected blue light BLs to the wavelength converter 63, but not necessarily. The light separator 46 may be omitted. That is, the wavelength conversion apparatus 6 may be disposed on the illumination optical axis Ax1, and the blue light BLs outputted from the light sources 421 along the illumination optical axis Ax1 may be caused to enter the wavelength conversion apparatus 6. It is further assumed in the embodiments described above that the light separator 46 corresponds to the reflector. However, the reflector in the present disclosure does not necessarily reflect part of the light incident thereon and transmit the other part of the light in accordance with the wavelength or the polarization state of the light and may instead be a total reflection mirror that reflects substantially the entire incident light. In the second embodiment described above, it is assumed that the light source apparatus 4B includes the afocal optical element 51, and that the afocal optical element 51 is provided on a side of the optical element 61, the side on which the blue light BLs is incident, in the optical path of the blue light BLs to be incident on the wavelength converter 63, reduces the luminous flux diameter of the blue light BLs, and parallelizes the blue light BLs.
That is, it is assumed that the light source apparatus 4B includes the afocal optical element 51, and that the afocal optical element 51 is provided between the light separator 46 and the optical element 61, reduces the diameter of the luminous flux incident from the light separator 46 on the optical element 61, and parallelizes the luminous flux having the reduced diameter. The afocal optical element 51 may be so disposed between the optical element 61 and the light collector 62 that the optical axis of the afocal optical element 51 coincides with the second optical axis X2. In this case, the driver 66 may rotate the optical element 61, the light collector 62, and the afocal optical element 51 as a unit around the axis of rotation Rx. In the embodiments described above, it is assumed that the wavelength converter 63 includes the wavelength conversion layer 631, which converts the blue light BLs into the fluorescence YL, and the reflection layer 633, which reflects the light incident from the wavelength conversion layer 631, but not necessarily. The reflection layer 633 may be omitted. In this case, the substrate 64 may be configured to reflect the light incident from the wavelength conversion layer 631. Furthermore, the wavelength converter 63 may be configured to cause the fluorescence YL to exit along the direction in which the blue light BLs is incident. That is, the wavelength converter in the present disclosure may be a transmissive wavelength converter. In the embodiments described above, it is assumed that the wavelength conversion apparatus 6 includes the substrate 64 and the heat dissipation member 65, but not necessarily. The substrate 64 may be omitted, and the heat dissipation member 65 may be omitted. Furthermore, the substrate 64 need not necessarily be thermally coupled to the light source enclosure 41. In the first embodiment described above, it is assumed that the light source apparatus 4A has the configuration and layout shown in FIG. 2.
In the second embodiment described above, it is assumed that the light source apparatus 4B has the configuration and layout shown in FIG. 4. In the third embodiment described above, it is assumed that the light source apparatus 4C has the configuration and layout shown in FIG. 5. The configuration and layout of any of the light source apparatuses according to the present disclosure are, however, not limited to those described above. The same holds true for the projector including any of the light source apparatuses according to the present disclosure. In the embodiments described above, the light modulation apparatuses 34 include the three light modulators 34B, 34G, and 34R, but not necessarily. The number of light modulators that form the light modulation apparatuses is not limited to three and can be changed as appropriate. It is further assumed in the embodiments described above that the light modulators 34B, 34G, and 34R each include a transmissive liquid crystal panel having a light incident surface and a light exiting surface different from each other, but not necessarily. The light modulators may each include a reflective liquid crystal panel having a surface that serves both as the light incident surface and the light exiting surface. Still instead, the light modulators 34B, 34G, and 34R may each be a light modulator using any component other than a liquid-crystal-based component, as long as the modulators can modulate an incident luminous flux to form an image according to image information, such as a device using micromirrors, for example, a digital micromirror device (DMD). The aforementioned embodiments have been described with reference to the case where the light source apparatuses according to the present disclosure are each incorporated in a projector, but not necessarily. The light source apparatuses according to the present disclosure may each be used in an electronic instrument other than a projector, for example, an illuminator or a headlight of an automobile.
Overview of Present Disclosure

The present disclosure will be summarized below as additional remarks. A light source apparatus according to a first aspect of the present disclosure includes a light source that outputs first light having a first wavelength band, a wavelength converter that converts the first light into second light having a second wavelength band different from the first wavelength band, a substrate that supports the wavelength converter, an optical element provided in the optical path of the first light outputted from the light source and incident on the wavelength converter, a light collector that collects the first light having exited out of the optical element at the wavelength converter, and a driver that rotates the optical element and the light collector around an axis of rotation parallel to a first optical axis of the first light incident on the optical element. The wavelength converter has a light incident surface on which the first light is incident. The optical element has a first surface on which the first light is incident and a second surface via which the first light exits toward the light collector. A light incident position on the first surface, the position on which the first light is incident, is present on the first optical axis. A light exiting position on the second surface, the position from which the first light exits, and the focal point of the light collector are present on a second optical axis parallel to the first optical axis. The first optical axis and the second optical axis are separate from each other at the light incident surface. According to the configuration described above, the driver can rotate the optical element and the light collector around the axis of rotation to move the light incident position, where the first light is incident, that is, the intersection where the second optical axis intersects with the light incident surface of the wavelength converter.
In this process, the focal point of the light collector is present on the second optical axis, so that the first light is focused at the intersection where the second optical axis intersects with the wavelength converter. The range over which the first light is incident on the light incident surface can therefore be reduced, whereby the range over which the second light is emitted from the wavelength converter can be reduced. In addition to the above, continuous local incidence of the first light on the wavelength converter can be avoided, whereby the situation in which the temperature of the wavelength converter locally rises can be suppressed. A decrease in the efficiency at which the wavelength converter converts the first light into the second light can therefore be suppressed, whereby the efficiency at which the second light is extracted from the wavelength converter can be increased. In the first aspect described above, the axis of rotation may coincide with the first optical axis. Consider now a case where the axis of rotation around which the optical element and the light collector are rotated does not coincide with the first optical axis. When the optical element and the light collector are rotated by the driver, the trajectory of the light incident position where the first light is incident on the light incident surface is located outside the trajectory, around the axis of rotation, of the portion where the first optical axis intersects with the light incident surface. That is, the trajectory of the light incident position is concentric with and disposed outside the trajectory of the portion where the first optical axis intersects with the light incident surface. In this case, since the trajectory of the light incident position of the first light has a relatively large diameter, the wavelength converter tends to be large to avoid loss of the first light incident on the wavelength converter.
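The geometry described above can be illustrated with a small numerical sketch (an illustrative aid, not part of the disclosure; the function names and the planar coordinate model are assumptions). The light incident position sits on the second optical axis, so rotating the assembly about the axis of rotation sweeps it along a circle; when the axis of rotation coincides with the first optical axis, the radius of that circle is simply the separation d between the two optical axes.

```python
import math

def spot_trajectory(d, rx_offset, steps=8):
    """Toy 2-D model of the light incident surface.

    The first optical axis pierces the surface at the origin, the second
    optical axis at (d, 0), and the axis of rotation Rx at (-rx_offset, 0),
    offset away from the second axis. Rotating the optical element and the
    light collector about Rx sweeps the light incident position along a
    circle centered on Rx.
    """
    cx = -rx_offset
    pts = []
    for k in range(steps):
        th = 2.0 * math.pi * k / steps
        # rotate the initial spot (d, 0) about (cx, 0) by angle th
        x = cx + (d - cx) * math.cos(th)
        y = (d - cx) * math.sin(th)
        pts.append((x, y))
    return pts

def trajectory_radius(d, rx_offset):
    """Radius of the swept circle: the distance from Rx to the spot."""
    return d + rx_offset
```

With `rx_offset = 0` (axis of rotation coinciding with the first optical axis) the trajectory radius is the minimum d, so the wavelength converter only needs to cover a circle of radius d plus the focused spot size; any offset enlarges the swept circle and hence the converter, matching the passage above.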
In contrast, when the axis of rotation coincides with the first optical axis, the diameter of the trajectory of the light incident position where the first light is incident on the light incident surface can be smaller than the diameter in the case where the axis of rotation does not coincide with the first optical axis. The size of the wavelength converter can therefore be reduced. In the first aspect described above, the first surface may incline with respect to a plane perpendicular to the first optical axis, and the first surface and the second surface may be parallel to each other. According to the configuration described above, the first light incident on the first surface can be refracted by the optical element. The light incident position where the first light is incident on the light incident surface can therefore be changed when the driver rotates the optical element and the light collector. Furthermore, since the first surface and the second surface are parallel to each other, the position from which the first light exits via the second surface can be readily grasped. The light collector, the focal point of which is located on the second optical axis, can therefore be readily disposed. In the first aspect described above, the light source apparatus may include an afocal optical element provided in the optical path of the first light incident on the wavelength converter and on a side of the optical element, the side on which the first light is incident, the afocal optical element reducing the luminous flux diameter of the first light and parallelizing the first light. According to the configuration described above, the afocal optical element can reduce the luminous flux diameter of the first light to be incident on the optical element. The sizes of the optical element, the light collector, and the wavelength converter can thus be reduced. The size of the light source apparatus can therefore be reduced. 
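The refraction behavior of such a plane-parallel optical element can be sketched numerically (illustrative only; the function name and sample values are assumptions, not taken from the disclosure). A plate whose first and second surfaces are parallel shifts a ray laterally without changing its direction, which is exactly what places the second optical axis parallel to, and separate from, the first:

```python
import math

def plate_lateral_shift(thickness, tilt_deg, n):
    """Lateral displacement of a ray traversing a tilted plane-parallel
    plate of refractive index n.

    Because the two surfaces are parallel, the exit ray stays parallel to
    the entrance ray (the first optical axis) but is offset sideways by
        d = t * sin(i) * (1 - cos(i) / sqrt(n**2 - sin(i)**2)),
    where i is the tilt (angle of incidence) and t the thickness.
    """
    i = math.radians(tilt_deg)
    return thickness * math.sin(i) * (
        1.0 - math.cos(i) / math.sqrt(n * n - math.sin(i) ** 2)
    )
```

No tilt (or n = 1) gives no offset, and the offset grows with the tilt angle, so the inclination of the first surface sets the separation between the first and second optical axes.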
In the first aspect described above, the light source apparatus may include an afocal optical element that increases the luminous flux diameter of the second light emitted from the wavelength converter and parallelizes the second light having the increased diameter. According to the configuration described above, the afocal optical element can increase the luminous flux diameter of the second light to be outputted from the light source apparatus. Therefore, when the light source apparatus is so configured that the diameter of the luminous flux outputted from the light source apparatus including the afocal optical element is equal to the diameter of the luminous flux outputted from a light source apparatus including no afocal optical element, the afocal optical element allows reduction in the sizes of the parts upstream thereof, in the optical paths of the first light and the second light, out of the parts that form the light source apparatus. The size of the light source apparatus can therefore be reduced. A projector according to a second aspect of the present disclosure includes the light source apparatus according to the first aspect described above, a light modulation apparatus that modulates illumination light outputted from the light source apparatus to form an image, and a projection optical apparatus that projects the image formed by the light modulation apparatus. The configuration described above can provide the same effects as those provided by the light source apparatus according to the first aspect.
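As an illustrative aside (not part of the disclosure; the function name and focal lengths are assumptions), the behavior of both afocal optical elements follows the usual telescopic relation: two lenses sharing a focal point convert a collimated beam of diameter D into a collimated beam of diameter D·|f2/f1|, so one sketch covers both the diameter-reducing element and the diameter-increasing element described above.

```python
def afocal_output_diameter(d_in, f1, f2):
    """Output beam diameter of an afocal (telescopic) lens pair.

    The two lenses share a focal point, so collimated input stays
    collimated and the beam diameter scales by |f2 / f1|.
    """
    return d_in * abs(f2 / f1)

# Diameter-reducing pair (as in the afocal element 51): shorter second
# focal length shrinks the beam.
reduced = afocal_output_diameter(10.0, 50.0, 25.0)   # -> 5.0
# Diameter-increasing pair (as in the afocal element 52): longer second
# focal length expands the beam.
expanded = afocal_output_diameter(5.0, 25.0, 50.0)   # -> 10.0
```

The same relation explains the size trade-off stated above: shrinking the internal beam lets the downstream optics be small, and re-expanding the output restores the original luminous flux diameter.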
11860523

DESCRIPTION OF EXEMPLARY EMBODIMENTS

First Embodiment

A first embodiment of the present disclosure will be described below with reference to FIGS. 1 to 7. FIG. 1 is a schematic configuration diagram showing a projector 1 according to the first embodiment. FIG. 2 is a schematic configuration diagram of a light source apparatus 2 provided in the projector 1. In the following drawings, components are drawn at different dimensional scales in some cases for clarification of each of the components. The projector 1 is an example of a projector using three transmissive liquid crystal light valves as a light modulator. The light modulator may instead be, for example, reflective liquid crystal light valves. The light modulator may still instead be a light modulator that is not based on a liquid crystal material, such as a device using a micromirror or an apparatus including a DMD (digital micromirror device).

Projector

The configuration of the projector 1 will first be described. The projector 1 includes the light source apparatus 2, a color separation system 3, light modulators 4R, 4G, and 4B, a light combining system 5, and a projection system 6, as shown in FIG. 1. The light source apparatus 2 outputs white light WL as illumination light. The color separation system 3 separates the white light WL outputted from the light source apparatus 2 into red light LR, green light LG, and blue light LB. The light modulators 4R, 4G, and 4B modulate the red light LR, the green light LG, and the blue light LB, respectively, in accordance with image information to form red image light, green image light, and blue image light. The light combining system 5 combines the red image light outputted from the light modulator 4R, the green image light outputted from the light modulator 4G, and the blue image light outputted from the light modulator 4B with one another. The projection system 6 projects the combined image light from the light combining system 5 toward a screen SCR.
The light source apparatus 2 outputs the white light WL, which is the illumination light, toward the color separation system 3, as will be described later with reference to FIG. 2. The white light WL contains, out of blue excitation light EL, that is, blue light BL0 outputted from an array light source 20, blue light BL having exited out of a wavelength converter 50 without being converted in terms of wavelength, and yellow fluorescence YL generated by the wavelength conversion of the blue light BL0. The white light WL is adjusted by the light source apparatus 2 so as to have a substantially uniform illuminance distribution. The color separation system 3 includes a first dichroic mirror 7a, a second dichroic mirror 7b, a first reflection mirror 8a, a second reflection mirror 8b, and a third reflection mirror 8c, as shown in FIG. 1. The first dichroic mirror 7a separates the white light WL outputted from the light source apparatus 2 into the red light LR and the mixture of the green light LG and the blue light LB. To this end, the first dichroic mirror 7a transmits the red light LR and reflects the green light LG and the blue light LB. The second dichroic mirror 7b separates the mixture of the green light LG and the blue light LB into the green light LG and the blue light LB. To this end, the second dichroic mirror 7b reflects the green light LG and transmits the blue light LB. The first reflection mirror 8a is disposed in the optical path of the red light LR and reflects the red light LR having passed through the first dichroic mirror 7a toward the light modulator 4R. The second reflection mirror 8b and the third reflection mirror 8c are disposed in the optical path of the blue light LB and reflect the blue light LB having passed through the second dichroic mirror 7b to guide the blue light LB to the light modulator 4B. When transmitting the red light LR, the light modulator 4R modulates the light in accordance with image information to form red image light.
Similarly, when transmitting the green light LG, the light modulator 4G modulates the light in accordance with image information to form green image light. When transmitting the blue light LB, the light modulator 4B modulates the light in accordance with image information to form blue image light. The light modulators 4R, 4G, and 4B are each formed, for example, of a liquid crystal panel. Polarizers (not shown) are disposed on the light incident side and the light exiting side of each of the light modulators 4R, 4G, and 4B. A field lens 10R, which converts the red light LR incident on the light modulator 4R into parallelized light, is provided on one side of the light modulator 4R that is the side on which the red light LR is incident. A field lens 10G, which converts the green light LG incident on the light modulator 4G into parallelized light, is provided on one side of the light modulator 4G that is the side on which the green light LG is incident. A field lens 10B, which converts the blue light LB incident on the light modulator 4B into parallelized light, is provided on one side of the light modulator 4B that is the side on which the blue light LB is incident. The light combining system 5 is formed, for example, of a cross dichroic prism. The light combining system 5 combines the red image light outputted from the light modulator 4R, the green image light outputted from the light modulator 4G, and the blue image light outputted from the light modulator 4B with one another and causes the combined image light to exit toward the projection system 6. The projection system 6 is formed of a plurality of projection lenses. The projection system 6 enlarges the combined image light from the light combining system 5 and projects the enlarged image light toward the screen SCR. Enlarged color video images are displayed on the screen SCR.

Light Source Apparatus

The configuration of the light source apparatus 2 will next be described.
A light source apparatus 2A according to the first embodiment includes a light emitting unit 11, an optical integration system 31, a polarization converter 32, and a superimposing lens 33A, as shown in FIG. 2. The optical integration system 31 and the superimposing lens 33A form a superimposing system 33. The light emitting unit 11 includes the array light source 20, a collimator optical system 22, a condenser optical system 23, a first phase retarder 28A, an optical element 25A including a polarization separator 150A, a first condenser optical system 26, the wavelength conversion apparatus 50, a second phase retarder 28B, a second condenser optical system 29, and a diffusive reflector 30. The array light source 20, the collimator optical system 22, the condenser optical system 23, the first phase retarder 28A, the optical element 25A, the second phase retarder 28B, the second condenser optical system 29, and the diffusive reflector 30 are sequentially arranged along an optical axis AX1. On the other hand, the wavelength conversion apparatus 50, the first condenser optical system 26, the optical element 25A, the optical integration system 31, the polarization converter 32, and the superimposing lens 33A are sequentially arranged along an optical axis AX2. The optical axes AX1 and AX2 are present in the same plane and perpendicular to each other. The array light source 20 includes a plurality of semiconductor lasers (excitation light sources) 21 as the solid-state light source. The plurality of semiconductor lasers 21 are arranged in an array in a plane perpendicular to the optical axis AX1. The semiconductor lasers 21 each output, for example, the blue light BL0 as the excitation light EL, which excites a wavelength conversion layer 80 of the wavelength conversion apparatus 50. The peak wavelength of the blue light BL0 is, for example, 445 nm and can be arbitrarily changed as long as the wavelength allows excitation of the wavelength conversion layer 80 as described above.
The peak wavelength is determined in accordance with the fluorescing material of the wavelength conversion layer 80 and may, for example, be 460 nm. The blue light BL0 outputted in the form of a pencil of light from the array light source 20 enters the collimator optical system 22. The collimator optical system 22 converts the blue light BL0 outputted from the array light source 20 into parallelized light. The collimator optical system 22 includes, for example, a plurality of collimator lenses 22A arranged in an array. The plurality of collimator lenses 22A are arranged in correspondence with the plurality of semiconductor lasers 21. The blue light BL0 having passed through the collimator optical system 22 enters the condenser optical system 23. The condenser optical system 23 adjusts the luminous flux diameter of the blue light BL0. The condenser optical system 23 includes, for example, a convex lens 23A and a concave lens 23B. The blue light BL0 having passed through the condenser optical system 23 enters the first phase retarder 28A. The first phase retarder 28A is, for example, a half wave plate configured to be rotatable substantially around the optical axis AX1. The blue light BL0 outputted from the semiconductor lasers 21 is linearly polarized. Appropriately setting the angle of rotation of the first phase retarder 28A allows the blue light BL0 passing through the first phase retarder 28A to be converted into a beam containing an S-polarized component and a P-polarized component with respect to the optical element 25A mixed with each other at a predetermined ratio. The ratio between the S-polarized component and the P-polarized component can be changed by rotating the first phase retarder 28A. The blue light BL0 containing the S-polarized and P-polarized components generated when passing through the first phase retarder 28A enters the optical element 25A. The optical element 25A is, for example, a dichroic prism having wavelength selectivity.
The dichroic prism has an inclining surface 150a, which inclines by an angle of 45° with respect to the optical axes AX1 and AX2 in the same plane. The polarization separator 150A having wavelength selectivity is provided at the inclining surface 150a. The polarization separator 150A has a polarization separation function of separating the blue light BL0 into blue light BLS, which is formed of the S-polarized component with respect to the polarization separator 150A, and blue light BLP, which is formed of the P-polarized component with respect to the polarization separator 150A. That is, the polarization separator 150A reflects the blue light BLS and transmits the blue light BLP out of the blue light BL0 incident thereon. The polarization separator 150A further has a color separation function of transmitting the fluorescence YL, which has a wavelength band different from that of the blue light BL0, irrespective of the polarization state of the fluorescence YL. The blue light BLS having exited out of the polarization separator 150A enters the first condenser optical system 26. The first condenser optical system 26 collects the blue light BLS and directs the collected blue light BLS toward the wavelength conversion apparatus 50. The first condenser optical system 26 includes, for example, a first lens 26A and a second lens 26B. In the light source apparatus 2A according to the first embodiment, a wavelength conversion apparatus 51 according to the first embodiment is used as the wavelength conversion apparatus 50. FIG. 3 is an enlarged cross-sectional view of the key parts of the wavelength conversion apparatus 51 taken along the optical axis AX2. The wavelength conversion apparatus 51 includes the wavelength conversion layer 80, which converts the blue light (excitation light) BLS in terms of wavelength, as will be described later with reference to FIG. 3.
The wavelength conversion layer 80 absorbs the blue light BLS and is therefore excited thereby and emits the yellow fluorescence YL having a peak wavelength, for example, within a wavelength region longer than or equal to 500 nm but shorter than or equal to 700 nm. The peak wavelength of the fluorescence YL differs at least from the peak wavelength of the blue light BLS and is determined in accordance with the fluorescing material of the wavelength conversion layer 80. A reflection layer 90 is provided at one side of the wavelength conversion layer 80 that is the side opposite from the side on which the blue light BLS is incident, as shown in FIG. 3. The reflection layer 90 reflects a component of the fluorescence YL generated by the wavelength conversion layer 80 that is the component traveling toward a substrate 35 of the wavelength conversion apparatus 51. Part of the fluorescence YL generated by the wavelength conversion layer 80 is reflected off the reflection layer 90 and exits out of the wavelength conversion layer 80. The other part of the fluorescence YL generated by the wavelength conversion layer 80 exits out of the wavelength conversion layer 80 without traveling via the reflection layer 90. A heat sink 38 is disposed at a surface of the substrate 35 that is the surface opposite from the surface that supports the wavelength conversion layer 80 and the reflection layer 90. In the wavelength conversion apparatus 51, heat generated by the wavelength conversion layer 80 can be dissipated via the heat sink 38, so that thermal degradation of the wavelength conversion layer 80 can be avoided. The fluorescence YL emitted from the wavelength conversion layer 80 is unpolarized light. After passing through the first condenser optical system 26, the fluorescence YL enters the polarization separator 150A. The fluorescence YL having entered the polarization separator 150A travels from the polarization separator 150A toward the optical integration system 31.
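The need for the heat sink 38 can be made concrete with a back-of-the-envelope sketch (illustrative only; the emission wavelength used is an assumed representative value, not taken from the disclosure). Even at unity quantum yield, the Stokes shift between the 445 nm excitation and the yellow fluorescence dissipates a fixed fraction of the absorbed power as heat in the wavelength conversion layer:

```python
def stokes_heat_fraction(excitation_nm, emission_nm):
    """Fraction of absorbed pump power released as heat by the Stokes
    shift alone, assuming every absorbed photon yields one emitted
    photon: since photon energy is E = hc / wavelength, the emitted
    photon carries excitation_nm / emission_nm of the pump photon's
    energy, and the remainder becomes heat.
    """
    return 1.0 - excitation_nm / emission_nm

# With a 445 nm pump and yellow emission near 560 nm (assumed value),
# roughly a fifth of the absorbed power heats the phosphor layer.
f = stokes_heat_fraction(445.0, 560.0)
```

This unavoidable heat load, on top of any non-radiative losses, is what the substrate 35 and the heat sink 38 are there to carry away.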
On the other hand, the P-polarized blue light BLP having exited out of the polarization separator 150A enters the second phase retarder 28B, as shown in FIG. 2. The second phase retarder 28B is, for example, a quarter wave plate disposed in the optical path between the polarization separator 150A and the diffusive reflector 30. The blue light BLP having exited out of the polarization separator 150A is therefore converted by the second phase retarder 28B, for example, into right-handed circularly polarized blue light BLC1 having a polarization rotation direction around the optical axis AX1, which then enters the second condenser optical system 29. The second condenser optical system 29 includes, for example, a lens 29A and causes the blue light BLC1 in the form of collected light to be incident on the diffusive reflector 30. The diffusive reflector 30 is disposed on one side of the polarization separator 150A that is the side opposite from the array light source 20 and diffusively reflects the blue light BLC1 having exited out of the second condenser optical system 29 toward the polarization separator 150A. The diffusive reflector 30 is, for example, an element that reflects the blue light BLC1 in a Lambertian reflection scheme but does not disturb the polarization state thereof. The light diffusively reflected off the diffusive reflector 30 is hereinafter referred to as blue light BLC2. The diffusively reflected blue light BLC1 forms the blue light BLC2 having a substantially uniform illuminance distribution. For example, the right-handed circularly polarized blue light BLC1 is reflected in the form of left-handed circularly polarized blue light BLC2. The blue light BLC2 is converted by the second condenser optical system 29 into parallelized light and then enters the second phase retarder 28B again. The left-handed circularly polarized blue light BLC2 is converted by the second phase retarder 28B into S-polarized blue light BLS1.
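The polarization bookkeeping performed by the two phase retarders can be checked with a short Jones-calculus sketch (illustrative; the matrix conventions and angles are textbook assumptions, not taken from the disclosure). A half wave plate rotated by an angle a turns linear polarization by 2a, which sets the S:P split at the polarization separator 150A; and a double pass through a quarter wave plate at 45°, with the ideal polarization-preserving reflector modeled as the identity in the unfolded frame, acts as a half wave plate and converts P-polarized light into S-polarized light:

```python
import numpy as np

def wave_plate(angle, retardance):
    """Jones matrix of a retarder with its fast axis at `angle`
    (radians) from the P (horizontal) axis."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    ret = np.array([[1.0, 0.0], [0.0, np.exp(1j * retardance)]])
    return rot @ ret @ rot.T

p_pol = np.array([1.0, 0.0], dtype=complex)  # P component, like BLP

# First phase retarder 28A: half wave plate at angle a rotates the
# polarization by 2a; a = pi/8 gives an equal S/P power split.
hwp = wave_plate(np.pi / 8, np.pi)
split = hwp @ p_pol
s_fraction = abs(split[1]) ** 2               # about 0.5

# Second phase retarder 28B: quarter wave plate at 45 degrees, passed
# twice (out and back); the ideal reflector contributes only a global
# phase here, so the round trip behaves as a half wave plate.
qwp = wave_plate(np.pi / 4, np.pi / 2)
back = qwp @ (qwp @ p_pol)
# `back` is S-polarized: the P component vanishes, like BLS1.
```

Rotating the half wave plate thus tunes how much blue light takes the fluorescence path versus the diffuser path, while the quarter-wave round trip guarantees that the returning blue light is reflected (not transmitted) by the polarization separator 150A toward the optical integration system 31.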
The blue light BLS1 is reflected off the polarization separator 150A toward the optical integration system 31. The blue light BLS1, along with the fluorescence YL having passed through the polarization separator 150A, is used to form the white light WL. That is, the blue light BLS1 and the fluorescence YL both exit out of the polarization separator 150A in the same direction to form the white light WL, which is the combination of the blue light BLS1 and the yellow fluorescence YL. The white light WL exits toward the optical integration system 31. The optical integration system 31 includes, for example, a lens array 31A and a lens array 31B. In each of the lens arrays 31A and 31B, a plurality of microlenses are arranged along the direction perpendicular to the optical axis AX2, that is, along the optical axis AX1. The white light WL having passed through the optical integration system 31 enters the polarization converter 32. The polarization converter 32 includes polarization separation films and retardation films. The polarization converter 32 converts the unpolarized white light WL into linearly polarized light. The white light WL converted in terms of polarization by the polarization converter 32 enters the superimposing lens 33A. The superimposing lens 33A cooperates with the optical integration system 31 to homogenize the illuminance distribution of the white light WL in an illumination receiving region. The light source apparatus 2A outputs the white light WL along the optical axis AX2, as described above.

Wavelength Conversion Apparatus

The wavelength conversion apparatus 51 according to the first embodiment will next be described. The wavelength conversion apparatus 51 includes an optical array layer 60, the wavelength conversion layer 80, the reflection layer 90, the substrate 35, and the heat sink 38.
The optical array layer 60, the wavelength conversion layer 80, and the reflection layer 90 form a wavelength converter 100, which absorbs the blue light BLS and generates the fluorescence YL, in the wavelength conversion apparatus 51. The reflection layer 90, the wavelength conversion layer 80, and the optical array layer 60 of the wavelength converter 100 are sequentially laminated on a surface 35a of the substrate 35. The substrate 35 is made of a metal material that excels in heat dissipation, for example, aluminum (Al) or copper (Cu). FIG. 4 is a cross-sectional view showing that the blue light BLS enters, as the excitation light EL, the wavelength converter 100 of the wavelength conversion apparatus 51, and that the fluorescence YL is emitted. The reflection layer 90 is laminated on the surface 35a of the substrate 35, as shown in FIGS. 3 and 4. The wavelength conversion layer 80 is laminated on a surface 90a of the reflection layer 90, and the optical array layer 60 is provided at a surface 80a of the wavelength conversion layer 80. The blue light BLS having exited out of the first condenser optical system 26 described with reference to FIG. 2 enters, in the form of collected light, the wavelength conversion layer 80 via the surface 80a. The wavelength conversion layer 80 converts the blue light BLS in terms of wavelength into the fluorescence YL. That is, the wavelength conversion layer 80 contains a fluorescing material that absorbs the blue light BLS incident thereon, is excited thereby, and emits the fluorescence YL. The fluorescing material contains, for example, at least any one of the following oxide phosphors: yttrium aluminum garnet (YAG (Y3Al5O12):Ce) to which cerium (Ce) has been added as an activator; Y3(Al, Ga)5O12; Lu3Al5O12; and TbAl5O12. The wavelength conversion layer 80 may contain europium (Eu) in place of cerium (Ce) as the activator. The wavelength conversion layer 80 may further contain scatterers.
The fluorescence YL generated by the wavelength conversion layer 80 exits via a surface 80b of the wavelength conversion layer 80, which is the surface opposite from the surface 80a. The optical array layer 60 is formed of a plurality of cells 62. The cells 62 each have a surface (first side) 62b and a surface (second side) 62a. The fluorescence (light) YL, which is the result of the wavelength conversion performed by the wavelength conversion layer 80 and has exited via the surface 80a, is incident on the surface 62b. The fluorescence YL exits out of the wavelength conversion apparatus 51 via the surface 62a toward the side on which the blue light BLS is incident. The cells 62 are each made of a material having high thermal conductivity and high transmittance at least for the blue light BLS and the fluorescence YL, such as sapphire, alumina, YAG, and quartz. The cells 62 made of a material having high thermal conductivity and high transmittance allow heat TH generated in the wavelength conversion performed by the wavelength conversion layer 80 to be efficiently dissipated out of the wavelength conversion apparatus 51 via the cells 62. Furthermore, the fluorescence YL emitted from the wavelength conversion layer 80 efficiently exits via the surface 62a, and the blue light BLS efficiently enters the wavelength conversion layer 80. FIG. 5 is a plan view viewed along the traveling direction of the blue light BLS and showing that the blue light BLS having a substantially circular shape is incident on the optical array layer 60. FIG. 6 is a plan view showing that the blue light BLS in FIG. 5 is incident on the wavelength conversion layer 80. FIG. 7 is a plan view showing that the fluorescence YL is emitted when the blue light BLS enters the wavelength conversion apparatus 51 as shown in FIG. 6. The cells 62 are each formed in a square shape in the plan view viewed along the traveling direction of the blue light BLS, that is, along the optical axis AX2, as shown in FIGS. 5 and 6.
A dimension h62 of the cells 62 shown in FIGS. 3 and 4 in a height direction H thereof is equal to or greater than a dimension w62 of the cells 62 shown in FIGS. 5 and 6 in the plan view thereof. The dimension w62 of the cells 62 is the dimension in a plane perpendicular to the height direction H of the cells 62 and is also the dimension in a plane perpendicular to the traveling direction of the blue light BLS and to the optical axis AX2. The dimension h62 of the cells 62 in the height direction H thereof is, for example, 1.0 to 1.5 times greater than the dimension w62 of the cells 62 in the plan view thereof. In the wavelength conversion apparatus 51 according to the first embodiment, a dielectric film 66 is interposed between the plurality of cells 62 in the optical array layer 60. The dielectric films 66 may be made of a reflective material containing, for example, silver (Ag) or Al or may each be formed of a dielectric multilayer film in which a plurality of suitable dielectric layers are alternately laminated on each other. The optical array layer 60 has reflection surfaces 68, which extend from the surfaces 62b to the surfaces 62a of the plurality of cells 62 and are located at the interfaces between the cells 62 and 62 adjacent to each other along surfaces 60a and 60b, and the reflection surfaces 68 reflect the blue light BLS and the fluorescence YL. That is, the reflection surfaces 68 are each formed of side surfaces 66c and 66d of the dielectric film 66 disposed between the cells 62 adjacent to each other. The blue light BLS and the fluorescence YL incident on the reflection surfaces 68 through the cells 62 are specularly reflected off the reflection surfaces 68. In FIG. 3, a variety of arrows show the propagation of the directional components of the blue light BLS and the fluorescence YL and the movement of a heat flux in the wavelength conversion apparatus 51. On the other hand, FIG. 4 shows the propagation of the entirety of the blue light BLS and the fluorescence YL, that is, the luminous flux as a whole.
The path of a blue light component BLS11 of the blue light BLS, which enters the wavelength conversion layer 80 from outside the wavelength converter 100, that is, from above the wavelength converter 100 in FIGS. 3 and 4, inclines by a small angle with respect to the height direction H, and the blue light component BLS11 does not impinge on the reflection surfaces 68, as shown in FIG. 3. The blue light component BLS11 is therefore not reflected off the reflection surfaces 68 but passes through the optical array layer 60 and enters the wavelength conversion layer 80. On the other hand, a blue light component BLS12 of the blue light BLS, which enters the wavelength conversion layer 80 as described above, impinges on the reflection surface 68 in a cell 62. The blue light component BLS12 therefore passes through the optical array layer 60 while being reflected off the reflection surfaces 68 and enters the wavelength conversion layer 80. Similarly, the path of a fluorescence component YL11 of the fluorescence YL, which exits out of the wavelength converter 100 via the wavelength conversion layer 80, inclines by a small angle with respect to the height direction H, and the fluorescence component YL11 does not impinge on the reflection surfaces 68. The fluorescence component YL11 is therefore not reflected off the reflection surfaces 68 but passes through the optical array layer 60 and exits out of the wavelength converter 100. On the other hand, a fluorescence component YL12 of the fluorescence YL, which exits out of the wavelength converter 100 via the wavelength conversion layer 80, impinges on the reflection surface 68 in a cell 62. The fluorescence component YL12 therefore passes through the optical array layer 60 while being reflected off the reflection surfaces 68 and exits out of the wavelength converter 100.
The number of times each of the components of the blue light BLS and the fluorescence YL is reflected off the reflection surfaces 68 in a cell 62 is determined by the angle between the traveling direction of that component and the optical axis AX2. The greater the angle between the traveling direction of a component of the blue light BLS or the fluorescence YL and the optical axis AX2, the greater the number of times the component is reflected off the reflection surfaces 68. As described above, the wavelength conversion apparatus 51 according to the first embodiment includes the wavelength conversion layer 80, which converts the blue light BLS in terms of wavelength into the fluorescence YL, and the optical array layer 60, which is formed of the plurality of cells 62 each having the surface 62b, on which the fluorescence YL, which is the result of the wavelength conversion performed by the wavelength conversion layer 80, is incident, and the surface 62a, via which the fluorescence YL exits. The optical array layer 60 has the reflection surfaces 68, which extend from the surface 60b to the surface 60a in the height direction H, are located at the interfaces between the cells 62 and 62 adjacent to each other along the surfaces 60a and 60b, and reflect the fluorescence YL. For example, in the plan view along the traveling direction of the blue light BLS and the optical axis AX2, the blue light BLS has a circular shape and is incident on a portion of the surface 60a of the optical array layer 60, that is, on the surfaces 62a of the plurality of cells 62 in the region occupied by 5×5=25 cells 62 in total, as shown in FIG. 5. In this case, the blue light BLS travels in a region of the optical array layer 60 that is the region occupied by the cells 62 that overlap with the blue light BLS at the surface 60a, as can be seen from FIGS. 3 and 4.
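The relationship stated above, that a component's reflection count grows with its angle to the optical axis AX2, can be illustrated by unfolding the reflections of a mirror-walled cell in two dimensions. The Python sketch below is a hypothetical geometric model, not part of the disclosure; the function name and parameters are illustrative, and it treats a single cell cross-section of height h and width w with perfectly specular walls.

```python
import math

def reflection_count(h, w, theta_deg, x0=0.5):
    """Count wall hits of a ray traversing a mirrored cell of height h
    and width w, entering at lateral position x0*w (0 <= x0 < 1) at
    angle theta_deg to the cell axis. Unfolding the reflections turns
    the zigzag path into a straight line; each crossing of a cell
    width in the unfolded picture is one reflection."""
    lateral = h * math.tan(math.radians(theta_deg))  # total sideways travel
    return int((lateral + x0 * w) // w)

# Reflection count is non-decreasing in the angle to the axis,
# here for a cell with the 1.5x aspect ratio mentioned in the text:
for th in (5, 15, 30, 45):
    print(th, reflection_count(h=1.5, w=1.0, theta_deg=th))
```

Rays nearly parallel to the axis (like the components BLS11 and YL11 above) record zero reflections, while steeper rays (like BLS12 and YL12) record one or more, matching the qualitative behavior described in the text.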
When reflected off the reflection surfaces 68, the blue light BLS does not pass through the cells 62 outside the region occupied by the cells 62 overlapping with the blue light BLS at the surface 60a but enters the wavelength conversion layer 80 with the beam area of the blue light BLS at the surface 60a substantially maintained. The blue light BLS incident on at least part of the surfaces 62a of the cells 62 of the optical array layer 60 in the plan view is, however, likely to spread across those cells 62, so that the blue light BLS bleeds into all the cells 62 overlapping with the blue light BLS at the surface 60a of the optical array layer 60, as shown in FIGS. 5 and 6. That is, the blue light BLS radiated onto the surface 60b of the optical array layer 60 and the surface 80a of the wavelength conversion layer 80 overlaps with the rectangular region occupied by 5×5=25 cells 62 in total, as shown in FIG. 6. The fluorescence YL generated by the wavelength conversion layer 80 exits with a beam area approximately equal to the beam area of the blue light BLS, which has been substantially maintained by the optical array layer 60, and travels in the direction opposite the traveling direction of the blue light BLS in the region occupied by the cells 62 that overlap with the fluorescence YL at the surface 60b of the optical array layer 60. The fluorescence YL reflected off the reflection surfaces 68 does not pass through the cells 62 outside the region occupied by the cells 62 overlapping with the fluorescence YL at the surface 60b but exits out of the optical array layer 60 with the beam area of the fluorescence YL at the surface 60b substantially maintained, as seen from FIGS. 3 and 4. The fluorescence YL that exits via the surface 80a of the wavelength conversion layer 80 and the surface 60a of the optical array layer 60 therefore overlaps with the rectangular region occupied by the 5×5=25 cells 62 in total, as shown in FIG. 7.
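The 5×5=25-cell footprint described above can be reproduced numerically: a circular beam whose diameter equals five cell widths, centered over the grid, overlaps every cell of its 5×5 bounding box, so the bled-into region is the full rectangle. The Python sketch below is illustrative only and not part of the disclosure; it tests the nearest point of each unit cell against the beam radius.

```python
import math

def overlapped_cells(diameter):
    """Count unit grid cells that a circular beam of the given diameter
    (measured in cell widths), centered over the grid, overlaps at all.
    A cell overlaps the beam if its nearest point to the beam center
    lies strictly inside the beam radius."""
    r = diameter / 2.0
    n = math.ceil(diameter)  # cells per side of the bounding box
    count = 0
    for i in range(n):
        for j in range(n):
            # Nearest point of cell (i, j) to the beam center (r, r):
            px = min(max(r, i), i + 1)
            py = min(max(r, j), j + 1)
            if math.hypot(px - r, py - r) < r:
                count += 1
    return count

print(overlapped_cells(5.0))  # -> 25: the beam touches all 5x5 cells
```

Even the corner cells of the 5×5 block are grazed by the circular beam (their nearest corners lie within the radius), which is consistent with the text's statement that the bleeding fills the whole rectangular region.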
The wavelength conversion apparatus 51 according to the first embodiment, which includes the optical array layer 60 described above, allows at least part of the blue light BLS to enter the wavelength conversion layer 80 with the blue light component BLS12 reflected in the optical array layer 60 off the reflection surfaces 68 of the cells 62 that the at least part of the blue light BLS has entered, so that the region of the blue light BLS in the plan view is maintained in the optical array layer 60. A decrease in the wavelength conversion efficiency in the wavelength conversion layer 80 due to an increase in the amount of spread of the blue light BLS in the plan view can therefore be suppressed as compared with a case where no optical array layer 60 is provided. Furthermore, the wavelength conversion apparatus 51 according to the first embodiment can suppress the spread of the fluorescence YL in the plan view while the fluorescence component YL12 is reflected off the reflection surfaces 68 of the cells 62 that the fluorescence YL has entered and therefore allows the plan-view region of the fluorescence YL exiting via the surface 80a of the wavelength conversion layer 80 to substantially coincide with the plan-view region of the fluorescence YL exiting via the surface 60a of the optical array layer 60. A decrease in the efficiency of utilization of the fluorescence YL due to an increase in the amount of spread of the fluorescence YL in the plan view can therefore be suppressed as compared with the case where no optical array layer 60 is provided. In the wavelength conversion apparatus 51 according to the first embodiment, the reflection surfaces 68 are each formed of the side surfaces 66c and 66d of the dielectric film 66. The blue light BLS and the fluorescence YL incident on each of the reflection surfaces 68 can therefore be efficiently reflected and guided through the cell 62. As a result, loss of the blue light BLS and the fluorescence YL can be suppressed.
In the wavelength conversion apparatus 51 according to the first embodiment, the reflection surfaces 68 may each be a diffusive reflection surface. For example, the reflection surfaces 68 may have fine irregularities designed in correspondence with the wavelengths of the blue light BLS and the fluorescence YL. Diffusive reflection surfaces 68 allow enhancement of the directivity of the blue light BLS and the fluorescence YL with respect to the traveling direction thereof and the optical axis AX2. In the wavelength conversion apparatus 51 according to the first embodiment, the reflection surfaces 68 are not necessarily each formed of the dielectric film 66 and can be made of any material or medium that can form the reflection surface 68 at the interface between the cells 62 and reflect the blue light BLS and the fluorescence YL in accordance with the difference in refractive index between the material or medium of the reflection surface 68 and the material of the cells 62. In other words, the reflection surfaces 68 can be made of a material or a medium that has an appropriate refractive index in accordance with the refractive index of the material of the cells 62 and the peak wavelengths of the blue light BLS and the fluorescence YL. The material or medium of the reflection surfaces 68 may be so selected that the blue light component BLS12 and the fluorescence component YL12 are totally reflected off the reflection surfaces 68. The configuration described above allows the blue light BLS to efficiently enter the wavelength conversion layer 80 and the fluorescence YL to efficiently exit out of the optical array layer 60. When sapphire or any other material described above is used as the material of the cells 62 of the optical array layer 60, the refractive index of the optical array layer 60 is about 1.7.
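The total-reflection condition mentioned above follows from Snell's law: a ray is totally internally reflected when it strikes a reflection surface 68 at more than the critical angle arcsin(n_gap / n_cell) from the surface normal. A minimal Python sketch, assuming (as an illustration, not part of the disclosure) a sapphire-like cell index of about 1.7 and a surrounding medium of index 1.0:

```python
import math

def critical_angle_deg(n_cell, n_gap):
    """Critical angle (measured from the surface normal) for total
    internal reflection at the cell/gap interface. Requires the cell
    to be the optically denser medium (n_cell > n_gap)."""
    if n_gap >= n_cell:
        raise ValueError("no total internal reflection: n_gap >= n_cell")
    return math.degrees(math.asin(n_gap / n_cell))

# Sapphire-like cell (n ~ 1.7) against an air gap (n ~ 1.0):
print(round(critical_angle_deg(1.7, 1.0), 1))  # -> 36.0
```

For n_cell of about 1.7 against air, the critical angle is about 36°. Since the cell walls are parallel to the optical axis, a ray traveling within about 54° of the cell axis strikes the wall beyond the critical angle and is totally reflected, which is consistent with the guided propagation described above.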
The wavelength conversion apparatus 51 according to the first embodiment may include an air layer between the cells 62 and 62 adjacent to each other along the surfaces 60a and 60b of the optical array layer 60, and the blue light BLS and the fluorescence YL may be reflected off side surfaces (interfaces) 62c and 62d of the cells 62 in accordance with the difference in refractive index between the cells 62 and the air layer. The configuration described above allows enhancement of the directivity of the blue light BLS and the fluorescence YL with respect to the traveling direction thereof and the optical axis AX2. Since the surface area of the cells 62 exposed to the air outside the wavelength conversion apparatus 51 via the air layer increases, the heat dissipation capability of the wavelength conversion apparatus 51 can be improved, whereby an increase in the temperature caused by the heat TH generated when the wavelength conversion layer 80 generates the fluorescence YL can be suppressed. The wavelength conversion apparatus 51 according to the first embodiment includes the reflection layer 90 at the surface 80b of the wavelength conversion layer 80, which is the surface opposite from the surface 80a facing the optical array layer 60, as described above. The blue light BLS enters the wavelength conversion layer 80 via the optical array layer 60. The fluorescence YL having exited via the surface 80b of the wavelength conversion layer 80 is reflected by the reflection layer 90, passes through the wavelength conversion layer 80, and enters the optical array layer 60. In the wavelength conversion apparatus 51 according to the first embodiment, the optical array layer 60 is used to cause the substantial irradiated area irradiated with the blue light (light) BLS incident on the wavelength conversion layer 80 from the optical array layer 60 to substantially coincide with the area occupied by the cells 62 that overlap with the irradiated area in the plan view.
The wavelength conversion apparatus 51 according to the first embodiment can therefore suppress the spread of the irradiated region irradiated with the fluorescence YL emitted by the wavelength conversion layer 80 and the diffusion of the fluorescence YL and hence suppress the diffusion of the fluorescence YL that exits out of the optical array layer 60. As a result, a decrease in the wavelength conversion efficiency in the wavelength conversion apparatus 51 according to the first embodiment can be suppressed as compared with wavelength conversion apparatuses of related art.

Second Embodiment

A second embodiment of the present disclosure will next be described with reference to FIGS. 8 to 10. In the second and subsequent embodiments, configurations common to those in the embodiment described above, that is, the first embodiment, have the same reference characters as those of the configurations in the embodiment described above and will not be described. The projector according to the second embodiment has the same configuration as that of the projector 1 according to the first embodiment except that the light source apparatus 2A is replaced with a light source apparatus 2B shown in FIG. 8. FIG. 8 is a schematic configuration diagram of the light source apparatus 2B provided in the projector 1 according to the second embodiment.

Light Source Apparatus

The light source apparatus 2B according to the second embodiment includes a light emitting unit 12, the optical integration system 31, the polarization converter 32, and the superimposing lens 33A, as shown in FIG. 8. The light emitting unit 12 includes the array light source 20, the collimator optical system 22, a condenser optical system 42, a dichroic film 45, a wavelength conversion apparatus 52 according to the second embodiment, and a pickup optical system 160.
The array light source 20, the collimator optical system 22, the condenser optical system 42, the wavelength conversion apparatus 52, and the pickup optical system 160 are sequentially arranged along the optical axis AX2. The array light source 20 includes a plurality of semiconductor lasers (excitation light sources) 21 as the solid-state light source. The plurality of semiconductor lasers 21 are arranged in an array in a plane perpendicular to the optical axis AX2. The condenser optical system 42 is formed, for example, of a single convex lens 42A. The condenser optical system 42 is disposed on the optical axis AX2 of the blue light BL outputted from the array light source 20, collects the blue light BL, which is the parallelized light into which the excitation light EL is converted by the collimator optical system 22, and directs the collected blue light BL to the wavelength conversion apparatus 52. The wavelength conversion apparatus 52 has the function of transmitting part of the blue light BL outputted from the array light source 20 and converting the remaining blue light BL into the fluorescence YL. The wavelength conversion apparatus 52 contains a fluorescing material that absorbs the remaining blue light BL and emits the yellow fluorescence YL containing red light and green light. The wavelength at which the intensity of the emitted fluorescence YL peaks is, for example, about 550 nm. The dichroic film 45 is provided on the light incident side of the wavelength conversion apparatus 52, that is, the side on which the blue light BL is incident. The dichroic film 45 transmits the blue light BL and reflects the fluorescence YL. The blue light BL having passed through the wavelength conversion apparatus 52 and the fluorescence YL are combined with each other to form the white light WL. The pickup optical system 160 includes, for example, a pickup lens 162 and a pickup lens 164.
The pickup optical system 160 captures the white light WL outputted from the wavelength conversion apparatus 52, converts the white light WL into substantially parallelized light, and directs the parallelized white light WL toward the optical integration system 31. The optical integration system 31 is formed, for example, of a lens array 31C and a lens array 31D. In each of the lens arrays 31C and 31D, a plurality of microlenses are arranged along the direction perpendicular to the optical axis AX2. The white light WL having entered the optical integration system 31 enters the polarization converter 32. The polarization converter 32 converts the white light WL containing the unpolarized fluorescence YL into linearly polarized light. The superimposing lens 33A and the optical integration system 31 cooperate with each other in such a way that the white light WL having passed through the polarization converter 32 forms white light WL having a uniform illuminance distribution in the illumination receiving region. That is, the light source apparatus 2B generates the white light WL as the illumination light used in the projector according to the second embodiment. In the light source apparatus 2B according to the second embodiment, the optical system can be configured to be substantially linear along the optical axis AX2, whereby the optical system can be readily designed and assembled.

Wavelength Conversion Apparatus

FIG. 9 is a cross-sectional view of the wavelength conversion apparatus 52 according to the second embodiment. The wavelength conversion apparatus 52 does not include the reflection layer 90 of the wavelength conversion apparatus 51 according to the first embodiment but includes an optical array layer (light-incident-side optical array layer) 70 in place of the reflection layer 90. That is, the wavelength conversion apparatus 52 includes the optical array layer 60, the wavelength conversion layer 80, and the optical array layer 70.
The optical array layer 60, the wavelength conversion layer 80, and the optical array layer 70 form the wavelength converter 100, which absorbs the blue light BL and generates the fluorescence YL, in the wavelength conversion apparatus 52. The optical array layer 70 is formed of a plurality of cells 72. The cells 72 each have a surface (third surface) 72b and a surface (fourth surface) 72a. The blue light (excitation light) having exited out of the condenser optical system 42 described with reference to FIG. 8 is incident in the form of the collected light on the surfaces 72b. The blue light BL having propagated through the cells 72 exits via the surfaces 72a. The cells 72 are each made of a material having high thermal conductivity and high transmittance at least for the blue light BL and the fluorescence YL, such as sapphire, alumina, YAG, and quartz, as the cells 62 are. The cells 72 made of a material having high thermal conductivity and high transmittance allow the heat TH generated in the wavelength conversion performed by the wavelength conversion layer 80 to be efficiently dissipated out of the wavelength conversion apparatus 52 not only via the cells 62 but also via the cells 72. In the wavelength conversion apparatus 52, the fluorescence YL emitted from the wavelength conversion layer 80 enters the optical array layer 60 via the surfaces 62b and efficiently exits via the surface 60a. The cells 72 are each formed, for example, in a square shape in the plan view along the traveling direction of the blue light BL. A dimension h72 of the cells 72 in the height direction H thereof is greater than a dimension w72 of the cells 72 in the plan view thereof. The dimension w72 of the cells 72 is the dimension in a plane perpendicular to the height direction H of the cells 72 and is also the dimension perpendicular to the traveling direction of the blue light BL and the optical axis AX2.
The dimension h72 of the cells 72 in the height direction H thereof is, for example, 1.2 to 2.0 times greater than the dimension w72 of the cells 72 in the plan view thereof. In the wavelength conversion apparatus 52, the dimension w72 of the cells 72 in the plan view thereof is substantially equal to the dimension w62 of the cells 62 in the plan view thereof. In the wavelength conversion apparatus 52, a dielectric film 76 is interposed between the plurality of cells 72 in the optical array layer 70. The dielectric films 76 may be made of a reflective material containing Ag or Al or may each be formed of a dielectric multilayer film in which a plurality of suitable dielectric layers are alternately laminated on each other, as the dielectric films 66 are. The optical array layer 70 has reflection surfaces 78, which extend from the surfaces 72b to the surfaces 72a of the plurality of cells 72 and are located at the interfaces between the cells 72 and 72 adjacent to each other along surfaces 70a and 70b, and the reflection surfaces 78 reflect the blue light BL. That is, the reflection surfaces 78 are each formed of side surfaces 76c and 76d of the dielectric film 76 disposed between the cells 72 adjacent to each other. The blue light BL incident on the reflection surfaces 78 through the cells 72 is specularly reflected off the reflection surfaces 78. Out of the blue light BL that enters the wavelength conversion layer 80 from outside the wavelength converter 100, that is, from below the wavelength converter 100 in FIG. 9, the blue light component traveling along a path inclining by a small angle with respect to the height direction H and not impinging on the reflection surfaces 78 is not reflected off the reflection surfaces 78 but passes through the optical array layer 70 and enters the wavelength conversion layer 80.
On the other hand, out of the blue light BL that enters the wavelength conversion layer 80 as described above, the blue light component that impinges on the reflection surfaces 78 in the cells 72 passes through the optical array layer 70 while being reflected off the reflection surfaces 78 and enters the wavelength conversion layer 80. The number of times each component of the blue light BL is reflected off the reflection surfaces 78 in a cell 72 is determined by the angle between the traveling direction of the component and the optical axis AX2. The greater the angle between the traveling direction of each component of the blue light BL and the optical axis AX2, the greater the number of times the component is reflected off the reflection surfaces 78. As described above, the wavelength conversion apparatus 52 according to the second embodiment includes the optical array layer 70, which causes the blue light BL to be incident on the surface 80b of the wavelength conversion layer 80, separately from the optical array layer 60 in the first embodiment. The optical array layer 70 is formed of the plurality of cells 72 each having the surface 70b, on which the blue light BL is incident, and the surface 70a, via which the incident blue light BL exits, and includes the reflection surfaces 78, which extend from the surface 70b to the surface 70a and are located at the interfaces between the cells 72 and 72 adjacent to each other along the surfaces 70a and 70b. The blue light BL enters the wavelength conversion layer 80 via the optical array layer 70, and the fluorescence YL generated by the wavelength conversion enters the optical array layer 60. In the wavelength conversion apparatus 52 having the configuration described above, in the optical array layer 70, the blue light BL travels in the region occupied by the cells 72 overlapping with the blue light BL at the surface 70b.
At least part of the blue light BL reflected off the reflection surfaces 78 does not pass through the cells 72 outside the region occupied by the cells 72 overlapping with the blue light BL at the surface 70b but enters the wavelength conversion layer 80 with the beam area of the blue light BL and the irradiated region irradiated thereby at the surface 70b substantially maintained. It is, however, noted that the blue light BL incident on at least part of the surfaces 72b of the cells 72 of the optical array layer 70 in the plan view is likely to spread across those cells 72, as the blue light BLS bleeds in the first embodiment. That is, the blue light BL bleeds into all the cells 72 overlapping with the blue light BL at the surface 70b of the optical array layer 70. In the wavelength conversion apparatus 52, the fluorescence YL generated by the wavelength conversion layer 80 exits with a beam area approximately equal to that of the blue light BL, which has been substantially maintained by the optical array layer 70, and travels in the region occupied by the cells 62 overlapping with the fluorescence YL at the surface 60b of the optical array layer 60 in the same direction as the direction in which the blue light BL enters the wavelength conversion layer 80. The fluorescence YL does not pass through the cells 62 outside the region occupied by the cells 62 overlapping with the fluorescence YL at the surface 60b but exits out of the optical array layer 60 with the beam area of the fluorescence YL substantially maintained at the surface 60b.
The wavelength conversion apparatus 52 according to the second embodiment, which includes the optical array layers 60 and 70 described above, allows at least part of the blue light BL to enter the wavelength conversion layer 80 with the plan-view region of the blue light BL in the optical array layer 70 maintained while that part of the blue light BL is reflected in the optical array layer 70 off the reflection surfaces 78 of the cells 72 on which it is incident. A decrease in the wavelength conversion efficiency in the wavelength conversion layer 80 due to an increase in the amount of spread of the blue light BL in the plan view can therefore be suppressed as compared with a case where no optical array layer 70 is provided. Furthermore, the wavelength conversion apparatus 52 according to the second embodiment can suppress the spread of the fluorescence YL in the plan view while the fluorescence component YL12 is reflected off the reflection surfaces 68 of the cells 62 that the fluorescence YL has entered and therefore allows the plan-view region of the fluorescence YL exiting via the surface 80a of the wavelength conversion layer 80 to substantially coincide with the plan-view region of the fluorescence YL exiting via the surface 60a of the optical array layer 60. A decrease in the efficiency of utilization of the fluorescence YL due to an increase in the amount of spread of the fluorescence YL in the plan view can therefore be suppressed as compared with the case where no optical array layer 60 is provided. Furthermore, the wavelength conversion apparatus 52 according to the second embodiment can suppress the diffusion of the blue light BL in the plan view in the optical array layer 70.
The wavelength conversion apparatus52according to the second embodiment can be changed as the wavelength conversion apparatus51according to the first embodiment is changed in terms of the common configurations, and the changed wavelength conversion apparatus52provides the same effects and advantages as those provided by the changed wavelength conversion apparatus51according to the first embodiment. A wavelength conversion apparatus53shown inFIG.10is a variation of the wavelength conversion apparatus52according to the second embodiment.FIG.10is a cross-sectional view of the wavelength conversion apparatus53according to the variation of the second embodiment. In the wavelength conversion apparatus53, the dimension (cell size) w62of the cells62of the optical array layer60in the plan view is smaller than the dimension (cell size) w72of the cells72of the optical array layer70in the plan view, which is shifted from the optical array layer60toward the side on which the blue light BL is incident, as shown inFIG.10. The wavelength conversion apparatus53according to the variation of the second embodiment allows enhancement of the directivity of the fluorescence YL with respect to the direction parallel to the optical axis AX2as compared with the blue light BL as the excitation light EL.

Other Embodiments

Descriptions will be made of variations of the optical array layer60of both the wavelength conversion apparatuses51and52according to the first and second embodiments described above and the optical array layer70of both the wavelength conversion apparatus52according to the second embodiment and the wavelength conversion apparatus53according to the variation of the wavelength conversion apparatus52. FIGS.11and12are plan views of first and second variations of the optical array layer60. The shape of the cells62of the optical array layer60in the plan view is not limited to the square shape shown by way of example in the embodiments described above.
The cells62may each have a regular hexagonal shape in the plan view, as shown inFIG.11. The cells62may each have an isosceles triangular shape in the plan view as a result of division of a square shape by one diagonal thereof, as shown inFIG.12. When the cells62each have a regular hexagonal shape in the plan view, the dimension w62is the diameter of the regular hexagonal shape. When the cells62each have an isosceles triangular shape in the plan view, the dimension w62is the length of each of the two equal sides of the isosceles triangular shape. The cells62may each have any shape other than a square shape, the regular hexagonal shape, and the isosceles triangular shape, the latter two of which are shown by way of example inFIGS.11and12, and it is preferable to set the shape of the cells62as appropriate in such a way that a desired shape of the fluorescence YL in the plan view is formed in consideration of the occurrence of the aforementioned bleeding of the blue light BLSand BL as the excitation light EL. FIGS.13to15are perspective views of third to fifth variations of the optical array layer60. The cells62may each have a columnar structure, as shown inFIGS.13and14. When the cells62each have a columnar structure, the dimension h62of the cells62in the height direction H thereof is, for example, 2.0 to 10.0 times the dimension w62of the cells62in the plan view thereof. The cells62each having a columnar structure as described above allow enhancement of the directivity of the blue light BLSand BL and the fluorescence YL with respect to the direction parallel to the height direction H of the cells62. As a result, a decrease in the wavelength conversion efficiency in the wavelength conversion apparatus can be suppressed. The cells62may each have a columnar structure, and the plurality of cells62may be separate from each other, as shown inFIG.14.
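The stated aspect ratio (a cell height h62 of 2.0 to 10.0 times the in-plane dimension w62) and its effect on directivity can be illustrated with a simple ray-geometry sketch. This is an editorial illustration, not part of the disclosure; the function name and the center-of-base-entry assumption are hypothetical:

```python
import math

def max_unreflected_angle_deg(w62: float, h62: float) -> float:
    """Largest angle from the cell axis (in degrees) at which a ray entering
    the centre of the base face can reach the top face of a w62 x w62 column
    of height h62 without striking a side wall (simple ray-geometry model)."""
    return math.degrees(math.atan((w62 / 2.0) / h62))
```

With h62 = 2.0·w62 this cone half-angle is about 14.0°, and with h62 = 10.0·w62 it narrows to about 2.9°, which is one way to see why taller columns enhance directivity along the height direction H.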
According to the configuration described above, since the surface area of the wavelength conversion apparatus51that is the area exposed to the outside air via the air layer increases, as compared with a case where the cells62each do not have a columnar structure, the heat dissipation capability of the wavelength conversion apparatus can be improved, whereby an increase in the temperature caused by the heat TH generated by the wavelength conversion layer80can be suppressed. In the optical array layer60, the plurality of cells62each having a columnar structure may be laminated on each other with the axial core direction thereof rotated by 90° in the plan view along the traveling direction of the excitation light EL at predetermined height intervals in the traveling direction of the excitation light EL, that is, the height direction H, as shown inFIG.15. According to the optical array layer60shown inFIG.15, the plurality of cells62that overlap with each other in the height direction H in the plan view function as secondary cells, whereby the spread of the irradiated regions irradiated with the blue light BLSand BL and the fluorescence YL propagating in the cells62is suppressed alternately in two directions intersecting with each other at 90° in a plane perpendicular to the height direction H. The irradiated regions in the plan view irradiated with the excitation light EL and the fluorescence YL in the optical array layer can thus be maintained, whereby the excitation light EL and the fluorescence YL can be caused to enter or exit out of the wavelength conversion layer80. Preferable embodiments of the present disclosure have been described above in detail. The present disclosure is, however, not limited to a specific embodiment, and a variety of modifications and changes can be made to the embodiments within the scope of the substance of the present disclosure described in the claims. 
Furthermore, the components in a plurality of embodiments can be combined with each other as appropriate. For example, the aforementioned second embodiment has been described with reference to the wavelength conversion apparatus52, which is what is called a transmissive wavelength conversion apparatus, but the wavelength conversion apparatus52may be disposed on a circumferential surface of a rotating wheel that is not shown. That is, the light source apparatus according to the present disclosure does not necessarily have a specific configuration, and the light source apparatus according to the present disclosure includes at least an excitation light source that outputs the excitation light EL and the wavelength conversion apparatus according to the present disclosure. The aforementioned embodiments have been described with reference to the case where the light source apparatus according to the present disclosure is incorporated in a projector, but not necessarily. The light source apparatus according to the present disclosure may also be used as a lighting apparatus, a headlight of an automobile, a head mounted display, and other components. A wavelength conversion apparatus according to an aspect of the present disclosure may have the configuration below. The wavelength conversion apparatus according to the aspect of the present disclosure includes a wavelength conversion layer that converts excitation light in terms of wavelength and an optical array layer formed of a plurality of cells each having a first surface on which light as a result of the wavelength conversion performed by the wavelength conversion layer is incident and a second surface via which the light exits, and the optical array layer has a reflection surface that extends from the first surface to the second surface, is located at the interface between the cells adjacent to each other, and reflects the light. 
In the wavelength conversion apparatus according to the aspect of the present disclosure, the reflection surface may be formed of a dielectric film. In the wavelength conversion apparatus according to the aspect of the present disclosure, the reflection surface may have a diffusive reflection surface. In the wavelength conversion apparatus according to the aspect of the present disclosure, an air layer may be formed between the cells adjacent to each other, and the light may be reflected off the interface between each of the cells and the air layer in accordance with the difference in refractive index between the cells and the air layer. In the wavelength conversion apparatus according to the aspect of the present disclosure, the dimension of the cells in the height direction thereof may be greater than the in-plane dimension of the cells in a plane perpendicular to the height direction. In the wavelength conversion apparatus according to the aspect of the present disclosure, the plurality of cells may be separate from each other. The wavelength conversion apparatus according to the aspect of the present disclosure may further include a reflection layer at a surface of the wavelength conversion layer that is the surface opposite from the surface facing the optical array layer. The excitation light may enter the wavelength conversion layer via the optical array layer, and the light converted in terms of wavelength and reflected off the reflection layer may enter the optical array layer. The wavelength conversion apparatus according to the aspect of the present disclosure may further include a light-incident-side optical array layer that causes the excitation light to be incident on a surface of the wavelength conversion layer that is the surface opposite from the surface facing the optical array layer. 
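The reflection at the air layer described above follows from total internal reflection: light inside a cell striking the cell/air interface at an angle of incidence beyond the critical angle is reflected back in accordance with the refractive index difference. A minimal sketch, assuming a glass-like cell index of about 1.5 (the text does not specify index values):

```python
import math

def critical_angle_deg(n_cell: float, n_air: float = 1.0) -> float:
    """Critical angle for total internal reflection at the cell/air-layer
    interface; rays hitting the wall at a larger angle of incidence are
    reflected back into the cell (Snell's law: sin(theta_c) = n_air/n_cell)."""
    if n_cell <= n_air:
        raise ValueError("total internal reflection requires n_cell > n_air")
    return math.degrees(math.asin(n_air / n_cell))
```

For the assumed n_cell = 1.5 the critical angle is roughly 41.8°, so the air gap between adjacent cells acts as a reflector for most glancing rays while requiring no reflective coating.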
The light-incident-side optical array layer may be formed of a plurality of cells each having a third surface on which the excitation light is incident and a fourth surface via which the incident excitation light exits and have a reflection surface that extends from the third surface to the fourth surface, is located at the interface between adjacent cells, and reflects the light. The excitation light may enter the wavelength conversion layer via the light-incident-side optical array layer, and the light converted in terms of wavelength may enter the optical array layer. In the wavelength conversion apparatus according to the aspect of the present disclosure, the size of the cells in the optical array layer may be smaller than the size of the cells in the light-incident-side optical array layer. A light source apparatus according to another aspect of the present disclosure may have the configuration below. The light source apparatus according to the other aspect of the present disclosure includes an excitation light source and the wavelength conversion apparatus according to the aforementioned aspect of the present disclosure, and the wavelength conversion layer of the wavelength conversion apparatus is irradiated with excitation light outputted from the excitation light source. A projector according to another aspect of the present disclosure may have the configuration below. A projector according to the other aspect of the present disclosure includes the light source apparatus according to the aforementioned aspect of the present disclosure, a light modulator that modulates light from the light source apparatus in accordance with image information, and a projection optical apparatus that projects the light modulated by the light modulator.
DESCRIPTION OF THE EMBODIMENTS

In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. In this regard, directional terminology, such as “top,” “bottom,” “front,” “back,” etc., is used with reference to the orientation of the Figure(s) being described. The components of the present invention can be positioned in a number of different orientations. As such, the directional terminology is used for purposes of illustration and is in no way limiting. On the other hand, the drawings are only schematic and the sizes of components may be exaggerated for clarity. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Unless limited otherwise, the terms “connected,” “coupled,” and “mounted” and variations thereof herein are used broadly and encompass direct and indirect connections, couplings, and mountings. Similarly, the terms “facing,” “faces” and variations thereof herein are used broadly and encompass direct and indirect facing, and “adjacent to” and variations thereof herein are used broadly and encompass directly and indirectly “adjacent to”. Therefore, the description of “A” component facing “B” component herein may contain the situations that “A” component directly faces “B” component or one or more additional components are between “A” component and “B” component.
Also, the description of “A” component “adjacent to” “B” component herein may contain the situations that “A” component is directly “adjacent to” “B” component or one or more additional components are between “A” component and “B” component. Accordingly, the drawings and descriptions will be regarded as illustrative in nature and not as restrictive. FIG.1AandFIG.1Bare schematic views of optical paths in a first time interval and a second time interval, respectively, according to an embodiment of the disclosure.FIG.2is a schematic top view of a wavelength conversion element.FIG.3is a schematic top view of a diffuser element. With reference toFIG.1AandFIG.1B, in this embodiment, a projection device100includes a laser light source110, a wavelength conversion element120, a diffuser element130, a filter element140, beam splitting elements150and152, a first light valve170, a second light valve180, and a projection lens190. The above elements will be described in detail in the following paragraphs. The laser light source110is configured to emit a laser beam EB, and is, for example but not limited to, a laser light emitting element, an array arranged by multiple laser light emitting elements, or an optical element assembly composed of one or more light emitting elements, mirrors, or lenses. A type of the laser light emitting element is, for example, a laser diode. In addition, a peak wavelength of a light spectrum of the laser beam EB, for example, falls within a wavelength range of blue light, and, for example but not limited to, falls within a range of 440 nanometers to 470 nanometers. The peak wavelength is defined as a wavelength corresponding to a maximum light intensity in a light intensity spectrum. The wavelength conversion element120is configured to convert a beam passing through the wavelength conversion element120into a beam of different wavelengths (a conversion beam). 
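The definition of the peak wavelength above amounts to taking the wavelength of maximum intensity over a sampled spectrum. A small sketch with illustrative, hypothetical sample values (the 440–470 nm range is from the text; the spectrum numbers are invented for the example):

```python
def peak_wavelength_nm(spectrum):
    """Return the wavelength with the maximum intensity in a sampled light
    intensity spectrum, i.e. the peak wavelength as defined in the text.
    spectrum: iterable of (wavelength_nm, intensity) pairs."""
    return max(spectrum, key=lambda sample: sample[1])[0]

# Hypothetical sampled spectrum of a blue laser diode (illustrative values).
blue_laser_spectrum = [(430, 0.15), (450, 1.00), (460, 0.62), (480, 0.08)]
```

Here the peak falls at 450 nm, inside the 440–470 nm blue range given for the laser beam EB.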
With reference toFIG.2, in this embodiment, the wavelength conversion element120includes a wavelength conversion region WCR and a non-wavelength conversion region NWCR. Specifically, the wavelength conversion element120has a rotary disc122, a wavelength conversion substance124, a light transmitting element126, and a first central rotary axis CRA1. The rotary disc122is disposed with a notch NT, and the light transmitting element126is disposed in the notch NT. The wavelength conversion substance124is, for example, a photoluminescent material, such as a phosphor glue layer or a quantum dot, but the disclosure is not limited thereto. In this embodiment, the wavelength conversion substance124is, for example, a yellow phosphor glue layer. The light transmitting element126is embedded in the notch NT, and is composed of, for example but not limited to, a material having high light transmittance, such as glass. In this embodiment, the wavelength conversion substance124causes a photoluminescence phenomenon and emits a long-wavelength beam when irradiated by a short-wavelength beam (i.e., a wavelength conversion phenomenon). Therefore, a region where the wavelength conversion substance124is disposed defines the wavelength conversion region WCR, and the wavelength conversion substance124(or the rotary disc122) is adapted to reflect a beam which has undergone wavelength conversion. A region defined by the light transmitting element126may be penetrated by a beam without wavelength conversion, so the light transmitting element126defines the non-wavelength conversion region NWCR, which may also be referred to as a light penetration region. Therefore, an implementation of the wavelength conversion element120of this embodiment is, for example, a transmissive wavelength conversion element. The wavelength conversion region WCR and the non-wavelength conversion region NWCR are disposed around the first central rotary axis CRA1.
The wavelength conversion region WCR has a first radian θ1relative to the first central rotary axis CRA1, and the non-wavelength conversion region NWCR has a second radian θ2relative to the first central rotary axis CRA1. The first radian θ1is, for example, greater than the second radian θ2. With reference toFIG.3as well, the diffuser element130of the projection device100is configured to diffuse/scatter the beam passing through this diffuser element, which is, for example, a diffuser wheel. The diffuser element130has a first region R1and a second region R2. In this embodiment, the diffuser element130is disposed with a diffuser structure (not shown). For example, the first region R1and the second region R2are both disposed with the diffuser structures, for example but not limited to, on a surface of the diffuser element130. In other embodiments, for example but not limited to, diffuser particles are disposed inside the diffuser element130. The first region R1and the second region R2of the diffuser element130are different in that the first region R1of the diffuser element130is further disposed with the filter element140, whereas the second region R2is not disposed with the filter element140. Moreover, the diffuser element130has a second central rotary axis CRA2. The first region R1and the second region R2are disposed around the second central rotary axis CRA2. The first region R1has a third radian θ3relative to the second central rotary axis CRA2. The second region R2has a fourth radian θ4. The third radian θ3is, for example, greater than the fourth radian θ4. With reference toFIG.2andFIG.3together, the first radian θ1is equal to the third radian θ3, and the second radian θ2is equal to the fourth radian θ4. Furthermore, the filter element140may filter out beams of wavelengths within a specific range and allow beams of wavelengths out of the specific range to pass through. 
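Because θ1 = θ3 and θ2 = θ4, the synchronously rotating wheels always present matching segments to the beam path. A sketch of that timing; the segment angles and rotation speed below are illustrative assumptions, not values from the disclosure:

```python
import math

THETA_1 = 1.5 * math.pi              # wavelength conversion region WCR (assumed)
THETA_2 = 2.0 * math.pi - THETA_1    # non-wavelength conversion region NWCR

def segments_in_beam_path(t_seconds: float, rpm: float = 7200.0):
    """Return which segments of the two synchronously rotating wheels sit in
    the beam path at time t: ('WCR', 'R1') during the first time interval,
    ('NWCR', 'R2') during the second."""
    omega = rpm / 60.0 * 2.0 * math.pi            # angular speed, rad/s
    phi = (omega * t_seconds) % (2.0 * math.pi)   # wheel angle from reference
    return ("WCR", "R1") if phi < THETA_1 else ("NWCR", "R2")
```

Since θ1 > θ2, the first time interval (yellow conversion light) occupies the larger share of each revolution, consistent with the first radian being greater than the second.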
In this embodiment, the filter element140is, for example, a blue light filter film, which may filter out blue light and allows beams of other wavelengths to pass through. With reference toFIG.1AandFIG.1B, the beam splitting elements150and152refer to an optical element having a beam splitting function. In this embodiment, the beam splitting element is a dichroic mirror (DM) having wavelength selectivity or a dichroic film splitting beams by wavelength/color, but the disclosure is not limited thereto. In this embodiment, the beam splitting element150is designed to allow blue light and red light to penetrate and reflect green light. The beam splitting element152is designed to allow blue light to penetrate and reflect beams of other wavelengths, such as yellow light. The projection device100may further include a light homogenizing element160, which refers to an optical element homogenizing beams passing through the light homogenizing element160. In this embodiment, the light homogenizing element160is, for example but not limited to, an integration rod, an array of lenses, or other optical elements having a light homogenizing function. An optical prism assembly OA refers to an optical element including multiple prisms. In this embodiment, the optical prism assembly OA has a function of guiding beam transmission directions, and one prism in the optical prism assembly OA is denoted as P. The light valves170and180refer to any one of spatial light modulators such as a digital micro-mirror device (DMD), a liquid-crystal-on-silicon (LCOS) panel, or a liquid crystal panel (LCD), but the disclosure is not limited thereto. The projection lens190includes, for example, a combination of one or more optical lenses having a diopter, and the optical lenses, for example, are various combinations of non-planar lenses including biconcave lenses, biconvex lenses, concave-convex lenses, convex-concave lenses, plane-convex lenses, and plane-concave lenses. 
The disclosure does not limit the form and type of the projection lens190. In addition, in this embodiment, for facilitating adjustment of an optical path of the laser beam EB and/or a conversion beam CB, one to multiple mirrors M1to M3and lenses L1to L8may selectively be added inside the projection device100, but the disclosure does not particularly limit the number and position of the mirrors and the lenses. The optical prism assembly OA includes multiple prisms, one of which is denoted as P. Disposition relationship between the above elements and optical behaviors in the projection device100will be described in detail in the following paragraphs. With reference toFIG.1AandFIG.1B, in this embodiment, the wavelength conversion element120is disposed downstream in an optical path of the laser light source110. The diffuser element130is disposed downstream in an optical path of the wavelength conversion element120. The optical prism assembly OA is disposed downstream in an optical path of the diffuser element130and is located between the diffuser element130, the first light valve170, and the second light valve180. The beam splitting element150is disposed downstream in an optical path of the filter element140, and is disposed on a surface of one prism P in the optical prism assembly OA by plating or coating. The beam splitting element152is disposed downstream in the optical path of the laser light source110and is disposed upstream in the optical path of the diffuser element130. The light homogenizing element160is disposed in an optical path between the beam splitting element150and the diffuser element130. The first light valve170and the second light valve180are disposed downstream in an optical path of the beam splitting element150. The projection lens190is disposed downstream in an optical path of the first light valve170and an optical path of the second light valve180. 
In detail, during operation of the projection device100, the wavelength conversion region WCR and the non-wavelength conversion region NWCR of the wavelength conversion element120sequentially cut into a transmission path of the laser beam EB. Moreover, the wavelength conversion element120and the diffuser element130rotate simultaneously, such that the first region R1and the second region R2correspond respectively to the wavelength conversion region WCR and the non-wavelength conversion region NWCR during rotation. That is, the wavelength conversion region WCR of the wavelength conversion element120and the first region R1of the diffuser element130cut into a transmission path of a beam (for example, the laser beam EB and/or the conversion beam CB) at the same time. With reference toFIG.1A, in the first time interval, the wavelength conversion region WCR of the wavelength conversion element120and the filter element140disposed on the first region R1cut into the transmission path of the laser beam EB, and the first region R1of the diffuser element130and the filter element140disposed on the first region R1cut into a transmission path of the conversion beam CB. Therefore, after the laser light source110emits the laser beam EB, the laser beam EB is sequentially transmitted to the beam splitting element152and the wavelength conversion substance124(descriptions of lenses and mirrors are omitted in the following embodiments). The wavelength conversion substance124is excited by most of the laser beam EB (approximately 99%) to generate the conversion beam CB. Since the wavelength conversion substance124is, for example, a yellow phosphor glue layer, a peak wavelength of the conversion beam CB, for example, falls within a wavelength range of yellow light. 
Next, the conversion beam CB is transmitted to the beam splitting element152, the filter element140, the diffuser element130(the first region R1), the light homogenizing element160, the optical prism assembly OA, and the beam splitting element150. The conversion beam CB, for example, penetrates the first region R1. The beam splitting element150splits the conversion beam CB passing through the first region R1and the filter element140(as a first beam B1) into a first sub-beam CB1(red beam) and a second sub-beam CB2(green beam) having different peak wavelengths. In detail, the beam splitting element150may allow the first sub-beam CB1in the conversion beam CB to penetrate and may reflect the second sub-beam CB2in the conversion beam CB, thereby guiding the first sub-beam CB1and the second sub-beam CB2respectively to the first light valve170and the second light valve180. The first light valve170converts the first sub-beam CB1into a first image beam IMB1, and the second light valve180converts the second sub-beam CB2into a second image beam IMB2. The first image beam IMB1and the second image beam IMB2are further guided to the projection lens190by the optical prism assembly OA, and then the projection lens190projects the first image beam IMB1and the second image beam IMB2onto a projection medium (such as a projection screen). On the other hand, in the first time interval, after the laser beam EB is transmitted to the wavelength conversion element120, there is still a small portion of the laser beam EB not converted by the wavelength conversion substance124(approximately 1%), so the small portion of the laser beam EB will be transmitted to the filter element140. Since the filter element140is configured to filter out blue light, the laser beam EB in the first time interval may be filtered out and will not be transmitted to the first light valve170or the second light valve180disposed downstream. 
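The routing performed by the beam splitting element150(blue and red penetrate toward the first light valve170, green is reflected toward the second light valve180) can be modeled as a simple wavelength test. The band edges below are assumed for illustration; the disclosure does not specify them:

```python
GREEN_BAND_NM = (500.0, 570.0)  # assumed reflection band of element 150

def route_at_element_150(wavelength_nm: float) -> str:
    """Model of beam splitting element 150: green wavelengths are reflected
    toward the second light valve; blue and red penetrate toward the first."""
    lo, hi = GREEN_BAND_NM
    if lo <= wavelength_nm <= hi:
        return "second_light_valve"
    return "first_light_valve"
```

Under this model the red sub-beam CB1 (and, in the second time interval, the blue laser beam EB) reaches the first light valve170, while the green sub-beam CB2 is reflected to the second light valve180, matching the element's stated design.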
With reference toFIG.1B, in the second time interval, the non-wavelength conversion region NWCR (the light transmitting element126) and the second region R2of the diffuser element130cut into the transmission path of the laser beam EB at the same time. Therefore, after the laser light source110emits the laser beam EB, the laser beam EB is sequentially transmitted to the beam splitting element152, the light transmitting element126(the non-wavelength conversion region NWCR), the beam splitting element152, the diffuser element130(the second region R2), the light homogenizing element160, the optical prism assembly OA, and the beam splitting element150. The laser beam EB, for example, penetrates the light transmitting element126, and is transmitted to and penetrates the second region R2. The beam splitting element150guides the laser beam EB passing through the second region R2(as a second beam B2) to one of the first light valve170and the second light valve180. In detail, the laser beam EB is transmitted to the first light valve170after penetrating the beam splitting element150. The first light valve170converts the laser beam EB into a third image beam IMB3. The third image beam IMB3is further guided to the projection lens190by the optical prism assembly OA, and then the projection lens190projects the third image beam IMB3onto the projection medium. Based on the above, in the projection device100of this embodiment, the wavelength conversion region WCR of the wavelength conversion element120corresponds to the first region R1of the diffuser element130, and the first region R1is disposed with the filter element140which may filter out the laser beam EB, so the portion of the laser beam EB not converted by the wavelength conversion substance124is unlikely to be transmitted to the first light valve170and the second light valve180disposed downstream and thus affect color purity of images projected by the projection device100.
Therefore, the projection device100has good optical quality. Note that the following embodiments use the reference numerals and part of the content of the above embodiment, and descriptions of the same technical content are omitted. For the descriptions of the omitted parts, please refer to the above embodiment. The descriptions will not be repeated in the following embodiments. FIG.4AandFIG.4Bare schematic views of optical paths in a first time interval and a second time interval, respectively, according to another embodiment of the disclosure.FIG.5is a schematic top view of a wavelength conversion element of another embodiment. With reference toFIG.4AandFIG.4B, a projection device100ain the embodiment ofFIG.4AandFIG.4Bis substantially similar to the projection device100ofFIG.1AandFIG.1B. The two are mainly different in that the wavelength conversion elements are different. With reference toFIG.5, in this embodiment, a non-wavelength conversion region NWCRa of a wavelength conversion element120aincludes a reflecting element128. In other words, the non-wavelength conversion region NWCRa is defined by the reflecting element128, and the reflecting element128is disposed in the notch NT of the rotary disc122. With reference toFIG.4AandFIG.4Btogether, the projection device100afurther includes an optical element154disposed beside the beam splitting element152. The optical element154is, for example but not limited to, a mirror. In other embodiments, the optical element154is, for example, a beam splitting element, configured to reflect the laser beam EB and allow other beams to penetrate. In this embodiment, the number of the lenses is greatly reduced to three, which are lenses L9to L11. Therefore, the projection device100atakes up a relatively small volume. Optical behaviors in the projection device100awill be described in detail in the following paragraphs. 
With reference toFIG.4A, in the first time interval, the wavelength conversion region WCR of the wavelength conversion element120acuts into the transmission path of the laser beam EB, and the first region R1of the diffuser element130and the filter element140cut into the transmission path of the conversion beam CB. Therefore, after the laser light source110emits the laser beam EB, the laser beam EB is sequentially transmitted to the beam splitting element152and the wavelength conversion substance124. Most of the laser beam EB is transmitted to and excites the wavelength conversion substance124to generate the conversion beam CB. Next, the conversion beam CB is sequentially transmitted to the beam splitting element152, the filter element140, the first region R1of the diffuser element130, the light homogenizing element160, the optical prism assembly OA, and the beam splitting element150. The subsequent optical path is similar to that described in the relevant paragraphs ofFIG.1A, and therefore will not be described herein. On the other hand, in the first time interval, there is still a small portion of the laser beam EB not converted by the wavelength conversion substance124. Therefore, after being reflected by the wavelength conversion substance124(or the rotary disc122), the small portion of the laser beam EB is sequentially transmitted to the beam splitting element152, the optical element154, the beam splitting element152, and the filter element140. The small portion of the laser beam EB will be filtered out by the filter element140and will not be transmitted to the first light valve170or the second light valve180disposed downstream. With reference toFIG.4B, in the second time interval, the non-wavelength conversion region NWCRa (the reflecting element128) of the wavelength conversion element120aand the second region R2of the diffuser element130cut into the transmission path of the laser beam EB.
Therefore, after the laser light source 110 emits the laser beam EB, the laser beam EB is sequentially transmitted to the beam splitting element 152, the reflecting element 128, the beam splitting element 152, the optical element 154, the beam splitting element 152, the second region R2 of the diffuser element 130, the light homogenizing element 160, the optical prism assembly OA, and the beam splitting element 150. After being reflected by the reflecting element 128, the laser beam EB is transmitted to and penetrates the second region R2. The subsequent optical path is similar to that described in the relevant paragraphs of FIG. 1B, and therefore will not be described herein.

FIG. 6A and FIG. 6B are schematic views of optical paths in a first time interval and a second time interval, respectively, according to still another embodiment of the disclosure. A projection device 100b in the embodiment of FIG. 6A and FIG. 6B is substantially similar to the projection device 100 of FIG. 1A and FIG. 1B. The two are mainly different in that the projection device 100b further includes a complementary light source CP configured to emit a complementary color beam CPL. The complementary light source CP is disposed upstream of the beam splitting element 150 in the optical path. In this embodiment, the complementary light source CP is, for example, a red light source, and a peak wavelength of the complementary color beam CPL, for example, falls within a wavelength range of red light. In addition, the projection device 100b further includes a controller C configured to control whether the complementary light source CP emits the beam. Optical behaviors in the projection device 100b are described in detail in the following paragraphs. With reference to FIG. 6A, an optical path in the first time interval is similar to that described in the relevant paragraphs of FIG. 1A, and therefore will not be described herein.
The two are mainly different in that the controller C controls the complementary light source CP to emit the complementary color beam CPL in the first time interval. The complementary color beam CPL is guided to the beam splitting element 150 by the optical prism assembly OA. The beam splitting element 150 guides the complementary color beam CPL to one of the first light valve 170 and the second light valve 180. For example, since the complementary color beam CPL is a red beam, it may penetrate the beam splitting element 150, be transmitted to the first light valve 170 together with the first sub-beam CB1, and thereby be converted into the first image beam IMB1 together by the first light valve 170. Hence, the light intensity of the first image beam IMB1 is intensified. It should be noted that in this embodiment, the optical element M3, for example but not limited to, may allow the complementary color beam CPL to penetrate and may reflect beams of other wavelengths.

With reference to FIG. 6B, an optical path in the second time interval is similar to that described in the relevant paragraphs of FIG. 1B, and therefore will not be described herein. The controller C controls the complementary light source CP not to emit the complementary color beam CPL in the second time interval. Note that the complementary light source CP and the complementary color beam CPL are exemplified by a red light source and a red beam, but people skilled in the art may change the color of light emitted by the complementary light source according to design requirements. For example, in another embodiment, the complementary light source CP may be a green light element, and the complementary color beam CPL emitted by the complementary light source CP may be a green beam, but the disclosure is not limited thereto.
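The interval-dependent behavior described in the preceding paragraphs can be summarized in a short sketch: the component sequence traversed by the beam in projection device 100a in each time interval, and controller C gating the complementary light source CP in projection device 100b. This is an illustrative model, not code from the disclosure; the string labels are shorthand for the numbered components.

```python
# Illustrative sketch (component labels are shorthand chosen here, not from the
# disclosure) of the interval-dependent behavior described above.

def beam_path_100a(interval):
    """Ordered component labels the beam visits in each time interval."""
    if interval == "first":
        # WCR in the path: EB excites the phosphor 124; the conversion beam CB
        # then passes the filter element 140 and region R1 of the diffuser 130.
        return ["152", "124", "152", "140", "130:R1", "160", "OA", "150"]
    if interval == "second":
        # NWCRa (reflecting element 128) in the path: EB is bounced
        # 152 -> 128 -> 152 -> 154 -> 152 before region R2 of the diffuser 130.
        return ["152", "128", "152", "154", "152", "130:R2", "160", "OA", "150"]
    raise ValueError(interval)

def complementary_emits(interval):
    """Controller C drives the complementary light source CP only in the first interval."""
    return interval == "first"
```

Note how the filter element 140 appears only in the first-interval path, which is why unconverted laser light cannot reach the light valves in that interval.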
In summary, in the projection device of embodiments of the disclosure, the wavelength conversion region of the wavelength conversion element corresponds to the first region of the diffuser element, and the first region is disposed with the filter element which may filter out the laser beam, so the laser beam not converted by the wavelength conversion substance is unlikely to be transmitted to the first light valve and the second light valve disposed downstream. Therefore, the projection device has good light purity. The foregoing description of the preferred embodiments of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form or to exemplary embodiments disclosed. Accordingly, the foregoing description should be regarded as illustrative rather than restrictive. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. The embodiments are chosen and described in order to best explain the principles of the invention and its best mode practical application, thereby to enable persons skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use or implementation contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents in which all terms are meant in their broadest reasonable sense unless otherwise indicated. Therefore, the term “the invention”, “the present invention” or the like does not necessarily limit the claim scope to a specific embodiment, and the reference to particularly preferred exemplary embodiments of the invention does not imply a limitation on the invention, and no such limitation is to be inferred. The invention is limited only by the spirit and scope of the appended claims. Moreover, these claims may refer to use “first”, “second”, etc. 
following with a noun or element. Such terms should be understood as nomenclature and should not be construed as limiting the number of the elements modified by such nomenclature unless a specific number has been given. The abstract of the disclosure is provided to comply with the rules requiring an abstract, which will allow a searcher to quickly ascertain the subject matter of the technical disclosure of any patent issued from this disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Any advantages and benefits described may not apply to all embodiments of the invention. It should be appreciated that variations may be made in the embodiments described by persons skilled in the art without departing from the scope of the present invention as defined by the following claims. Moreover, no element or component in the present disclosure is intended to be dedicated to the public regardless of whether the element or component is explicitly recited in the following claims.
11860525

DETAILED DESCRIPTION

For clearer descriptions of the objectives, technical solutions, and advantages of the embodiments of the present disclosure, the following clearly and completely describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings. Apparently, the described embodiments are merely a part rather than all of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments derived by persons of ordinary skill in the art without any creative efforts shall fall within the protection scope of the present disclosure. It should be noted that, when an element is defined as "being secured or fixed to" another element, the element may be directly positioned on the other element, or one or more intervening elements may be present therebetween. When an element is defined as "being connected or coupled to" another element, the element may be directly connected or coupled to the other element, or one or more intervening elements may be present therebetween. As used herein, the terms "vertical", "horizontal", "left", "right", and similar expressions are for illustration purposes only. In addition, technical features involved in the various embodiments of the present application described hereinafter may be combined as long as these technical features are not in conflict.

Referring to FIG. 1, a schematic structural diagram of a projection system according to an embodiment of the present disclosure is illustrated. The projection system includes a display device 100 and a projector 200. The display device 100 is arranged on a light exit side of the projector 200, and is configured to display projection content of the projector 200. The display device 100 is a projection screen. The projection screen may be a soft projection screen or a hard projection screen.
The soft projection screen includes a reflective projection screen and a transmissive projection screen, and the hard projection screen includes a reflective projection screen. In the case that the projection screen is a reflective projection screen, the projector 200 is disposed in front of the projection screen and projects images in a front projection fashion. In the case that the projection screen is a transmissive projection screen, the projector 200 is disposed behind the projection screen and projects images in a back projection fashion.

The projector 200 includes a projection light source 210, a polarizing beam splitting prism 220, a polarized light converting component 230, an LCOS imaging chip 240, and a projection lens 250. The projection light source 210, the polarizing beam splitting prism 220, and the projection lens 250 are disposed in a first direction, and the polarizing beam splitting prism 220 is disposed between the projection light source 210 and the projection lens 250. The polarized light converting component 230, the polarizing beam splitting prism 220, and the LCOS imaging chip 240 are disposed in a second direction, and the polarizing beam splitting prism 220 is disposed between the polarized light converting component 230 and the LCOS imaging chip 240. The first direction is perpendicular to the second direction. For example, the first direction is a horizontal direction, and the second direction is a vertical direction. Nevertheless, in some embodiments, the first direction may also be a vertical direction, and the second direction may also be a horizontal direction.
It may be understood that in the case that the projection light source 210, the polarizing beam splitting prism 220, and the projection lens 250 are disposed in the first direction, the optical axes of the projection light source 210, the polarizing beam splitting prism 220, and the projection lens 250 are parallel to the first direction; and in the case that the polarized light converting component 230, the polarizing beam splitting prism 220, and the LCOS imaging chip 240 are disposed in the second direction, the optical axes of the polarized light converting component 230, the polarizing beam splitting prism 220, and the LCOS imaging chip 240 are parallel to the second direction.

In the projector 200, the projection light source 210 is configured to emit an S-polarized light; the polarizing beam splitting prism 220 is configured to reflect the S-polarized light emitted from the projection light source 210 to the polarized light converting component 230; the polarized light converting component 230 is configured to convert the S-polarized light reflected from the polarizing beam splitting prism 220 to a P-polarized light, and emit the converted P-polarized light to the polarizing beam splitting prism 220, such that the polarizing beam splitting prism 220 transmits the P-polarized light emitted from the polarized light converting component 230 to the LCOS imaging chip 240; and the LCOS imaging chip 240 is configured to modulate the P-polarized light transmitted from the polarizing beam splitting prism 220 to an S-polarized light, and emit the modulated S-polarized light to the polarizing beam splitting prism 220, such that the polarizing beam splitting prism 220 reflects the S-polarized light emitted from the LCOS imaging chip 240 to the projection lens 250.
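The S-to-P-to-S chain just described can be traced with a tiny sketch built on one rule: the polarizing beam splitting prism 220 reflects S-polarized light and transmits P-polarized light. This is an illustrative model, not code from the disclosure; the function names are assumptions.

```python
# Illustrative trace of the polarization state through projector 200: the prism
# 220 reflects S and transmits P, the converting component 230 turns S into P,
# and the LCOS chip 240 modulates P back into S toward the projection lens 250.
# Names are shorthand chosen here, not from the disclosure.

def pbs(pol):
    """Polarizing beam splitting prism 220: reflect S, transmit P."""
    return ("reflect", pol) if pol == "S" else ("transmit", pol)

def trace_projector():
    states = []
    states.append(pbs("S"))   # 210 emits S; 220 reflects it toward 230
    pol = "P"                 # 230 converts S to P (quarter-wave plate + mirror)
    states.append(pbs(pol))   # 220 transmits P toward the LCOS chip 240
    pol = "S"                 # 240 modulates P back to S
    states.append(pbs(pol))   # 220 reflects S toward the projection lens 250
    return states
```

Each of the prism's three encounters with the beam does something different only because the polarization state alternates between the encounters.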
It may be understood that the S-polarized light emitted from the projection light source 210 in the first direction is processed by the polarizing beam splitting prism 220, the polarized light converting component 230, and the LCOS imaging chip 240 and emitted to the projection lens 250 in the first direction, such that the projection lens 250 is arranged in line with the projection light source 210. In addition, the polarized light converting component 230 and the LCOS imaging chip 240 both have a small size and thus do not occupy a large space in the second direction, such that the projector 200 as a whole has an in-line structure, and its layout is reasonable and compact. In this way, wasted space is reduced, and the size of the projector 200 is reduced, such that the projector 200 is convenient to carry.

For example, referring to FIG. 2, the projection light source 210 is a laser light source. The laser light source includes, but is not limited to, a YAG (Yttrium Aluminum Garnet) laser, an OPO (Optical Parametric Oscillator) laser, or the like. A laser beam emitted from the laser light source is an S-polarized light. It may be understood that in some embodiments, the projection light source 210 may also be formed by a common light source and a polarizer. A light beam emitted from the common light source is converted by the polarizer to an S-polarized light and then emitted.

The polarizing beam splitting prism 220 includes a first right-angled prism 221, a second right-angled prism 222, and a multi-layer birefringent polarizer 223. The first right-angled prism 221 is a right-angled isosceles prism, and includes a first right-angled surface P1, a second right-angled surface P2, and a first beveled surface P3.
The first right-angled surface P1, the second right-angled surface P2, and the first beveled surface P3 are all coated with an anti-reflection film to improve the light transmittance of the first right-angled surface P1, the second right-angled surface P2, and the first beveled surface P3. The second right-angled prism 222 is a right-angled isosceles prism, and includes a third right-angled surface P4, a fourth right-angled surface P5, and a second beveled surface P6. The third right-angled surface P4 and the fourth right-angled surface P5 are both coated with an anti-reflection film to improve the light transmittance of the third right-angled surface P4 and the fourth right-angled surface P5. The second beveled surface P6 is coated with a polarizing film, wherein the polarizing film is capable of reflecting the S-polarized light and transmitting the P-polarized light. The multi-layer birefringent polarizer 223 is a plate-shaped structure, and the multi-layer birefringent polarizer 223 is capable of reflecting the S-polarized light and transmitting the P-polarized light.

The first right-angled prism 221 and the second right-angled prism 222 define a cube. The first beveled surface P3 is parallel to the second beveled surface P6, the first right-angled surface P1 faces towards the projection light source 210, the second right-angled surface P2 faces towards the polarized light converting component 230, the third right-angled surface P4 faces towards the LCOS imaging chip 240, and the fourth right-angled surface P5 faces towards the projection lens 250. The multi-layer birefringent polarizer 223 is disposed between the first beveled surface P3 and the second beveled surface P6, and is parallel to the first beveled surface P3 and the second beveled surface P6. The multi-layer birefringent polarizer 223, the first beveled surface P3, and the second beveled surface P6 each define a 45-degree or 135-degree included angle with the first direction.
In this case, the first right-angled surface P1 and the fourth right-angled surface P5 are both perpendicular to the first direction, and the second right-angled surface P2 and the third right-angled surface P4 are both parallel to the first direction. It may be understood that the S-polarized light emitted from the projection light source 210 is transmitted through the first right-angled surface P1 and the first beveled surface P3 to the multi-layer birefringent polarizer 223 and reflected by the multi-layer birefringent polarizer 223; the S-polarized light is then transmitted through the first beveled surface P3 and the second right-angled surface P2 to the polarized light converting component 230 and converted by the polarized light converting component 230 to the P-polarized light; the P-polarized light is transmitted through the second right-angled surface P2, the first beveled surface P3, the multi-layer birefringent polarizer 223, the second beveled surface P6, and the third right-angled surface P4 to the LCOS imaging chip 240 and modulated by the LCOS imaging chip 240 to the S-polarized light; and the S-polarized light is transmitted through the third right-angled surface P4 to the second beveled surface P6 and reflected by the second beveled surface P6, and is then transmitted through the fourth right-angled surface P5 to the projection lens 250. Because the S-polarized light modulated by the LCOS imaging chip 240 is reflected by the second beveled surface P6, the ghost phenomenon of the projector 200 is prevented. Since the multi-layer birefringent polarizer 223, the first beveled surface P3, and the second beveled surface P6 each define a 45-degree or 135-degree included angle with the first direction, the S-polarized light emitted from the projection light source 210 is perpendicular to the S-polarized light reflected from the multi-layer birefringent polarizer 223.
The S-polarized light reflected by the multi-layer birefringent polarizer 223 is parallel to the P-polarized light emitted from the polarized light converting component 230. The S-polarized light emitted from the LCOS imaging chip 240 is perpendicular to the S-polarized light reflected by the second beveled surface P6. In an embodiment of the present disclosure, the multi-layer birefringent polarizer 223 is glued onto the first beveled surface P3 and the second beveled surface P6, and in this case the multi-layer birefringent polarizer 223 is fully coincident with the first beveled surface P3 and the second beveled surface P6.

Further, referring to FIG. 3, in some embodiments, the polarizing beam splitting prism 220 further includes an S-polarized light absorptive polarizer 224. The S-polarized light absorptive polarizer 224 is a plate-shaped structure, and is capable of absorbing the S-polarized light and transmitting the P-polarized light. In an embodiment of the present disclosure, the S-polarized light absorptive polarizer 224 is a metal wire-grid polarizer. The S-polarized light absorptive polarizer 224 is disposed between the multi-layer birefringent polarizer 223 and the second beveled surface P6, and is parallel to the multi-layer birefringent polarizer 223 and the second beveled surface P6. In this case, the S-polarized light absorptive polarizer 224, the multi-layer birefringent polarizer 223, and the second beveled surface P6 each define a 45-degree or 135-degree included angle with the first direction.
It may be understood that the S-polarized light emitted from the projection light source 210 is transmitted through the first right-angled surface P1 and the first beveled surface P3 to the multi-layer birefringent polarizer 223 and reflected by the multi-layer birefringent polarizer 223; the S-polarized light is then transmitted through the first beveled surface P3 and the second right-angled surface P2 to the polarized light converting component 230 and converted by the polarized light converting component 230 to the P-polarized light; the P-polarized light is transmitted through the second right-angled surface P2, the first beveled surface P3, the multi-layer birefringent polarizer 223, the S-polarized light absorptive polarizer 224, the second beveled surface P6, and the third right-angled surface P4 to the LCOS imaging chip 240 and modulated by the LCOS imaging chip 240 to the S-polarized light; and the S-polarized light is transmitted through the third right-angled surface P4 to the second beveled surface P6 and reflected by the second beveled surface P6, and is then transmitted through the fourth right-angled surface P5 to the projection lens 250.

It may be understood that in the case that the S-polarized light absorptive polarizer 224 is disposed between the multi-layer birefringent polarizer 223 and the second beveled surface P6, even if some S-polarized light emitted from the projection light source 210 is transmitted through the multi-layer birefringent polarizer 223, that S-polarized light is absorbed by the S-polarized light absorptive polarizer 224 and thus fails to be transmitted to the projection lens 250, such that the projector 200 is effectively prevented from light leakage.
In the meantime, even if some of the S-polarized light emitted from the LCOS imaging chip 240 is transmitted through the second beveled surface P6, that S-polarized light is absorbed by the S-polarized light absorptive polarizer 224 and thus fails to reach the multi-layer birefringent polarizer 223 and be reflected a second time by the multi-layer birefringent polarizer 223, such that the ghost phenomenon of the projector 200 is effectively prevented, and thus a good projection effect is achieved. In an embodiment of the present disclosure, the S-polarized light absorptive polarizer 224, the multi-layer birefringent polarizer 223, the first right-angled prism 221, and the second right-angled prism 222 are connected by gluing. In this case, the S-polarized light absorptive polarizer 224, the multi-layer birefringent polarizer 223, the first beveled surface P3, and the second beveled surface P6 are completely coincident with each other.

The polarized light converting component 230 includes a quarter-wave plate 231 and a reflective mirror 232. The quarter-wave plate 231 is disposed between the reflective mirror 232 and the polarizing beam splitting prism 220, and is disposed on a light exit side of the reflective mirror 232. The quarter-wave plate 231 and the reflective mirror 232 are both parallel to the first direction and are coaxially arranged. The quarter-wave plate 231 is configured to rotate a polarization state of the S-polarized light reflected from the polarizing beam splitting prism 220 by 45 degrees to acquire an intermediate light, and emit the intermediate light to the reflective mirror 232. The reflective mirror 232 is configured to reflect the intermediate light emitted from the quarter-wave plate 231 back to the quarter-wave plate 231, such that the quarter-wave plate 231 rotates the polarization state of the intermediate light reflected from the reflective mirror 232 by a further 45 degrees to acquire a P-polarized light, and emits the P-polarized light to the polarizing beam splitting prism 220.
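The net effect described above, a quarter-wave plate traversed twice around a mirror acting as a half-wave element that converts S-polarized light into P-polarized light, can be checked with a small Jones-calculus sketch. This is an illustrative model rather than code from the disclosure: the matrix used is the standard Jones matrix of a quarter-wave plate with its fast axis at 45 degrees, an ideal mirror is assumed, and global phase factors are ignored.

```python
# Jones-calculus check that a double pass through a quarter-wave plate with its
# fast axis at 45 degrees converts S-polarized light into P-polarized light.

def matmul(m, v):
    """Apply a 2x2 complex matrix to a 2-component Jones vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

# Quarter-wave plate, fast axis at 45 degrees (up to a global phase).
s = 2 ** -0.5
QWP45 = ((s, -1j * s),
         (-1j * s, s))

S_POL = (0, 1)  # Jones vector of S-polarized light (y-component only)

intermediate = matmul(QWP45, S_POL)  # first pass: light is now circular
out = matmul(QWP45, intermediate)    # reflection and second pass

# All the intensity ends up in the x-component, i.e. P-polarized light.
ix, iy = abs(out[0]) ** 2, abs(out[1]) ** 2
print(round(ix, 6), round(iy, 6))  # prints: 1.0 0.0
```

The 45-degree rotation described in the text for each pass adds up to the 90-degree polarization rotation that the prism needs to turn a reflected S beam into a transmitted P beam.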
The S-polarized light modulated by the LCOS imaging chip 240 is a projection beam. In the case that the projection beam is incident to the projection lens 250, the projection lens 250 is capable of projecting the projection beam onto the display device 100 to display projection content.

Further, referring to FIG. 4 and FIG. 5, in some embodiments, for luminance uniformity of the projector 200, the projector 200 further includes a microlens array 260. The microlens array 260 is disposed between the projection light source 210 and the polarizing beam splitting prism 220, and is parallel to the second direction. The microlens array 260 and the projection light source 210 are coaxially arranged. The microlens array 260 is a plate-shaped structure. In the case that the microlens array 260 is parallel to the second direction, the thickness direction of the microlens array 260 is coincident with the first direction. A plurality of protrusions are disposed on a light exit surface of the microlens array 260. The microlens array 260 is capable of uniformizing the S-polarized light emitted from the projection light source 210 by means of the plurality of protrusions, and emitting the uniformized S-polarized light to the polarizing beam splitting prism 220.

The embodiments of the present disclosure achieve the following beneficial effects. Different from the related art, the embodiments of the present disclosure provide a projector and a projection system. The projector includes a projection light source, a polarizing beam splitting prism, a polarized light converting component, an LCOS imaging chip, and a projection lens. The projection light source, the polarizing beam splitting prism, and the projection lens are disposed in a first direction, and the polarizing beam splitting prism is disposed between the projection light source and the projection lens.
The polarized light converting component, the polarizing beam splitting prism, and the LCOS imaging chip are disposed in a second direction, and the polarizing beam splitting prism is disposed between the polarized light converting component and the LCOS imaging chip, wherein the first direction is perpendicular to the second direction. The projection light source is configured to emit an S-polarized light. The polarizing beam splitting prism is configured to reflect the S-polarized light emitted from the projection light source to the polarized light converting component. The polarized light converting component is configured to convert the S-polarized light reflected from the polarizing beam splitting prism to a P-polarized light and emit the P-polarized light to the polarizing beam splitting prism, such that the polarizing beam splitting prism transmits the P-polarized light emitted from the polarized light converting component to the LCOS imaging chip. The LCOS imaging chip is configured to modulate the P-polarized light transmitted by the polarizing beam splitting prism to the S-polarized light and emit the S-polarized light to the polarizing beam splitting prism, such that the polarizing beam splitting prism reflects the S-polarized light emitted from the LCOS imaging chip to the projection lens. That is, the S-polarized light emitted from the projection light source in the first direction is processed by the polarizing beam splitting prism, the polarized light converting component, and the LCOS imaging chip, and emitted to the projection lens in the first direction, such that the projection lens is linearly arranged with the projection light source. Further, the LCOS imaging chip and the polarized light converting component are both small in size, such that the layout is reasonable and compact. In this way, wasted space is reduced, and the size of the projector is reduced, such that the projector is convenient to carry.
It should be noted that the specification and drawings of the present disclosure illustrate preferred embodiments of the present disclosure. However, the present disclosure may be implemented in different manners, and is not limited to the embodiments described in the specification. The embodiments described are not intended to limit the present disclosure, but are directed to rendering a thorough and comprehensive understanding of the present disclosure. In addition, the technical features described above may be incorporated and combined with each other to derive various embodiments not illustrated in the above specification, and such derived embodiments shall all be deemed as falling within the scope of the specification of the present disclosure. Further, a person skilled in the art may make improvements or variations according to the above description, and such improvements or variations shall all fall within the protection scope as defined by the claims of the present disclosure.
11860526

DETAILED DESCRIPTION OF THE INVENTION

In the following, with reference to the drawings that accompany this disclosure, the technical solutions provided in the various embodiments of the disclosure are described in greater detail. It should be noted that the embodiments provided in the disclosure shall be considered to represent only part, but not all, of the embodiments that the present disclosure covers, and thus shall not be considered to impose any limitation on the protection scope of the disclosure. Based on the embodiments provided herein, other embodiments with slight variations in design, as long as they follow the gist of the invention disclosed herein and can be easily obtained by people of ordinary skill in the art without involving any creative work, shall be considered to be covered by the scope of the disclosure.

In a first aspect, the present disclosure provides a beam modulation apparatus, which is configured to modulate and/or redirect an input light field entering thereinto to thereby obtain an output light field. FIG. 1A shows a block diagram of a beam modulation apparatus provided by some embodiments of the disclosure. As illustrated, the beam modulation apparatus 1000 includes a PBS prism 100 and a liquid crystal on silicon (LCOS) assembly 200, which are optically coupled with each other to modulate an input light field before outputting an output light field. The input light field is configured to be a composite light field including two spatially coupled light fields, i.e. a first light field and a second light field, which are both linearly polarized optical beams but have two different orthogonal linear polarization states (i.e. the S polarization state and the P polarization state).
In other words, in the input light field configured to enter into the beam modulation apparatus 1000, the first light field and the second light field are configured to be a P-polarized optical beam and an S-polarized optical beam respectively (as illustrated in FIG. 1A), or alternatively to be an S-polarized optical beam and a P-polarized optical beam respectively (not shown in the drawings). Herein optionally, the input light field is further configured to be a white light field which is formed by coupling three primary colour lights. Specifically, the first light field can comprise two primary colour lights (e.g. the first and second primary colour lights), which are coupled with each other in a time-sharing manner, whereas the second light field comprises the last primary colour light (i.e. the third primary colour light). Such a configuration of the input light field allows the beam modulation apparatus 1000 to be used as part of a projection system, such as a TV projection system or a microdisplay system (e.g. virtual reality (VR) or augmented reality (AR) display, etc.), to project/display color images based on the output light field emitted therefrom. Optionally, the beam modulation apparatus 1000 can further include a quarter-wave plate assembly 300 (as shown by the box with broken lines in FIG. 1A), which is optically coupled with both the PBS prism 100 and the LCOS assembly 200, and is configured to increase the contrast of the output light field emitted from the beam modulation apparatus 1000.

The structural diagram of the beam modulation apparatus 1000 shown in FIG. 1A is further illustrated in FIG. 1B. As shown, in the apparatus 1000, the PBS prism 100 includes two right-angle prisms (i.e. triangular prisms) 120 and 140, which are attached against each other on their respective bases (i.e. hypotenuses) to thereby form a PBS cube. Optionally, the two prisms 120 and 140 are glued or cemented together by arranging a thin film of an optical glue/cement material (e.g.
epoxy) at an interface C between the two prisms 120 and 140. Optionally, the two prisms 120 and 140 can be arranged together without using a glue or cement. The PBS cube 100 is provided with an optical incident surface A and an optical exit surface B; the input light field is configured to enter into the PBS cube 100 through the optical incident surface A along a first axis, and the output light field is configured to exit the PBS cube 100 through the optical exit surface B along a second axis that is orthogonal to the first axis (i.e. at substantially 90 degrees). Both the first axis and the second axis are at substantially 45 degrees relative to the interface C. The interface C of the PBS prism 100 is configured to selectively allow a P-polarized incoming light to transmit therethrough and an S-polarized incoming light to be reflected thereby. For this purpose, a polarizing means 130 can be sandwiched at the interface C to make it a polarizing surface of the PBS cube 100. Optionally, the polarizing means 130 can comprise a polarizing spectroscopic film or a wire grid. In one example, the polarizing spectroscopic film can include a dielectric beamsplitter coating, which can be coated on the hypotenuse surface of one or both of the triangular prisms 120 and 140 before they are cemented together with an optical cement. Depending on the application, the PBS prism can be configured to have a different extinction ratio for the transmitted beam (Tp:Ts). For example, in applications such as a TV projection system or a VR/AR display system, the PBS prism 100 in the beam modulation apparatus 1000 can have an extinction ratio of at least 500:1, or more preferably at least 1000:1. As illustrated in FIG. 1B, the LCOS assembly 200 includes two LCOS panels.
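To put the extinction-ratio figures quoted above in perspective, a back-of-the-envelope calculation shows how much unwanted S-polarized light leaks through the polarizing interface alongside the transmitted P light. This arithmetic sketch is an illustration, not part of the disclosure; the function name is an assumption.

```python
# Fraction of unwanted S-polarized intensity transmitted through a polarizing
# interface, relative to the transmitted P-polarized intensity, for a given
# transmitted-beam extinction ratio Tp:Ts.

def leaked_s_fraction(extinction_ratio, tp=1.0):
    """Transmitted S intensity relative to transmitted P intensity (Ts = Tp / ratio)."""
    return tp / extinction_ratio

print(leaked_s_fraction(500))    # 500:1  ->  0.2% of the wanted intensity leaks
print(leaked_s_fraction(1000))   # 1000:1 ->  0.1% leaks
```

This leakage is one contributor to background light in the dark state, which is why higher extinction ratios (and the optional quarter-wave plate assembly 300) matter for contrast.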
A first LCOS panel220is arranged over a first side surface of the PBS prism100that is opposing to the optical incident surface A, configured to have its reflective surface D2 facing the PBS prism100and in parallel to the first side surface such that an incoming beam from the PBS prism100can be reflected back to the PBS prism100along the first axis. A second LCOS panel240is arranged over a second side surface of the PBS prism that is opposing to the optical exit surface B, configured to have its reflective surface D4 facing the PBS prism100and in parallel to the second side surface such that an incoming beam from the PBS prism100can be reflected back to the PBS prism100along the second axis. Each of the first LCOS panel220and the second LCOS panel240comprises a plurality of pixels over the respective reflective surface thereof, wherein each of the plurality of pixels is configured to be controllably switched on or off such that the polarization state of a light beam reflected by a portion of the reflective surface corresponding thereto is changed or remains unchanged. FIGS.2A and2Billustrate the optical paths of a P-polarized light beam within the input light field (“P-pol (input)”) that enters through the optical incident surface A into the PBS cube100, transmits through the polarizing interface C, and sheds onto the first LCOS panel220at a region corresponding to one particular pixel thereon when the pixel is at the “on” state (the filled circle inFIG.2A) or at the “off” state (the empty circle inFIG.2B), where straight lines with arrow heads and broken lines with arrow heads refer to P-polarized light beams and S-polarized light beams respectively.
If the pixel is switched on, as shown inFIG.2A, the incident P-polarized light beam (“P-pol (incident)”) gets reflected back to give rise to a reflected S-polarized light beam (“S-pol (reflected)”), which enters into the PBS cube100, and upon reaching the polarizing interface C, gets reflected thereby to exit out of the PBS cube100through the optical exit surface B for output. If the pixel is switched off, as illustrated inFIG.2B, the incident P-polarized light beam (“P-pol (incident)”) gets reflected back by the first LCOS panel220to give rise to a reflected P-polarized light beam (“P-pol (reflected)”), which enters into the PBS cube100, and upon reaching the polarizing interface C, gets transmitted therethrough to exit out of the PBS cube100through the optical incident surface A to thus get lost without being outputted from the optical exit surface B. As such, by controlling the state of the pixel on the first LCOS panel220, the input P-polarized light beam can be manipulated to be outputted (as an S-polarized light beam) or lost from the optical exit surface B of the PBS cube100. FIGS.2C and2Dillustrate the optical paths of an S-polarized light beam within the input light field (“S-pol (input)”) that enters through the optical incident surface A into the PBS cube100, gets reflected at the polarizing interface C, and sheds onto the second LCOS panel240at a region corresponding to one particular pixel thereon when the pixel is at the “on” state (the filled circle inFIG.2C) or at the “off” state (the empty circle inFIG.2D), where straight lines with arrow heads and broken lines with arrow heads refer to P-polarized light beams and S-polarized light beams respectively.
If the pixel is switched on, as shown inFIG.2C, the incident S-polarized light beam (“S-pol (incident)”) gets reflected back by the second LCOS panel240to give rise to a reflected P-polarized light beam (“P-pol (reflected)”), which enters into the PBS cube100, and upon reaching the polarizing interface C, gets transmitted therethrough to exit out of the PBS cube100through the optical exit surface B for output. If the pixel is switched off, as illustrated inFIG.2D, the incident S-polarized light beam (“S-pol (incident)”) gets reflected back by the second LCOS panel240to give rise to a reflected S-polarized light beam (“S-pol (reflected)”), which enters into the PBS cube100, and upon reaching the polarizing interface C, gets reflected again to exit out of the PBS cube100through the optical incident surface A to thus get lost without being outputted from the optical exit surface B. As such, by controlling the state of the pixel on the second LCOS panel240, the input S-polarized light beam can be manipulated to be outputted (as a P-polarized light beam) or lost from the optical exit surface B. It is noted thatFIGS.2A and2B, andFIGS.2C and2Donly exhibit two extreme states of the pixel (i.e. “on” and “off”) on the first LCOS panel220and the second LCOS panel240, respectively. It is also possible for a pixel on a particular LCOS panel to be manipulated to allow partial polarity (or polarization state) conversion of the incident light beam, i.e. polarization rotation at an angle 0°<α<90°, in which case the reflected light beam is substantially a mix of a P-polarized light beam and an S-polarized light beam. Depending on which LCOS panel the reflected light beam comes from, only the corresponding polarization component, which is a fraction, but not all, of the light beam, can realize an output from the optical exit surface B of the PBS prism100.
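The on/off/partial pixel behaviour described around FIGS.2A-2D can be sketched with a simple Malus-type model: if a pixel rotates the incident polarization by an angle α, a fraction sin²α of the beam power ends up in the orthogonal polarization, which on either arm is the component that exits surface B. The following is an illustrative sketch only; the function name and the exact sin²α model are our assumptions, not language from the disclosure:

```python
import math

# Hypothetical Malus-type model of a pixel that rotates the incident
# polarization by alpha degrees: sin^2(alpha) of the power lands in the
# orthogonal polarization, which is the component that exits surface B
# (S for the P-input arm, P for the S-input arm).
def output_fraction(input_pol: str, alpha_deg: float) -> float:
    """Fraction of incident beam power that exits the optical exit surface B."""
    assert input_pol in ("P", "S")
    return math.sin(math.radians(alpha_deg)) ** 2

# Extreme states of FIGS. 2A-2D:
assert abs(output_fraction("P", 90.0) - 1.0) < 1e-12  # pixel "on": all output
assert output_fraction("S", 0.0) == 0.0               # pixel "off": all lost
# Partial rotation (0 < alpha < 90) yields an intermediate grey level:
print(output_fraction("S", 45.0))  # approximately 0.5
```

The same function covers both panels because each arm outputs exactly the rotated component of its incident beam.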
It is additionally noted that the above two pixel-controlled manipulations on the input P-polarized light beam and on the input S-polarized light beam in the input light field can be performed simultaneously and synchronously to realize the controlled spatial coupling of the two outputted light beams to give rise to a coloured pixel with different pitches on the display screen. It is further noted that each of the first LCOS panel220and the second LCOS panel240contains a plurality of pixels (i.e. the plurality of first pixels and the plurality of second pixels) whose states can be individually controlled by means of a controlling circuit (not shown), such as a CMOS chip or another chip, that is assembled together with the LCOS panels, which is configured to receive instructions from a processor based on a pre-determined program. As such, a coloured image can be generated on a display screen based on the output light field that is outputted from the beam modulation apparatus as disclosed herein, which is substantially a combination of a plurality of outputted light beams corresponding to the plurality of first and second pixels respectively on the first and second LCOS panels. As further illustrated inFIG.1B, a quarter-wave plate assembly300comprising two quarter-wave plates (i.e. a first quarter-wave plate320and a second quarter-wave plate340) can be arranged in the beam modulation apparatus1000. The first quarter-wave plate320is sandwiched between the PBS prism100and the first LCOS panel220along the first axis, and the second quarter-wave plate340is sandwiched between the PBS prism100and the second LCOS panel240along the second axis. It is noted that the quarter-wave plates are optional, and it is possible that none, or only one of the two quarter-wave plates, is arranged in the beam modulation apparatus1000.
In the beam modulation apparatus1000, an anti-reflection or anti-reflective (AR) coating or film can be optionally provided on one or more of the optical incident surface A, the optical exit surface B, the first side surface, or the second side surface of the PBS prism100so as to reduce the Fresnel reflections thereon. In a second aspect, a projection system is further provided, which includes a beam modulation apparatus as described above. FIG.3illustrates a block diagram of a projection system according to some embodiments of the disclosure. As illustrated, the projection system001comprises a light source apparatus4000, a polarization apparatus3000, a polarization modulation apparatus2000, a beam modulation apparatus1000, and a projection apparatus5000, which are optically coupled sequentially in the optical path. Among those, the beam modulation apparatus1000can be based on any one of the embodiments of the beam modulation apparatus as described above in the first aspect. Following the configuration scheme as described above, the beam modulation apparatus1000is configured to modulate an input light field comprising a first light field and a second light field with different linear polarization states (i.e. one S-polarized and one P-polarized) to thereby provide an output light field to the projection apparatus5000for projection. Upstream in the optical path, the first light field and the second light field inputted to the beam modulation apparatus1000can be provided by the polarization modulation apparatus2000, which is configured, upon receiving a third light field and a fourth light field having a substantially same polarization state, to modulate a polarization state of one, but not another, of the third light field and the fourth light field to correspondingly output, and then to provide to the beam modulation apparatus1000, the first light field and the second light field respectively.
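The 90° polarization rotation performed on one of the two fields can be checked with Jones calculus; the half-wave plate named in the disclosure, oriented with its fast axis at 45°, performs exactly this conversion. This is an illustrative sketch only; the matrix convention and the S/P basis assignment are our assumptions:

```python
import numpy as np

# Illustrative Jones-calculus sketch: a half-wave plate with fast axis at
# angle theta has Jones matrix [[cos 2t, sin 2t], [sin 2t, -cos 2t]]
# (up to a global phase).
def half_wave_plate(theta_deg: float) -> np.ndarray:
    t = np.radians(theta_deg)
    c, s = np.cos(2 * t), np.sin(2 * t)
    return np.array([[c, s], [s, -c]])

P_pol = np.array([1.0, 0.0])  # horizontal component taken as P here
S_pol = np.array([0.0, 1.0])  # vertical component taken as S here

# Fast axis at 45 deg swaps the two linear polarizations (90 deg rotation):
assert np.allclose(half_wave_plate(45.0) @ S_pol, P_pol)
assert np.allclose(half_wave_plate(45.0) @ P_pol, S_pol)
```

At other fast-axis angles the plate rotates the polarization by 2θ, which is why 45° is the orientation that yields the required orthogonal pair.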
In one example illustrated inFIG.3, the third light field and the fourth light field inputted to the polarization modulation apparatus2000are both S-polarized, and by means of the polarization modulation apparatus2000, the S-polarized third light field is modulated to become the P-polarized first light field, and the S-polarized fourth light field becomes the S-polarized second light field. Other embodiments, where the third and fourth light fields inputted to the polarization modulation apparatus2000are both P-polarized, or the outputted first and second light fields are respectively S- and P-polarized, are also possible but not shown. More specifically, the polarization modulation apparatus2000can be configured to rotate a linear polarization axis of the one of the third light field and the fourth light field by 90°, and can optionally comprise a Faraday rotator, a birefringent rotator, or a prism rotator. For example, the polarization modulation apparatus2000can comprise a half-wave plate, which is substantially a birefringent rotator. Further upstream, the third light field and the fourth light field inputted to the polarization modulation apparatus2000can be provided by the polarization apparatus3000, which is configured to polarize a non-polarized fifth light field and a non-polarized sixth light field incident thereinto to correspondingly output, and thereby to provide to the polarization modulation apparatus2000, the third light field and the fourth light field respectively, both having a substantially same polarization state. In a specific embodiment, the polarization apparatus comprises a front fly-eye lens, a rear fly-eye lens, and a PCS polarizing array of prisms.
The front fly-eye lens is configured to optically divide each of the fifth light field and the sixth light field into a plurality of beams; the rear fly-eye lens is configured to focus the plurality of beams on a rear surface thereof; and the PCS polarizing array of prisms is configured to polarize the focused plurality of beams outputted from the rear fly-eye lens to obtain the third light field and the fourth light field. More details are provided below with the specific Embodiment 1. Further upstream, the non-polarized fifth light field and the non-polarized sixth light field inputted to the polarization apparatus3000are provided by a light source apparatus4000. Each of the fifth light field and the sixth light field can comprise a single light beam, yet according to some preferred embodiments of the projection system, each pair of the fifth light field and the sixth light field, the third light field and the fourth light field, and the first light field and the second light field is a composite white light field formed by spatially coupling the two light fields in each pair, such that the projection system can be applied in a TV projection system or a VR/AR display system, to project colored images. For this purpose, in each pair of the light fields in each composite white light field, one light field is configured to be a time-sharing coupling light field comprising a first primary colour light and a second primary colour light (i.e. the first primary colour light and the second primary colour light are temporally coupled with each other in a time-sharing manner), and the other light field is configured to comprise a third primary colour light.
In the illustrative example inFIG.3, the fifth light field, and correspondingly the third light field, and the first light field are each configured to be a time-sharing coupling light field (labelled by * in the figure) comprising a first primary colour light and a second primary colour light coupled with each other in a time-sharing manner. The sixth light field, and correspondingly the fourth light field, and the second light field each comprise a third primary colour light. According to some embodiments, the first primary colour light can be a blue light (B), and the second primary colour light and the third primary colour light are respectively a green light (G) and a red light (R), or are respectively a red light (R) and a green light (G). In line with the above schemes, the light source apparatus4000is configured to emit the first primary colour light (i.e. blue light), and is further configured to obtain each of the second primary colour light (i.e. green light) and the third primary colour light (i.e. red light) by excitation on a corresponding fluorescence material (i.e. dye) by the first primary colour light (i.e. blue light). In one specific embodiment that will also be described in detail below in Embodiment 1, the light source apparatus4000includes a light source module4100comprising a first light source sub-module4110and a second light source sub-module4120, configured to respectively emit a first beam of the first primary colour light and a second beam of the first primary colour light (e.g. two beams of blue light), as illustrated inFIG.4A. The light source apparatus4000further includes a first fluorescence plate4200and a second fluorescence plate4300, which are optically aligned with, and configured to respectively receive, the first beam of the first primary colour light and the second beam of the first primary colour light.
The first fluorescence plate4200comprises a transmission zone4210and a first fluorescence zone4220on a surface thereof facing the first beam of the first primary colour light. The first fluorescence zone4220comprises a first fluorescence material (i.e. dye #1) capable of generating the second primary colour light upon excitation by the first primary colour light. The first fluorescence plate4200is further configured to have the transmission zone4210and the first fluorescence zone4220to alternately face the first beam of the first primary colour light in a predetermined manner (e.g. scheduled durations in one cycle), such that when the first beam of first primary colour light transmitted through the transmission zone (i.e. the transmitted first primary colour light) is optically combined with a beam of second primary colour light generated on the first fluorescence zone (i.e. the excitedly generated second primary colour light, e.g. green light inFIG.4A), e.g. by modulating the optical paths for the transmitted first primary colour light and for the excited second primary colour light, a time-sharing coupled light field forms to thereby give rise to the fifth light field. The second fluorescence plate4300contains a second fluorescence zone4320, which comprises a second fluorescence material (i.e. dye #2) on a surface thereof facing the second beam of the first primary colour light, configured such that upon excitation by the second beam of the first primary colour light, a beam of the third primary colour light (e.g. excited red light inFIG.4A) is generated on the second fluorescence zone4320to thereby obtain the sixth light field. Herein, optionally the first fluorescence plate4200can be in a form of a spinning wheel (i.e. colour wheel), with each of the transmission zone and the first fluorescence zone arranged in a fan-shaped area thereon.
An angle of the transmission zone and an angle of the first fluorescence zone can optionally be configured to sum to 360° (i.e. together covering the entire wheel). For example, they can be approximately between 89°-91° and approximately 269°-271°, respectively, and preferably can be approximately 90° and approximately 270°. Furthermore, the spinning wheel can have a spinning rate of at least 7,200 rpm, so as to realize the alternating acquisition of the first primary colour light and the second primary colour light for the generation of the time-sharing coupled light field (i.e. the fifth light field). Optionally, the second fluorescence plate4300can also be in a form of a spinning wheel. In order to realize the spatial combination of the optical paths for the transmitted first primary colour light and for the excited second primary colour light, the light source apparatus can further include a set of reflectors, which are arranged such that an optical path of the first beam of first primary colour light transmitted through the transmission zone is redirected to optically combine with an optical path of the beam of second primary colour light to thereby give rise to the fifth light field. In any of the embodiments of the light source apparatus, each of the first light source sub-module and the second light source sub-module in the light source module can comprise a laser diode array. The light source module can further comprise a first collimating lens array and a second collimating lens array, which are arranged over a light-emitting surface of the first light source sub-module and the second light source sub-module respectively. Sub-eyes in each of the first collimating lens array and the second collimating lens array are arranged to correspondingly align with laser diodes in the laser diode array of a corresponding light source sub-module.
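The spinning-wheel zone angles and rotation rate quoted above imply a simple per-revolution time budget for the time-sharing coupling. The arithmetic sketch below assumes, for illustration only, that zone angle maps directly to dwell time per revolution:

```python
# Back-of-envelope timing, assuming zone angle maps directly to dwell
# time per revolution (values from the text: >= 7,200 rpm, 90 deg
# transmission zone, 270 deg fluorescence zone).
RPM = 7200
rev_period_ms = 60.0 / RPM * 1000.0  # one revolution in milliseconds

def dwell_ms(zone_angle_deg: float) -> float:
    """Per-revolution time the beam spends on a zone of the given angle."""
    return rev_period_ms * zone_angle_deg / 360.0

print(f"revolution: {rev_period_ms:.3f} ms, "
      f"transmission (90 deg): {dwell_ms(90.0):.3f} ms, "
      f"fluorescence (270 deg): {dwell_ms(270.0):.3f} ms")
```

At 7,200 rpm one revolution takes about 8.33 ms, so the two primary colours alternate faster than typical display frame periods, which is what makes the time-sharing coupling viable.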
Furthermore, each sub-eye comprises a hyperbolic aspheric lens, having a curved surface expressed as:

z = \frac{c_x x^2 + c_y y^2}{1 + \sqrt{1 - (1 + k_x) c_x^2 x^2 - (1 + k_y) c_y^2 y^2}}

Herein c_x is a curvature of the hyperbolic aspheric lens in an x-direction, c_y is a curvature of the hyperbolic aspheric lens in a y-direction, k_x is a conic coefficient of the hyperbolic aspheric lens in the x-direction, and k_y is the conic coefficient of the hyperbolic aspheric lens in the y-direction. The light source apparatus can further comprise a first dichroic filter and a second dichroic filter, which are arranged over a light-emitting surface of the first light source sub-module and the second light source sub-module respectively and are configured to filter the first beam of the first primary colour light and the second beam of the first primary colour light respectively. It can be further configured such that a far field of the first primary color light is in a Gaussian distribution, and as such, the light source apparatus further comprises a first diffusion plate and a second diffusion plate, which are respectively arranged between the first light source sub-module and the first dichroic filter and between the second light source sub-module and the second dichroic filter. The first diffusion plate and the second diffusion plate are configured to diffuse the first beam of the first primary colour light and the second beam of the first primary colour light, such that a far field thereof is expanded to have a bi-directional flat-top-like distribution. Furthermore, each of the first diffusion plate and the second diffusion plate can be optionally configured to have two-way diffusion characteristics, having a diffusion half angle of approximately 1.2°-1.8° in a horizontal direction and of approximately 0.65°-1.05° in a pitch direction.
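The biconic (hyperbolic aspheric) sag formula given earlier for the collimating sub-eyes can be implemented directly. The sketch below is illustrative only; the numeric parameters are hypothetical placeholders, not values from the disclosure:

```python
import math

# Direct sketch of the biconic (hyperbolic aspheric) sag formula for the
# sub-eye lenses; parameter values below are made up purely for
# illustration.
def biconic_sag(x, y, cx, cy, kx, ky):
    """Surface sag z(x, y) of a biconic lens."""
    num = cx * x**2 + cy * y**2
    root = 1.0 - (1.0 + kx) * cx**2 * x**2 - (1.0 + ky) * cy**2 * y**2
    if root < 0.0:
        raise ValueError("point lies outside the valid aperture")
    return num / (1.0 + math.sqrt(root))

# Hypothetical curvatures and hyperbolic conic constants (k < -1):
print(biconic_sag(0.5, 0.3, cx=0.2, cy=0.2, kx=-1.5, ky=-1.5))
```

Allowing independent c and k per axis is what lets one sub-eye collimate the asymmetric fast-axis/slow-axis divergence of a laser diode.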
In a preferred embodiment, the diffusion half angles of the first diffusion plate and of the second diffusion plate can both be approximately 1.5° in the horizontal direction and approximately 0.85° in the pitch direction. In order to eliminate a speckle of a light field formed by the first primary colour light as much as possible, the light source apparatus can further include one or more of the following means: (1) the transmission zone comprising a fan-shaped diffusion plate; (2) a collimation lens module arranged over a light-emitting surface of the first fluorescence plate; and (3) a collimation compensation lens and a third diffusion plate arranged on an optical path of the first beam of first primary colour light, where the third diffusion plate is configured to undergo a continuous small movement. More specifically, the third diffusion plate can be mechanically coupled (i.e. connected) with a vibration motor, which is configured to have a vibration frequency between 100 Hz and 300 Hz, and the third diffusion plate can have a diffusion half angle between 2° and 3°. The light source apparatus can further include a first focusing lens module and a second focusing lens module, which are respectively arranged between the first dichroic filter and the first fluorescence plate and between the second dichroic filter and the second fluorescence plate. Each of the first focusing lens module and the second focusing lens module comprises, along a light transmission direction, a first focusing lens sub-module and a second focusing lens sub-module, respectively comprising an aspheric lens and a spherical lens. Optionally, an aspheric surface of the aspheric lens can be expressed as:

z = \frac{c \rho^2}{1 + \sqrt{1 - (1 + k) c^2 \rho^2}} + \sum_{n=2}^{7} A_n \rho^{2n}

Herein c is a curvature at a sphere vertex, k is the conic coefficient of the aspherical surface, A_n is a coefficient of the aspherical surface of a higher-order term with n taken as 2-7, and ρ is a normalized radial coordinate.
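The focusing-lens aspheric sag formula above, with its higher-order terms for n = 2-7, can likewise be sketched directly; the coefficient values used below are hypothetical placeholders:

```python
import math

# Sketch of the aspheric sag formula for the focusing-lens sub-modules,
# including the higher-order terms A_n * rho^(2n) for n = 2..7;
# coefficient values are hypothetical placeholders.
def asphere_sag(rho, c, k, A):
    """Sag z(rho); A maps the order n (2..7) to its coefficient A_n."""
    root = 1.0 - (1.0 + k) * c**2 * rho**2
    if root < 0.0:
        raise ValueError("rho lies outside the valid aperture")
    base = c * rho**2 / (1.0 + math.sqrt(root))
    return base + sum(A.get(n, 0.0) * rho**(2 * n) for n in range(2, 8))

# Hypothetical mild asphere with a single 4th-order term (n = 2):
print(asphere_sag(0.4, c=0.1, k=-0.8, A={2: 1e-3}))
```

With all A_n set to zero the formula reduces to the standard conic sag, so the higher-order terms act as corrections on top of the conic base shape.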
In the above embodiments, the first primary colour light is a blue light (B), and the second primary colour light and the third primary colour light are respectively a green light (G) and a red light (R), or are respectively a red light (R) and a green light (G). Furthermore, the generation of both the second and third primary colour lights relies on the excitation of a corresponding dye material by the first primary colour light. However, it should be noted that these configurations only represent illustrative examples, and shall not be deemed to limit the scope of the present disclosure. Other embodiments are also possible. For example, in another embodiment as illustrated inFIG.4B, the light source apparatus4000is configured to separately emit two beams of the third primary colour light (e.g. a first beam of blue light and a second beam of blue light as illustrated inFIG.4B) by means of the first and second light source sub-modules4110and4120respectively. A third fluorescence plate4400(e.g. a spinning colour wheel) having a third fluorescence zone4410(comprising dye #1) and a fourth fluorescence zone4420(comprising dye #2) on a surface thereof facing the first beam of blue light is configured to alternately generate a beam of the first primary colour light (i.e. excited green light) and a beam of the second primary colour light (i.e. excited red light) upon excitation by the first beam of the third primary colour light (i.e. blue light). The beam of the first primary colour light and the beam of the second primary colour light that are alternately generated by the third fluorescence plate4400can be managed to be coupled and outputted as the time-sharing coupled fifth light field. The second beam of the third primary colour light (i.e. blue light) emitted by the second light source sub-module4120directly provides the sixth light field. This embodiment of the light source apparatus4000has a simpler structure.
In yet another example, the light source apparatus4000can be configured to emit the first primary colour light (e.g. by means of the first light source sub-module) and the second primary colour light (e.g. by means of the second light source sub-module), and is further configured to obtain the third primary colour light by excitation on a corresponding fluorescence material by the first primary colour light or the second primary colour light. In one such specific embodiment, the first, second, and third primary colour lights are respectively a blue light (B), a green light (G), and a red light (R), and the light source apparatus is configured to separately emit the blue light and the green light, and is further configured to obtain the red light by excitation on a corresponding fluorescence material by the emitted green light. As such, the outputted fifth light field comprises time-sharing coupled blue light and green light, and the outputted sixth light field comprises the red light. In the following, one specific embodiment (i.e. Embodiment 1) of a laser television projection system is provided as an illustrative example. FIGS.5and6respectively illustrate a front view and a bottom view of a laser television projection system shown in schematic diagrams according to one specific embodiment of the disclosure. As shown, this specific embodiment of the laser television projection system includes a projection light source module1, a homogenization and polarization module2, a half-wave plate5, a PBS prism6, an LCOS light valve assembly8, and a projection lens9. The projection light source module1is configured to provide a first white light field. The first white light field is substantially a coupled light field which comprises a first primary color light, a second primary color light, and a third primary color light.
It is further configured such that the first primary color light is time-sharing coupled with the second primary color light to first obtain a time-sharing coupled light field, and the time-sharing coupled light field is further spatially coupled with the third primary color light to thereby obtain the first white light field. The homogenization and polarization module2is configured to polarize the first white light field to thereby obtain a second white light field. In the second white light field, each light is a polarized light with a preset type, and the preset-type polarized light is one of a P-polarized light or an S-polarized light. The half-wave plate5is configured to phase convert one of the time-sharing coupling light field in the second white light field or the third primary color light to thereby obtain a third white light field. The third white light field includes a P-polarized light and an S-polarized light. The PBS prism6is configured to transmit the P-polarized light in the third white light field, and to reflect the S-polarized light in the third white light field. A first reflected light field obtained by the first LCOS light valve after transmission of the P-polarized light and the second reflected light field obtained by the second LCOS light valve after reflection of the S-polarized light are transmitted through the PBS prism6before shedding into the projection lens9. The LCOS light valve assembly8includes a first LCOS light valve and a second LCOS light valve. In the specific laser television projection system as illustrated inFIGS.5and6, through the combination of the homogenization and polarizing module2and the half-wave plate5, the polarization direction of the time-sharing coupling light field and the polarization direction of the third primary color light in the third white light field are configured to have a difference of 90°. 
Further through the PBS prism6, the time-sharing coupling light field and the third primary color light are respectively transmitted to their corresponding light valves (i.e. the first LCOS light valve and the second LCOS light valve) in the LCOS light valve assembly8. As such, the issue of using too many lens elements in a laser television projection system having the conventional three-piece light valve structure due to the split folding of the relay optical path and other factors can be avoided. The reduced use of lens elements can shorten the back intercept of the projection lens9, and can effectively reduce the thickness of the existing laser television projection system, leading to a reduced overall size and reduced cost. Through the use of the double LCOS light valve architecture, the display brightness and the picture quality can be improved, thereby meeting the demand for high resolution and ultra short focus projection. In this specific embodiment of the laser television projection system, the projection lens9adopts a UST projection lens, and more specifically, a 4K high-resolution ultra short focus projection lens. The projection lens9includes a positive focus lens group, a negative focus lens group, and a reflecting bowl. The projection lens9adopts an object telecentric design in a reflection form, the throw ratio of the projection lens9is less than 0.25, the projection size of the projection lens9is between 80 and 120 inches, and the equivalent rear intercept of the projection lens9in the air is greater than 15.2 mm. It should be noted that other types of projection lens can also be used according to different embodiments of the disclosure. With reference toFIG.5andFIG.6,FIG.5is a front view of a laser television projection system provided by some embodiments of the disclosure, andFIG.6is a bottom view of a laser television projection system provided by some embodiments of the disclosure.
The laser television projection system includes: a projection light source module1, a homogenization and polarization module2, a half-wave plate5, a PBS prism6, an LCOS light valve assembly8, and a projection lens9. The LCOS light valve assembly8includes a first LCOS light valve and a second LCOS light valve; The projection light source module1is configured to provide a first white light field, and the first white light field is a coupled light field comprising a first primary color light, a second primary color light, and a third primary color light. The first primary color light is time-sharing coupled with the second primary color light to obtain a time-sharing coupled light field, and the time-sharing coupled light field is spatially coupled with the third primary color light to obtain the first white light field. The homogenization and polarization module2is configured to polarize the first white light field to obtain a second white light field. In the second white light field, each light is a polarized light with a preset type, and the preset-type polarized light is one of a P-polarized light or an S-polarized light. The half-wave plate5is configured to phase convert one of the time-sharing coupling light field in the second white light field or the third primary color light to obtain a third white light field. The third white light field includes a P-polarized light and an S-polarized light. The PBS prism6is configured to transmit the P-polarized light in the third white light field, and to reflect the S-polarized light in the third white light field. A first reflected light field obtained by the first LCOS light valve after transmission of the P-polarized light and the second reflected light field obtained by the second LCOS light valve after reflection of the S-polarized light are transmitted through the PBS prism6before shedding into the projection lens9. 
In the laser television projection system as illustrated inFIGS.5and6, through the combination of the homogenization and polarization module2and the half-wave plate5, the polarization direction of the time-sharing coupling light field and the polarization direction of the third primary color light in the third white light field will have a difference of 90°. Further through the PBS prism6, the time-sharing coupling light field and the third primary color light are transmitted to the corresponding light valve. As such, the use of too many lens elements in the three-piece light valve structure because of the split folding of the relay optical path and other factors can be avoided. The reduced use of lens elements can shorten the back intercept of the projection lens9, and effectively reduce the thickness of the existing laser television projection system, leading to a reduced overall size and reduced cost. Through the use of the double LCOS light valve architecture, the display brightness and the picture quality can be improved, thereby meeting the demand for high resolution and ultra short focus projection. Furthermore, in one embodiment of the disclosure, the projection lens9can adopt a UST projection lens, and more specifically, a 4K high-resolution ultra short focus projection lens. The projection lens9comprises a positive focus lens group, a negative focus lens group, and a reflecting bowl. The projection lens9adopts an object telecentric design in a reflection form, the throw ratio of the projection lens9is less than 0.25, the projection size of the projection lens9is between 80 and 120 inches, and the equivalent rear intercept of the projection lens9in the air is greater than 15.2 mm. In other embodiments, other types of projection lens can also be used. There is no limitation herein. Furthermore, a quarter-wave plate7can be arranged between the LCOS light valve assembly8and the PBS prism6, which can improve the picture contrast.
Specifically, a first quarter-wave plate 7 can be arranged between the first LCOS light valve and the PBS prism 6, and a second quarter-wave plate can be arranged between the second LCOS light valve and the PBS prism 6. Specifically, the H-ZLAF52A material can be used for the PBS prism 6, which allows the polarization contrast of the PBS prism 6 to be greater than 1000:1, so as to improve the picture contrast. Specifically, the PBS prism 6 can be formed by gluing two triangular prisms together. The glued surface of each triangular prism is plated with a polarizing spectroscopic film: the S-polarized light is reflected at the polarizing spectroscopic film, and the P-polarized light is transmitted through it. Furthermore, FIG. 7 shows a right view of the local structure (i.e., the assembled structure of the PBS prism 6 and the LCOS light valve assembly 8) in the embodiment of the laser television projection system shown in FIG. 6. One of the first LCOS light valve and the second LCOS light valve is arranged over the top of the PBS prism 6, and the other is arranged over a side of the PBS prism 6. The projection lens 9, the PBS prism 6, and the second LCOS light valve are arranged along the same straight line, and the PBS prism 6 is arranged between the projection lens 9 and the second LCOS light valve. Specifically, the first LCOS light valve and the second LCOS light valve can have a size of approximately 0.55 inches, such that the size of the laser television projection system can be reduced. Specifically, the P-polarized light in the third white light field transmits through the polarizing spectroscopic film of the PBS prism 6, and is then reflected back by the first LCOS light valve to thereby obtain the first reflected light field. The first reflected light field is an S-polarized light. The first reflected light field is reflected at the polarizing spectroscopic film of the PBS prism 6, and then transmits to the projection lens 9.
The S-polarized light in the third white light field is reflected at the polarizing spectroscopic film of the PBS prism 6 to the second LCOS light valve, which reflects it back to thereby obtain the second reflected light field. The second reflected light field is a P-polarized light; it transmits through the polarizing spectroscopic film of the PBS prism 6 and is then transmitted to the projection lens 9. Furthermore, the first primary color light is a blue light, the second primary color light is a green light, and the third primary color light is a red light. Alternatively, the first primary color light is a blue light, the second primary color light is a red light, and the third primary color light is a green light. Furthermore, the half-wave plate 5 can be an R (red) half-wave plate, an RB (red and blue) half-wave plate, a G (green) half-wave plate, or a GB (green and blue) half-wave plate. The R half-wave plate is configured to rotate the polarization direction of the red light by 90°; the RB half-wave plate is configured to rotate the polarization directions of the red light and the blue light by 90°; the G half-wave plate is configured to rotate the polarization direction of the green light by 90°; and the GB half-wave plate is configured to rotate the polarization directions of the green light and the blue light by 90°. Specifically, if the first primary color light, the second primary color light, and the third primary color light are respectively a blue light, a green light, and a red light, the half-wave plate 5 can be an R half-wave plate or a GB half-wave plate, as long as the polarization directions of the time-sharing coupled light field and the third primary color light in the third white light field differ by 90°.
If the first primary color light, the second primary color light, and the third primary color light are respectively a blue light, a red light, and a green light, the half-wave plate 5 can be a G half-wave plate or an RB half-wave plate, as long as the polarization directions of the time-sharing coupled light field and the third primary color light in the third white light field differ by 90°. The homogenization and polarization module 2 includes a front fly-eye lens, a rear fly-eye lens, and a PCS polarizing array prism. The first white light field is divided into a plurality of light arrays after passing through the front fly-eye lens, which are focused on the rear surface of the rear fly-eye lens. The PCS polarizing array prism polarizes the light field transmitted through the rear fly-eye lens to thereby obtain the second white light field. Specifically, FIG. 8 shows a structural diagram of the PCS polarizing array prism provided by the embodiment of the disclosure. The PCS polarizing array prism includes first parallel square prisms 201, second parallel square prisms 202, metal plates 203, and half-wave plates 204, each of which is provided in plural. In a direction from bottom to top, they are arranged in an alternating sequence: one first parallel square prism 201, one second parallel square prism 202, one first parallel square prism 201, one second parallel square prism 202, and so on. The two ends of each first parallel square prism 201 and each second parallel square prism 202 are vertical faces. A polarizing spectroscopic film is arranged between each first parallel square prism 201 and its neighboring second parallel square prism 202. A metal plate 203 is arranged at the front end of each first parallel square prism 201 in a one-to-one corresponding manner.
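The plate-selection rule in the preceding paragraphs — rotate either both primaries of the time-sharing coupled field or the single third primary, so that the two fields end up 90° apart — can be expressed as a small hypothetical helper. The function name and the letter-ordering heuristic are ours, mirroring the R/RB/G/GB labels above.

```python
def valid_half_wave_plates(first, second, third):
    """Return the plate labels (per the R/RB/G/GB naming) that leave the
    time-sharing coupled field and the third primary 90 degrees apart:
    either a plate acting on both coupled primaries, or a plate acting
    on the third primary alone."""
    order = "RGB"  # canonical letter order used by the labels above
    initials = {first[0].upper(), second[0].upper()}
    pair_label = "".join(ch for ch in order if ch in initials)
    return {pair_label, third[0].upper()}

# Blue + green time-sharing coupled, red spatially coupled:
# an R plate or a GB plate both satisfy the 90-degree condition.
plates = valid_half_wave_plates("blue", "green", "red")
```

For the alternative assignment (blue, red, green), the same helper yields the G and RB plates, matching the text.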
The light field transmitted through the rear fly-eye lens passes through the space between every two adjacent metal plates 203 to thereby enter a second parallel square prism 202. When a half-wave plate 204 is arranged at the back end of each first parallel square prism 201, corresponding to the first parallel square prisms 201 in a one-to-one relationship, an S-polarized light in the first white light field is reflected by the polarizing spectroscopic film, sheds vertically onto an adjacent polarizing spectroscopic film, and is reflected there to then shed horizontally from the back end of the second parallel square prism 202; whereas a P-polarized light in the first white light field is transmitted through the polarizing spectroscopic film and sheds from the back end of the first parallel square prism 201, and when passing through the half-wave plate 204, its polarization direction rotates by 90°. As such, the lights included in the second white light field are all S-polarized lights. When a half-wave plate 204 is arranged at the back end of each second parallel square prism 202, corresponding to the second parallel square prisms 202 in a one-to-one relationship, a P-polarized light in the first white light field is transmitted through the polarizing spectroscopic film and sheds from the back end of the first parallel square prism 201; whereas an S-polarized light in the first white light field is reflected by the polarizing spectroscopic film, sheds vertically onto an adjacent polarizing spectroscopic film, and is reflected. When it transmits horizontally through the half-wave plate 204, its polarization direction rotates by 90°. As such, the lights included in the second white light field are all P-polarized lights. A shaping lens module 3 and a folding mirror 4 are arranged between the homogenization and polarization module 2 and the half-wave plate 5. The shaping lens module 3 includes a first lens and a second lens.
After the second white light field transmitted through the homogenization and polarization module 2 passes through the first lens, it is reflected by the folding mirror 4 toward the second lens, and then passes through the second lens to the half-wave plate 5. The shaping lens module 3, the front fly-eye lens, and the rear fly-eye lens are arranged in the manner required to form a Köhler lighting system, so as to form a uniform lighting field at the exit pupil position of the shaping lens module 3. In the laser television projection system provided by the embodiment of the disclosure, the first white light field is divided into a plurality of light arrays after passing through the front fly-eye lens, which are then focused on the rear surface of the rear fly-eye lens and then pass through the spaces between adjacent metal plates 203. As such, the utilization ratio of the first white light field can be ensured, and the loss of the second white light field at the PCS polarizing array prism can be reduced. Furthermore, the horizontal position of each half-wave plate 204 coincides with the horizontal position of the space between adjacent metal plates 203, which can further improve the utilization ratio of the first white light field and further reduce the loss of the second white light field at the PCS polarizing array prism. Furthermore, in another embodiment of the disclosure, FIG. 9 shows a structural diagram of the projection light source module 1. The projection light source module 1 includes: a light source module 101, a first dichroic filter 103, a second dichroic filter 104, a first fluorescence wheel (i.e., first colour wheel) 106, a second fluorescence wheel (i.e., second colour wheel) 107, a first reflector 109, a second reflector 110, and a third reflector 111.
The light source module 101 is configured to emit two parallel first primary color lights, one of which is directed to the first fluorescence wheel 106 and the other to the second fluorescence wheel 107. The first fluorescence wheel 106 includes a transmission zone, a first fluorescence zone, and a heat dissipation substrate that corresponds to the first fluorescence zone. The side of the heat dissipation substrate that is close to the first fluorescence zone is coated with a specular high-reflection film (or a mirror-surface highly reflective surface or the like). The first primary color light passes through the second dichroic filter 104 that is arranged between the light source module 101 and the first fluorescence wheel 106, and then sheds onto the first fluorescence wheel 106. When the first primary color light sheds on the first fluorescence zone, it excites fluorescence to thereby obtain the second primary color light. The second primary color light is reflected by the specular high-reflection film, sheds onto the second dichroic filter 104, and is reflected by it, so as to obtain a horizontal light field of the second primary color light. When the first primary color light sheds on the transmission zone, it is transmitted therethrough, and the first primary color light transmitted through the transmission zone is sequentially reflected by the first reflector 109, the second reflector 110, and the third reflector 111, so as to obtain a horizontal light field of the first primary color light. After the horizontal light field of the first primary color light passes through the second dichroic filter 104, it is coupled with the horizontal light field of the second primary color light in a time-sharing manner to thereby obtain the time-sharing coupled light field. The second fluorescence wheel 107 includes a second fluorescence zone and a heat dissipation substrate that corresponds to the second fluorescence zone.
The side of the heat dissipation substrate that is close to the second fluorescence zone is coated with a specular high-reflection film (or a mirror-surface highly reflective surface or the like). The first primary color light passes through the first dichroic filter 103 that is arranged between the light source module 101 and the second fluorescence wheel 107, and then sheds onto the second fluorescence zone of the second fluorescence wheel 107, where it excites fluorescence to thereby obtain the third primary color light. The third primary color light is reflected by the specular high-reflection film, sheds onto the first dichroic filter 103, and is reflected by it, so as to obtain a horizontal light field of the third primary color light. Specifically, the first fluorescence wheel 106 is provided with a motor and a controller, and the controller controls the motor to maintain the rotation speed of the first fluorescence wheel 106 at 7,200 rpm or 14,400 rpm; the second fluorescence wheel 107 is also provided with a motor and a controller, and the controller controls the motor to maintain the rotation speed of the second fluorescence wheel 107 at 7,200 rpm or 14,400 rpm. Furthermore, the light source module 101 includes a first array laser light source sub-module and a second array laser light source sub-module, and the structures of the two sub-modules are the same. Each of the first array laser light source sub-module and the second array laser light source sub-module includes a laser diode array, a heat conducting copper plate, a heat conducting tube, a heat sink, a collimating lens array, a power supply, a control system, and a fan. The laser diode array is evenly arranged on the heat conducting copper plate. One end of the heat conducting tube is inserted in the heat conducting copper plate; the heat conducting tube contains refrigerant, and the other end of the heat conducting tube is connected with the heat sink.
Under the action of the fan, the heat of the heat sink is carried away. The power supply is a constant-current power supply. The control system uses PWM (pulse width modulation) to control the amplitude of the current, to control the on-off status and the intensity of the current in the laser diode array, and to monitor the temperature of each target stripe of the laser diode array as well as the rotation speed of the fan. The collimating lens array is arranged at the front end of the laser diode array, and the sub-eyes on the lens surface of the collimating lens array are arranged to correspond to the laser diodes in the laser diode array in a one-to-one relationship. Each sub-eye on the lens surface of the collimating lens array adopts a hyperbolic aspheric lens, and the equation of the curved surface of the hyperbolic aspheric lens is expressed as:

z = −(Cx·x² + Cy·y²) / (1 + √(1 − (1 + Kx)·Cx²·x² − (1 + Ky)·Cy²·y²));

Herein, Cx is the curvature of the hyperbolic aspheric lens in the x-direction, Cy is the curvature in the y-direction, Kx is the conic coefficient in the x-direction, and Ky is the conic coefficient in the y-direction. By adopting the sub-eye structure mentioned above, the laser field can be collimated better and the beam divergence angle can be reduced. Furthermore, the far field of the first primary color light is in a Gaussian distribution. A diffusion plate 102 is arranged between the light source module 101 and the first dichroic filter 103 or the second dichroic filter 104, and the diffusion plate 102 includes a first diffusion plate and a second diffusion plate.
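The hyperbolic (biconic) surface equation above can be evaluated numerically for sanity-checking coefficients. This is a sketch only; the sign convention is taken directly from the equation as written, and the function name is ours.

```python
import math

def biconic_sag(x, y, cx, cy, kx, ky):
    """Sag z(x, y) of the collimating sub-eye surface:
    z = -(Cx*x^2 + Cy*y^2) / (1 + sqrt(1 - (1+Kx)*Cx^2*x^2 - (1+Ky)*Cy^2*y^2))."""
    num = cx * x ** 2 + cy * y ** 2
    root = 1.0 - (1.0 + kx) * cx ** 2 * x ** 2 - (1.0 + ky) * cy ** 2 * y ** 2
    return -num / (1.0 + math.sqrt(root))

# The surface passes through the origin, and with Kx = Ky = -1 (hyperbolic
# limit where the square root term is 1) the sag reduces to -(Cx*x^2 + Cy*y^2)/2.
```

Separate x and y curvatures and conic coefficients let the sub-eye match the asymmetric divergence of a laser diode's fast and slow axes, which is why a rotationally symmetric surface is not used here.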
Correspondingly, the first diffusion plate is arranged between the first array laser light source sub-module and the second dichroic filter 104, and the second diffusion plate is arranged between the second array laser light source sub-module and the first dichroic filter 103. The first diffusion plate and the second diffusion plate are configured to diffuse the first primary color light, so that the far field of the first primary color light is expanded into a bi-directional flat-top-like distribution. Specifically, the first diffusion plate is configured to reflect the second primary color light in the light field that is excited and returned (i.e., the excited returned light field), and is also configured to filter out the lights other than the second primary color light in the excited returned light field. The second diffusion plate is configured to reflect the third primary color light in the excited returned light field, and is also configured to filter out the lights other than the third primary color light in the excited returned light field. As such, the requirements of white-field color matching can be met. Specifically, if the first fluorescence zone on the first fluorescence wheel 106 is provided with a green fluorescent plate, only lights with wavelengths of more than 500 nm are retained in the light field reflected by the second dichroic filter 104. If the second fluorescence zone on the second fluorescence wheel 107 is provided with a red fluorescent plate, only lights with wavelengths of more than 600 nm are retained in the light field reflected by the first dichroic filter 103. The first diffusion plate and the second diffusion plate are configured to each have two-way diffusion characteristics, in which the diffusion half angle is controlled to be 1.2°-1.8° in the horizontal direction and 0.65°-1.05° in the pitch direction. Preferably, the diffusion half angle is approximately 1.5° in the horizontal direction and approximately 0.85° in the pitch direction.
Furthermore, a focusing lens module 105 is further included in the system, which includes a first focusing lens sub-module and a second focusing lens sub-module. The first focusing lens sub-module is arranged between the first dichroic filter 103 and the second fluorescence wheel 107. The first focusing lens sub-module is configured to focus the first primary color light that is transmitted through the first dichroic filter 103 on the second fluorescence wheel 107 to thereby form a strip facula, and to collimate the third primary color light that is excited at the second fluorescence wheel 107 into a parallel light field, which then sheds to the first dichroic filter 103. The second focusing lens sub-module is arranged between the second dichroic filter 104 and the first fluorescence wheel 106. The second focusing lens sub-module is configured to focus the first primary color light that is transmitted through the second dichroic filter 104 on the first fluorescence wheel 106 to thereby form a strip facula, and to collimate the second primary color light that is excited at the first fluorescence wheel 106 into a parallel light field, which then sheds to the second dichroic filter 104. Furthermore, the structure of the first focusing lens sub-module and the structure of the second focusing lens sub-module are substantially the same. Each of the first focusing lens sub-module and the second focusing lens sub-module includes an aspheric lens and a spherical lens. The aspheric lens is arranged closer to the light source module 101, and the spherical lens is arranged closer to the first fluorescence wheel 106 or the second fluorescence wheel 107.
The equation of the aspheric surface of the aspheric lens can be expressed as:

z = c·ρ² / (1 + √(1 − (1 + k)·c²·ρ²)) + Σ An·ρ^(2n);

Herein, c is the curvature at the surface vertex, k is the conic coefficient of the aspherical surface, An is the aspherical coefficient of the higher-order terms, n is taken as 2-7, and ρ is the normalized radial coordinate. By use of the above-mentioned focusing lens module 105, the aberration can be reduced, leading to a better focusing effect, which is conducive to focusing on the first fluorescence wheel 106 and the second fluorescence wheel 107 to form a strip facula. The combined focal length f of each of the first focusing lens sub-module and the second focusing lens sub-module is controlled to be between 15 mm and 20 mm, and the maximum aperture is controlled to be within 32 mm. Furthermore, the transmission zone can be provided with a fan-shaped diffusion plate, and a collimation lens module 108 is arranged between the first fluorescence wheel 106 and the first reflector 109, which can eliminate the speckle of the blue light field. Specifically, the diffusion half angle of the fan-shaped diffusion plate is controlled to be between 3.5° and 6.5°. A collimation compensation lens 112 and a third diffusion plate 113 are sequentially arranged between the third reflector 111 and the second dichroic filter 104. The third diffusion plate 113 is configured to move cyclically left and right, or up and down. Specifically, the third diffusion plate 113 can be slightly shifted left and right or up and down by mounting a vibration motor on the third diffusion plate 113, and the vibration frequency of the motor is controlled to be between 100 Hz and 300 Hz, which can eliminate the speckle of the blue light field. Specifically, the diffusion half angle of the third diffusion plate 113 is controlled to be between 2° and 3°. Preferably, the diffusion half angle of the third diffusion plate 113 is approximately 2.5°.
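The aspheric sag formula above — a conic base term plus even polynomial terms with n running from 2 to 7 — can likewise be evaluated directly. A small sketch for checking coefficient sets (the function name is ours):

```python
import math

def asphere_sag(rho, c, k, coeffs):
    """z = c*rho^2 / (1 + sqrt(1 - (1+k)*c^2*rho^2)) + sum(A_n * rho^(2n), n = 2..7).
    coeffs holds A_2 through A_7 in order."""
    base = c * rho ** 2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * c ** 2 * rho ** 2))
    poly = sum(a * rho ** (2 * n) for n, a in zip(range(2, 8), coeffs))
    return base + poly

# With k = -1 (parabolic base) and all higher-order terms zero,
# the sag reduces to c * rho^2 / 2.
```

Because ρ is normalized, the coefficients An are dimensionless scalings of the departure from the conic base, which is what gives the designer the extra degrees of freedom to suppress aberration at the strip-facula focus.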
Furthermore, the transmission zone is configured to have a fan-shaped structure (i.e., a sector structure), and the first fluorescence zone is configured to have a fan-shaped ring structure. The angle of the transmission zone is 89°-91°, and the angle of the first fluorescence zone is 269°-271°. The angle of the transmission zone is configured to be complementary to the angle of the first fluorescence zone, which is conducive to the realization of the white-light color-matching field. When the angle of the transmission zone is 90°, the angle of the first fluorescence zone is 270°, which can better meet the matching requirements. The projection light source module for the laser television projection system provided by the embodiment of the disclosure realizes the time-sharing coupling of the first primary color light and the second primary color light to thereby obtain the time-sharing coupled light field, and further realizes the spatial coupling of the time-sharing coupled light field and the third primary color light to thereby obtain the first white light field. In addition, the double-route fluorescence wheel is used to realize RGB color matching, which can improve the display color gamut and solve the issue of speckles in pure laser display, thereby reproducing the objective world with rich and gorgeous colors. By simply folding the optical path, the system layout is made compact, which can effectively reduce the thickness of the existing laser TV, reduce the overall form factor, and reduce the cost.
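The wheel geometry above fixes how the blue and fluorescence portions share each revolution: at 7,200 rpm the wheel completes 120 revolutions per second, and a 90° transmission zone passes blue for a quarter of each revolution while the 270° fluorescence zone fills the remainder. A small sketch of that arithmetic (function names are ours):

```python
def revolutions_per_second(rpm):
    """One blue/fluorescence time-sharing cycle per wheel revolution."""
    return rpm / 60.0

def zone_duty(angle_deg):
    """Fraction of each revolution a zone spends in the beam."""
    return angle_deg / 360.0

cycle_hz = revolutions_per_second(7200)   # 120 cycles per second
blue_duty = zone_duty(90)                 # 0.25 of each revolution
fluor_duty = zone_duty(270)               # 0.75; the two zones sum to a full turn
```

The complementarity check (the two duties summing to exactly 1) is the numerical form of the white-light color-matching condition stated above; at 14,400 rpm the cycle rate simply doubles to 240 Hz with the same duty split.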
DESCRIPTION OF THE EMBODIMENTS
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. In this regard, directional terminology, such as “top,” “bottom,” “front,” “back,” etc., is used with reference to the orientation of the Figure(s) being described. The components of the present invention can be positioned in a number of different orientations. As such, the directional terminology is used for purposes of illustration and is in no way limiting. On the other hand, the drawings are only schematic and the sizes of components may be exaggerated for clarity. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Unless limited otherwise, the terms “connected,” “coupled,” and “mounted” and variations thereof herein are used broadly and encompass direct and indirect connections, couplings, and mountings. Similarly, the terms “facing,” “faces” and variations thereof herein are used broadly and encompass direct and indirect facing, and “adjacent to” and variations thereof herein are used broadly and encompass directly and indirectly “adjacent to”. Therefore, the description of “A” component facing “B” component herein may contain the situations that “A” component directly faces “B” component or one or more additional components are between “A” component and “B” component.
Also, the description of “A” component “adjacent to” “B” component herein may contain the situations that “A” component is directly “adjacent to” “B” component or one or more additional components are between “A” component and “B” component. Accordingly, the drawings and descriptions will be regarded as illustrative in nature and not as restrictive. FIG. 1A is a light path schematic diagram of excitation light beams of an illumination system according to an embodiment of the disclosure, FIG. 1B is a light path schematic diagram of a converted light beam of the illumination system of FIG. 1A, and FIG. 1C is a schematic front view of an excitation light source array of FIG. 1A. Referring to FIG. 1A, FIG. 1B, and FIG. 1C, an illumination system 100 of the embodiment includes an excitation light source array 110, a multi-region dichroic device 200, a color sequence generator 300, a wavelength converter 400, and a light uniformizing element 120. Front views of the multi-region dichroic device 200, the color sequence generator 300, and the wavelength converter 400 are respectively illustrated next to the light path diagrams of FIG. 1A and FIG. 1B. The excitation light source array 110 is configured to emit a plurality of excitation light beams 111. In the embodiment, the excitation light source array 110 includes a plurality of light emitting elements 112 arranged in an array. The excitation light source array 110 is, for example, a laser diode array, and the light emitting elements 112 are, for example, laser diodes. However, in other embodiments, the light emitting elements 112 may also be light emitting diodes or other light emitting elements. In the embodiment, the excitation light beams 111 are, for example, blue light beams. The multi-region dichroic device 200 has a plurality of first dichroic regions 210 and a plurality of non-dichroic regions 220 arranged alternately in stripe shapes, where the first dichroic regions 210 are respectively arranged on transmission paths of the excitation light beams 111.
The color sequence generator 300 has a light-transmitting region 310 and at least one second dichroic region 320 (in FIG. 1A, three second dichroic regions 322, 324, and 326 are taken as an example for description), where the excitation light beams 111 from the excitation light source array 110 are transmitted to the color sequence generator 300 through the first dichroic regions 210 of the multi-region dichroic device 200. In the embodiment, the first dichroic regions 210 are, for example, formed by a plurality of dichroic films that are spaced apart from each other and coated on a surface of a transparent substrate (for example, the surface facing away from the excitation light source array 110), and the dichroic films allow the blue light beams (i.e., the excitation light beams 111) to penetrate through and reflect light beams of other colors. In other words, the first dichroic regions 210 respectively allow the excitation light beams 111 to penetrate through and transmit them to the color sequence generator 300. In the embodiment, the color sequence generator 300 is, for example, a color wheel, which includes the light-transmitting region 310 and the at least one second dichroic region 320 (for example, the second dichroic regions 322, 324, and 326); the color sequence generator 300 is suitable for rotating so that the light-transmitting region 310 and the second dichroic regions 322, 324, and 326 are sequentially cut into the transmission paths of the excitation light beams 111, where the light-transmitting region 310 of the color sequence generator 300 may be a light diffusing region, which may diffuse the blue excitation light beams (i.e., the excitation light beams 111) to achieve an effect of suppressing a speckle phenomenon, and the second dichroic regions 322, 324, and 326, for example, respectively allow red light, yellow light, and green light to penetrate through.
When the light-transmitting region 310 is located on the transmission paths of the excitation light beams 111, the excitation light beams 111 penetrate through the light-transmitting region 310 and are transmitted to the light uniformizing element 120. When the at least one second dichroic region 320 is located on the transmission paths of the excitation light beams 111, the at least one second dichroic region 320 respectively reflects the excitation light beams 111 to the non-dichroic regions 220 of the multi-region dichroic device 200. The excitation light beams 111 reflected from the color sequence generator 300 are transmitted to the wavelength converter 400 through the non-dichroic regions 220 of the multi-region dichroic device 200. In the embodiment, the non-dichroic regions 220 are, for example, a plurality of reflective mirror regions respectively reflecting the excitation light beams 111 reflected by the at least one second dichroic region 320 to the wavelength converter 400. In the embodiment, the reflective mirror regions are, for example, formed by a plurality of reflective films that are spaced apart from each other and coated on a surface of a transparent substrate (for example, the surface facing away from the excitation light source array 110), and the reflective films are, for example, metallic coating films or non-metallic reflective films. The wavelength converter 400 converts the excitation light beams 111 into a converted light beam 411. The converted light beam 411 is transmitted back to the multi-region dichroic device 200, the converted light beam 411 is transmitted to the color sequence generator 300 through the multi-region dichroic device 200, and at least a portion of the converted light beam 411 penetrates through the at least one second dichroic region 320. In the embodiment, the wavelength converter 400 is a wavelength conversion wheel suitable for rotation and has a wavelength conversion region 410.
The wavelength conversion region 410 is a complete ring-shaped region, and when the wavelength conversion wheel rotates, the excitation light beams 111 irradiate within the complete ring-shaped region. In the embodiment, the wavelength conversion region 410 is, for example, provided with a phosphor layer. The wavelength converter 400 has a reflective substrate 405, such as a metal substrate, and the wavelength conversion region 410 is, for example, formed by a phosphor layer coated on a surface of the reflective substrate 405. The reflective substrate 405 may present a complete round shape (a disk shape). In the embodiment, when the excitation light beam 111 of the blue color irradiates the phosphor layer, it excites the converted light beam 411 with a yellow-green color, and the reflective substrate 405 reflects at least a portion of the converted light beam 411 transmitted toward the reflective substrate 405 to the multi-region dichroic device 200. In the embodiment, when the converted light beam 411 with the yellow-green color from the wavelength converter 400 is transmitted back to the first dichroic regions 210 and the non-dichroic regions 220 of the multi-region dichroic device 200, the first dichroic regions 210 reflect light beams of all colors except the blue light beams, and the non-dichroic regions 220 reflect all of the light beams. Therefore, the first dichroic regions 210 and the non-dichroic regions 220 may all reflect the converted light beam 411 to the color sequence generator 300. At this moment, the converted light beam 411 with the yellow-green color irradiates the second dichroic regions 320 and is filtered by the second dichroic regions 320.
For example, the second dichroic regions 322, 324, and 326 are sequentially cut into a transmission path of the converted light beam 411, and a portion of the converted light beam 411 sequentially penetrates through the second dichroic regions 322, 324, and 326 to respectively form a red light beam, a yellow light beam, and a green light beam for being transmitted to the light uniformizing element 120. In this way, in collaboration with the excitation light beams 111 of the blue color that penetrate through the light-transmitting region 310 when the light-transmitting region 310 is cut into the transmission paths of the excitation light beams 111, the illumination system 100 may sequentially provide the blue light beam, the red light beam, the yellow light beam, and the green light beam to a subsequent light valve (which will be introduced in the following content) to form a color image. In addition, the excitation light beams 111 and the converted light beam 411 penetrating through the color sequence generator 300 may be transmitted to the light uniformizing element 120 to uniformize an intensity distribution of the light beams. In the embodiment, the light uniformizing element 120 is a light integration rod. However, in other embodiments, the light uniformizing element 120 may also be a lens array. In the embodiment, the illumination system 100 further includes a light converging lens 130 disposed between the multi-region dichroic device 200 and the color sequence generator 300; the light converging lens 130 is configured to converge the excitation light beams 111 from the first dichroic regions 210 on the color sequence generator 300, and the light converging lens 130 is configured to transmit the excitation light beams 111 reflected by the at least one second dichroic region 320 to the non-dichroic regions 220.
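The output sequence described above — blue while the light-transmitting region is in the beam, then red, yellow, and green as the second dichroic regions filter the yellow-green converted beam — can be sketched as a simple per-revolution generator. The region names and ordering follow the description; the code itself is illustrative.

```python
# Regions of the color sequence generator 300 in the order they cut into
# the beam, paired with the light each one passes toward the light valve.
REGION_OUTPUT = [
    ("light-transmitting region 310", "blue"),   # excitation beams pass through
    ("second dichroic region 322", "red"),       # filters the converted beam
    ("second dichroic region 324", "yellow"),
    ("second dichroic region 326", "green"),
]

def color_sequence(revolutions):
    """Yield the color delivered downstream as the color wheel rotates."""
    for _ in range(revolutions):
        for _region, color in REGION_OUTPUT:
            yield color

sequence = list(color_sequence(1))
```

Because the wavelength conversion region 410 is a complete ring, only this wheel's rotation determines the color timing, which is why the two wheels need no synchronization.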
In the embodiment, a central optical axis A of the light converging lens 130 has an off-axis amount F relative to a central optical axis E of the excitation light source array 110, i.e., a distance between the central optical axis A of the light converging lens 130 and the central optical axis E of the excitation light source array 110 is greater than zero. In this way, a light path of the excitation light beams 111 passing through the light converging lens 130 and a light path of the excitation light beams 111 reflected by the at least one second dichroic region 320 and passing through the light converging lens 130 do not overlap. The off-axis amount F is configured to make the excitation light beams 111 originally penetrating through the first dichroic regions 210 and reflected by the at least one second dichroic region 320 of the color sequence generator 300 be transmitted to the non-dichroic regions 220 adjacent to the first dichroic regions 210 (for example, the adjacent non-dichroic regions 220 located at the upper right of the first dichroic regions 210 in FIG. 1A). In the embodiment, the illumination system 100 further includes at least one light converging lens 140 (two light converging lenses 140 are taken as an example for description in FIG. 1A), which is arranged between the multi-region dichroic device 200 and the wavelength converter 400, where the light converging lens 140 converges the excitation light beams 111 on the wavelength converter 400, i.e., on the phosphor layer, so as to increase the energy exciting the phosphor layer. In the illumination system 100 of the embodiment, since the excitation light beams 111 emitted by the excitation light source array 110 are transmitted to the color sequence generator 300 to form passing light beams and light beams reflected to the wavelength converter 400 at different timings, the structure of the wavelength converter 400 may be effectively simplified.
For example, the reflective substrate 405 of the wavelength converter 400 may accordingly present a complete round shape without a notch, which may increase a structural strength and a heat dissipation area of the wavelength converter 400. Since the reflective substrate 405 presents the complete round shape, the wavelength converter 400 itself is relatively balanced and symmetrical, and dynamic balance adjustment work hours of the wavelength converter 400 may be shortened. In addition, compared to the prior art, in which the blue light passing through the notch of the phosphor wheel is combined with phosphor light by means of a plurality of reflecting mirrors, the embodiment uses the multi-region dichroic device 200 and the color sequence generator 300 to split light, which may simplify the light paths of the illumination system 100 and reduce a volume of the illumination system 100. Moreover, in the illumination system 100 of the embodiment, the wavelength conversion region 410 on the wavelength converter 400 is a complete ring-shaped region, so that the rotation of the color sequence generator 300 and the rotation of the wavelength converter 400 need not be synchronized. In this way, complexity of electronic control is reduced, and the costs of the illumination system 100 are effectively lowered. FIG. 2 is a schematic front view of a wavelength converter of an illumination system according to another embodiment of the disclosure. Referring to FIG. 1A and FIG. 2, the illumination system of the embodiment is similar to the illumination system 100 of FIG. 1A, and the difference between them is that the illumination system of the embodiment adopts a wavelength converter 400a to replace the wavelength converter 400 in the illumination system 100 of FIG. 1A. In the wavelength converter 400a of the embodiment, a wavelength conversion region 410a is a C-shaped region, and when the wavelength converter 400a rotates, the excitation light beams 111 irradiate within the C-shaped region.
In the embodiment, the wavelength conversion region 410a is, for example, provided with three types of phosphor layers (wavelength conversion layers). The wavelength conversion region 410a is provided with a first wavelength conversion layer 412, a second wavelength conversion layer 414, and a third wavelength conversion layer 416. The first wavelength conversion layer 412, the second wavelength conversion layer 414, and the third wavelength conversion layer 416 are sequentially cut into the transmission paths of the excitation light beams 111 coming from the multi-region dichroic device 200. When the first wavelength conversion layer 412 is located on the transmission paths of the excitation light beams 111, the first wavelength conversion layer 412 converts the excitation light beams 111 into a first converted light beam. When the second wavelength conversion layer 414 is located on the transmission paths of the excitation light beams 111, the second wavelength conversion layer 414 converts the excitation light beams 111 into a second converted light beam. When the third wavelength conversion layer 416 is located on the transmission paths of the excitation light beams 111, the third wavelength conversion layer 416 converts the excitation light beams 111 into a third converted light beam. The first converted light beam, the second converted light beam, and the third converted light beam are sequentially reflected to the multi-region dichroic device 200 by the reflective substrate 405, and then reflected to the color sequence generator 300 by the multi-region dichroic device 200.
The at least one second dichroic region 320 includes a first color filter region corresponding to the first converted light beam (i.e., the second dichroic region 322), a second color filter region corresponding to the second converted light beam (i.e., the second dichroic region 324), and a third color filter region corresponding to the third converted light beam (i.e., the second dichroic region 326) for respectively allowing a portion of the first converted light beam, a portion of the second converted light beam, and a portion of the third converted light beam to penetrate through. For example, the first converted light beam, the second converted light beam, and the third converted light beam are respectively an orange light beam, a yellow light beam, and a green light beam, and the second dichroic region 322, the second dichroic region 324, and the second dichroic region 326 may respectively filter the orange light beam, the yellow light beam, and the green light beam into a red light beam, a purer yellow light beam, and a purer green light beam. In other embodiments, the wavelength conversion region 410a may also have only two wavelength conversion layers, such as the first wavelength conversion layer 412 and the third wavelength conversion layer 416, and the second dichroic region 320 may also have only two color filter regions, such as the second dichroic region 322 and the second dichroic region 326, but the disclosure does not limit the numbers of the wavelength conversion layers and the color filter regions. In addition, in the embodiment, the rotation of the color sequence generator 300 and the rotation of the wavelength converter 400a are synchronized, and the two may be synchronized by a synchronization circuit electrically connected to the color sequence generator 300 and the wavelength converter 400a. FIG. 3 is a light path schematic diagram of excitation light beams of an illumination system according to another embodiment of the disclosure.
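The synchronization requirement above (each wavelength conversion layer must face the excitation path while its matching color filter region faces the converted-light path) can be sketched as follows. The three-equal-segment layout, the angles, and the phase convention are illustrative assumptions; only the reference numerals come from the text:

```python
# Hypothetical phase model: wavelength converter 400a carries layers
# 412/414/416; color sequence generator 300 carries matching filter
# regions 322/324/326. Both wheels are divided here into three equal
# 120-degree segments for illustration (the text's light-transmitting
# segment is ignored for simplicity).
def segment_index(angle_deg, n_segments=3):
    """Which of n equal segments a wheel angle falls in."""
    return int((angle_deg % 360.0) // (360.0 / n_segments))

def is_synchronized(converter_angle_deg, generator_angle_deg, phase_offset_deg=0.0):
    """True when both wheels present matching segments to the light path."""
    return (segment_index(converter_angle_deg) ==
            segment_index(generator_angle_deg - phase_offset_deg))

print(is_synchronized(10.0, 10.0))   # True  (layer 412 aligned with filter 322)
print(is_synchronized(10.0, 130.0))  # False (layer 412 but filter 324 in path)
```

A synchronization circuit, as the text notes, would hold such a phase relation between the two motors.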
Referring to FIG. 3, an illumination system 100b of the embodiment is similar to the illumination system 100 of FIG. 1A, and the differences between them are as follows. In the illumination system 100b of the embodiment, a focal point F1 at which the excitation light beams 111 are converged by the light converging lens 140 does not coincide with a wavelength conversion surface S1 of the wavelength converter 400, and the wavelength conversion surface S1 is, for example, a surface of the phosphor layer or a surface of the reflective substrate 405. In this way, a light spot irradiated on the phosphor layer (the wavelength conversion layer) of the wavelength converter 400 becomes large, a peak value of light intensity of the light spot is thereby reduced, and a risk that the phosphor layer is burned by heat is also lowered. FIG. 4A is a light path schematic diagram of excitation light beams of an illumination system according to still another embodiment of the disclosure, and FIG. 4B is a light path schematic diagram of a converted light beam of the illumination system of FIG. 4A. Referring to FIG. 4A and FIG. 4B, an illumination system 100c of the embodiment is similar to the illumination system 100 of FIG. 1A, and the differences between them are as follows. The illumination system 100c of the embodiment further includes a lens array or a diffuser plate 150c disposed between the multi-region dichroic device 200 and the wavelength converter 400 to uniformize light spots of the excitation light beams 111 formed on the wavelength converter 400. In this way, a peak value of light intensity of the light spots is reduced, and the risk that the phosphor layer is burned by heat is also lowered. FIG. 5A is a light path schematic diagram of excitation light beams of an illumination system according to yet another embodiment of the disclosure, and FIG. 5B is a light path schematic diagram of a converted light beam of the illumination system of FIG. 5A.
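The benefit of placing the focal point F1 away from the wavelength conversion surface S1, as in the FIG. 3 embodiment above, can be estimated with simple cone geometry. The cone half-angle and in-focus spot radius below are invented for illustration; the patent specifies neither:

```python
# Illustrative estimate (not from the patent): defocusing a converging
# cone of light enlarges the spot on the phosphor and lowers its peak
# irradiance, reducing the risk of burning the phosphor layer.
import math

def spot_radius(dz_mm, half_angle_deg, focal_radius_mm=0.05):
    """Geometric spot radius when the focus sits dz before the surface."""
    return focal_radius_mm + dz_mm * math.tan(math.radians(half_angle_deg))

def peak_irradiance_ratio(dz_mm, half_angle_deg, focal_radius_mm=0.05):
    """Peak irradiance relative to the in-focus spot (same total power)."""
    r0 = focal_radius_mm
    r = spot_radius(dz_mm, half_angle_deg, focal_radius_mm)
    return (r0 / r) ** 2

# With a 10-degree cone, a 1 mm defocus enlarges a 0.05 mm spot to
# ~0.23 mm, cutting the peak irradiance to about 5% of the in-focus value.
print(round(peak_irradiance_ratio(1.0, 10.0), 3))  # 0.049
```

The diffuser-plate embodiment of FIG. 4A achieves a similar peak-irradiance reduction by spreading the spot rather than defocusing it.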
Referring to FIG. 5A and FIG. 5B, an illumination system 100d of the embodiment is similar to the illumination system 100 of FIG. 1A, and the differences between them are as follows. In a multi-region dichroic device 200d of the illumination system 100d of the embodiment, the first dichroic regions 210d respectively reflect the excitation light beams 111 to the color sequence generator 300. In the embodiment, the first dichroic regions 210d may be formed by a plurality of strip-shaped dichroic films spaced apart from each other on a surface of a transparent substrate (for example, the surface facing the excitation light source array 110), and the dichroic films may reflect the blue light beams and allow light beams of other colors to penetrate through. In addition, the non-dichroic regions 220d are a plurality of light-transmitting regions respectively allowing the excitation light beams 111 reflected by the at least one second dichroic region 320 to penetrate through to be transmitted to the wavelength converter 400. In the embodiment, at least one of two surfaces of the transparent substrate of the multi-region dichroic device 200d may be plated with an anti-reflection layer, so that light transmission efficiency of the light-transmitting region is improved, and the light-transmitting region is, for example, a transparent region. However, in another embodiment, the light-transmitting region may also be a hollow region of the transparent substrate. After the wavelength converter 400 converts the excitation light beams 111 into the converted light beam 411 with the yellow-green color, the converted light beam 411 may penetrate through the first dichroic regions 210d and the non-dichroic regions 220d for being transmitted to the color sequence generator 300. FIG. 6 is a light path schematic diagram of a projection apparatus according to an embodiment of the disclosure.
Referring to FIG. 1A, FIG. 1B, and FIG. 6, the illumination systems 100, 100b, 100c, and 100d of the above embodiments may all be configured in a projection apparatus 700 of the embodiment, and in the following description, the illumination system 100 of FIG. 1A is, for example, configured in the projection apparatus 700 of the embodiment. The projection apparatus 700 of the embodiment includes the illumination system 100, a light valve 500, and a projection lens 600. The light valve 500 is arranged on the transmission paths of the excitation light beams 111 and the converted light beam 411 coming from the color sequence generator 300, for example, configured on the transmission paths of the excitation light beams 111 and the converted light beam 411 coming from the light uniformizing element 120, so as to convert the excitation light beams 111 and the converted light beam 411 into an image light beam 501. In the embodiment, the light valve 500 is, for example, a digital micro-mirror device (DMD) or a liquid-crystal-on-silicon (LCOS) panel. However, in other embodiments, the light valve 500 may also be a transmissive liquid crystal panel. The projection lens 600 is disposed on a transmission path of the image light beam 501 to project the image light beam 501 out of the projection apparatus 700, for example, to project the image light beam 501 on a screen to form an image. In summary, in the illumination system and the projection apparatus of the embodiments of the disclosure, the excitation light beams emitted by the excitation light source array are transmitted to the color sequence generator to form passing light beams and light beams reflected to the wavelength converter at different timings.
In this way, the structure of the wavelength converter is effectively simplified, the structural strength and the heat dissipation area of the wavelength converter are increased, the dynamic balance adjustment work hours of the wavelength converter are shortened, the light paths of the illumination system are simplified, the volume of the illumination system is reduced, the complexity of electronic control is decreased, and the costs of the illumination system are effectively lowered. The foregoing description of the preferred embodiments of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form or to the exemplary embodiments disclosed. Accordingly, the foregoing description should be regarded as illustrative rather than restrictive. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. The embodiments are chosen and described in order to best explain the principles of the invention and its best mode practical application, thereby to enable persons skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use or implementation contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents, in which all terms are meant in their broadest reasonable sense unless otherwise indicated. Therefore, the term “the invention”, “the present invention” or the like does not necessarily limit the claim scope to a specific embodiment, and the reference to particularly preferred exemplary embodiments of the invention does not imply a limitation on the invention, and no such limitation is to be inferred. The invention is limited only by the spirit and scope of the appended claims. Moreover, these claims may refer to use of “first”, “second”, etc. followed by a noun or element.
Such terms should be understood as a nomenclature and should not be construed as limiting the number of the elements modified by such nomenclature unless a specific number has been given. The abstract of the disclosure is provided to comply with the rules requiring an abstract, which will allow a searcher to quickly ascertain the subject matter of the technical disclosure of any patent issued from this disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Any advantages and benefits described may not apply to all embodiments of the invention. It should be appreciated that variations may be made in the embodiments described by persons skilled in the art without departing from the scope of the present invention as defined by the following claims. Moreover, no element or component in the present disclosure is intended to be dedicated to the public regardless of whether the element or component is explicitly recited in the following claims.
DETAILED DESCRIPTION Before describing several exemplary embodiments of the disclosure, it is to be understood that the disclosure is not limited to the details of construction or process steps set forth in the following description. The disclosure is capable of other embodiments and of being practiced or carried out in various ways. The term “horizontal” as used herein is defined as a plane parallel to the plane or surface of a mask blank, regardless of its orientation. The term “vertical” refers to a direction perpendicular to the horizontal as just defined. Terms such as “above”, “below”, “bottom”, “top”, “side” (as in “sidewall”), “higher”, “lower”, “upper”, “over”, and “under” are defined with respect to the horizontal plane, as shown in the figures. The term “on” indicates that there is direct contact between elements. The term “directly on” indicates that there is direct contact between elements with no intervening elements. As used in this specification and the appended claims, the terms “precursor”, “reactant”, “reactive gas” and the like are used interchangeably to refer to any gaseous species that reacts with the substrate surface. Those skilled in the art will understand that the use of ordinals such as “first” and “second” to describe process regions does not imply a specific location within the processing chamber, or an order of exposure within the processing chamber. As used in this specification and the appended claims, the term “substrate” refers to a surface, or portion of a surface, upon which a process acts. It will also be understood by those skilled in the art that reference to a substrate is to only a portion of the substrate, unless the context clearly indicates otherwise. Additionally, in some embodiments, reference to depositing on a substrate includes depositing on both a bare substrate and a substrate with one or more films or features deposited or formed thereon.
In specific embodiments, a substrate is an EUV mask blank or an EUV reticle blank. Thus, the phrases “EUV mask blank” and “EUV reticle blank” may refer to a surface, or portion of a surface, of an EUV mask blank or EUV reticle blank, upon which a process acts. In some embodiments, reference to depositing on an EUV mask blank or EUV reticle blank includes depositing on both a bare EUV mask blank or EUV reticle blank and an EUV mask blank or EUV reticle blank with one or more films or features deposited or formed thereon. Referring now to FIG. 1, an exemplary embodiment of an extreme ultraviolet lithography system 100 is shown. The extreme ultraviolet lithography system 100 includes an extreme ultraviolet light source 102 for producing extreme ultraviolet light 112, a set of reflective elements, and a target substrate 110. The reflective elements include a condenser 104, an EUV reflective mask 106, an optical reduction assembly 108, a mask blank, a mirror, or a combination thereof. The extreme ultraviolet light source 102 generates the extreme ultraviolet light 112. The extreme ultraviolet light 112 is electromagnetic radiation having a wavelength in a range of 5 to 50 nanometers (nm). For example, the extreme ultraviolet light source 102 includes a laser, a laser-produced plasma, a discharge-produced plasma, a free-electron laser, synchrotron radiation, or a combination thereof. The extreme ultraviolet light source 102 generates the extreme ultraviolet light 112 having a variety of characteristics. The extreme ultraviolet light source 102 produces broadband extreme ultraviolet radiation over a range of wavelengths. For example, the extreme ultraviolet light source 102 generates the extreme ultraviolet light 112 having wavelengths ranging from 5 to 50 nm, from 10 to 25 nm, or from 12.5 nm to 14.5 nm, for example 13.5 nm. In one or more embodiments, the extreme ultraviolet light source 102 produces the extreme ultraviolet light 112 having a narrow bandwidth.
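As a side calculation (not in the text), the 13.5 nm wavelength named above corresponds to a photon energy near 92 eV, which is why EUV optics must be reflective rather than refractive. The constants are standard CODATA values:

```python
# Photon energy E = h*c / wavelength, with h in eV*s so the result is in eV.
PLANCK_EV_S = 4.135667696e-15   # Planck constant (CODATA), eV*s
C_M_S = 2.99792458e8            # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength given in nanometers."""
    return PLANCK_EV_S * C_M_S / (wavelength_nm * 1e-9)

print(round(photon_energy_ev(13.5), 1))  # 91.8
```

Across the 5 to 50 nm range cited for EUV light 112, photon energies run from roughly 25 eV up to roughly 250 eV.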
For example, the extreme ultraviolet light source 102 generates the extreme ultraviolet light 112 at 13.5 nm. The center of the wavelength peak is 13.5 nm. The condenser 104 is an optical unit for reflecting and focusing the extreme ultraviolet light 112. The condenser 104 reflects and concentrates the extreme ultraviolet light 112 from the extreme ultraviolet light source 102 to illuminate the EUV reflective mask 106. Although the condenser 104 is shown as a single element, it is understood that the condenser 104 of some embodiments includes one or more reflective elements such as concave mirrors, convex mirrors, flat mirrors, or a combination thereof, for reflecting and concentrating the extreme ultraviolet light 112. For example, the condenser 104 of some embodiments is a single concave mirror or an optical assembly having convex, concave, and flat optical elements. The EUV reflective mask 106 is an extreme ultraviolet reflective element having a mask pattern 114. The EUV reflective mask 106 creates a lithographic pattern to form a circuitry layout to be formed on the target substrate 110. The EUV reflective mask 106 reflects the extreme ultraviolet light 112. The mask pattern 114 defines a portion of a circuitry layout. The optical reduction assembly 108 is an optical unit for reducing the image of the mask pattern 114. The reflection of the extreme ultraviolet light 112 from the EUV reflective mask 106 is reduced by the optical reduction assembly 108 and reflected onto the target substrate 110. The optical reduction assembly 108 of some embodiments includes mirrors and other optical elements to reduce the size of the image of the mask pattern 114. For example, the optical reduction assembly 108 of some embodiments includes concave mirrors for reflecting and focusing the extreme ultraviolet light 112. The optical reduction assembly 108 reduces the size of the image of the mask pattern 114 on the target substrate 110.
For example, the mask pattern 114 of some embodiments is imaged at a 4:1 ratio by the optical reduction assembly 108 on the target substrate 110 to form the circuitry represented by the mask pattern 114 on the target substrate 110. The extreme ultraviolet light 112 of some embodiments scans the EUV reflective mask 106 synchronously with the target substrate 110 to form the mask pattern 114 on the target substrate 110. Referring now to FIG. 2, an embodiment of a conventional substrate processing platform 200 is shown. The conventional substrate or EUV mask blank processing platform 200 includes a factory interface 202 into which the source substrates 203, 205 are loaded and from which substrates that have been processed in the EUV mask blank processing platform 200 are unloaded. Adjacent the factory interface 202 are substrate transport boxes 204 and other components to transfer a substrate from an ambient factory environment outside the substrate processing platform 200 to vacuum inside the substrate processing platform 200. The factory interface is within an enclosure that is under slight vacuum pressure to keep the factory interface in a controlled environment. The ambient factory environment is outside the factory interface. The substrate handling vacuum chamber 208 contains two vacuum chambers, a first vacuum chamber 210 and a second vacuum chamber 212. The first vacuum chamber 210 includes a first substrate handling system 214 and the second vacuum chamber 212 includes a second substrate handling system 216. The substrate handling vacuum chamber 208 has a plurality of ports around its periphery for attachment of various other systems. The first vacuum chamber 210 has a degas system 218, a first physical vapor deposition system 220, a second physical vapor deposition system 222, and a pre-clean system 224. The degas system 218 is for thermally desorbing moisture from the substrates. The pre-clean system 224 is for cleaning the surfaces of the substrates, mask blanks, mirrors, or other optical components.
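The 4:1 reduction described above is simple arithmetic: a feature drawn on the mask prints at one quarter of its size on the target substrate. The example feature size below is hypothetical:

```python
# Demagnification of a mask feature by the optical reduction assembly 108.
def wafer_feature_size(mask_feature_nm, reduction_ratio=4.0):
    """Printed feature size on the target substrate for a given mask feature."""
    return mask_feature_nm / reduction_ratio

# A (hypothetical) 52 nm feature on the EUV mask prints as a 13 nm feature.
print(wafer_feature_size(52.0))  # 13.0
```

Working at mask scale relaxes the fabrication and defect tolerances on the mask itself relative to the printed pattern.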
The physical vapor deposition systems, such as the first physical vapor deposition system 220 and the second physical vapor deposition system 222, are used to form thin films of conductive materials on the source substrates 203. For example, the physical vapor deposition systems include vacuum deposition systems such as magnetron sputtering systems, ion sputtering systems, pulsed laser deposition, cathode arc deposition, or a combination thereof. The physical vapor deposition systems, such as the magnetron sputtering system, form thin layers on the source substrates 203, including layers of silicon, metals, alloys, compounds, or a combination thereof. The physical vapor deposition system forms reflective layers, capping layers, and absorber layers. For example, the physical vapor deposition systems form layers of silicon, molybdenum, titanium oxide, titanium dioxide, ruthenium oxide, niobium oxide, ruthenium tungsten, ruthenium molybdenum, ruthenium niobium, chromium, tantalum, nitrides, compounds, or a combination thereof. Although some compounds are described as an oxide, it is understood that the compounds include oxides, dioxides, atomic mixtures having oxygen atoms, or a combination thereof. The second vacuum chamber 212 has a first multi-cathode source 226, a chemical vapor deposition system 228, a cure chamber 230, and an ultra-smooth deposition chamber 232 connected to it. For example, the chemical vapor deposition system 228 includes a flowable chemical vapor deposition system (FCVD), a plasma-assisted chemical vapor deposition (CVD) system, an aerosol-assisted CVD system, a hot filament CVD system, or a similar system. In another example, the chemical vapor deposition system 228, the cure chamber 230, and the ultra-smooth deposition chamber 232 are in a separate system from the conventional EUV mask blank processing platform 200. The chemical vapor deposition system 228 forms thin films of material on the source substrates 203.
For example, the chemical vapor deposition system 228 is used to form layers of materials on the source substrates 203 including mono-crystalline layers, polycrystalline layers, amorphous layers, epitaxial layers, or a combination thereof. The chemical vapor deposition system 228 of some embodiments forms layers of silicon, silicon oxides, silicon oxycarbide, carbon, tungsten, silicon carbide, silicon nitride, titanium nitride, metals, alloys, and other materials suitable for chemical vapor deposition. For example, the chemical vapor deposition system forms planarization layers. The first substrate handling system 214 is capable of moving the source substrates 203 between an atmospheric handling system and the various systems around the periphery of the first vacuum chamber 210 in a continuous vacuum. The second substrate handling system 216 is capable of moving the source substrates 203 around the second vacuum chamber 212 while maintaining the source substrates 203 in a continuous vacuum. The conventional EUV mask blank processing platform 200 transfers the source substrates 203 between the first substrate handling system 214 and the second substrate handling system 216 in a continuous vacuum. A challenge in the manufacture of substrates that have material layers deposited on both sides, for example, EUV mask blanks, in a multi-chamber substrate processing platform is to achieve the deposition and anneal of multiple layers with minimal particulate defects. One or more embodiments of the instant disclosure provide a multi-chamber substrate processing platform or system to deposit all layers of substrates such as EUV mask blanks in one contained system with minimal robotic transfers, a high payload robot, a slit valve design to reduce defects, and a substrate flipping fixture in a factory interface to enable in-situ deposition on both sides of substrates such as EUV mask blanks.
The multi-chamber substrate processing platforms and methods described herein according to one or more embodiments accommodate the formation of a diverse number of layers such as, for example, reflective multilayer stacks comprising alternating bilayers, absorber layer(s), capping layers, and backside layers. In addition, the multi-chamber substrate processing platforms and methods described herein according to one or more embodiments accommodate annealing chamber(s) and PVD chambers for the formation of advanced absorbers for both EUV mask blanks and conventional semiconductor wafers. One or more embodiments provide a multi-chamber substrate processing platform configured to manufacture EUV mask blanks with reduced defects, high yield, and low cost. The multi-chamber substrate processing platform can manufacture an EUV mask blank with just two robotic transfers and one substrate flipping fixture to meet all deposition and anneal requirements of substrate processes that require anneal and coating on a front side and a back side of a substrate. In addition, the multi-chamber substrate processing platform can process various substrates differing in material type, weight, and geometry (e.g., 300 mm wafers and 152 mm×152 mm ULE substrates for EUV mask blanks), and can incorporate a chamber required for the next node of advanced absorber development. EUV mask blanks comprise a high quality rectangular substrate (e.g., ultra-low-expansion glass) deposited with a mirror layer (40 Si/Mo bilayers), a capping layer (e.g., Ru), and absorber layer(s) (e.g., TaN) on a front side of the substrate, and a layer (e.g., CrN) on the backside of the substrate. In the current process, different tools are required to perform the steps to form these layers and anneal the substrates, which increases defect counts and lowers yield. Moreover, multiple different systems requiring a larger footprint provide an operational disadvantage.
The multi-chamber substrate processing platform described herein provides a single system having a small footprint, low defect generation, and high yield. One or more embodiments of the present disclosure advantageously provide a system or platform configured to manufacture a variety of substrates including EUV mask blanks and EUV reticle blanks with reduced defects. In some embodiments, EUV mask blanks are produced at higher yield and lower cost compared to production in a conventional EUV mask blank processing platform. One or more of these advantages are achieved with a substrate processing system or platform such as an EUV mask blank processing system or platform including two robotic transfers and one substrate flipping fixture to meet all deposition and anneal requirements. In specific embodiments, the substrate processing platform or system or EUV mask blank processing system or platform utilizes no more than two central robots, and in specific embodiments, a single central robot to effect robotic transfers. In one or more embodiments, the platform or system described herein comprises no more than one substrate flipping fixture to rotate the substrate, such as an EUV mask blank substrate, during processing in the processing system or platform. Moreover, one or more embodiments of the substrate processing system or platform are capable of being configured to process various substrates differing in material type, weight, and geometry (e.g., 300 mm wafers, 152 mm×152 mm EUV mask blanks comprising ultra-low-expansion substrates) and can incorporate chambers required for the next node of advanced EUV absorber development.
Furthermore, one or more embodiments of the present disclosure advantageously provide a substrate processing system or platform configured to perform all deposition and anneal steps for EUV mask or EUV reticle development in one system, reducing defect generation on EUV masks or EUV reticles that is caused by multiple robotic transfers or manual transfer (e.g., tool-to-tool or chamber-to-chamber). In some embodiments, a high payload vacuum transfer design and a vertical slit valve design reduce generation of defects, resulting in production of EUV masks and EUV reticles at high yields. In some embodiments, the substrate processing system or platform is configured for one or more types of substrates, for example, conventional 300 mm wafers, EUV mask blank substrates, and EUV reticle blanks. As shown in FIG. 3, an embodiment of a substrate processing system or platform 300 comprises a central transfer chamber 310, which in some embodiments is the only or sole central transfer chamber 310 in the substrate processing system or platform 300. In other words, there is no more than one central transfer chamber 310, and the single central transfer chamber 310 includes a single central robot 500 (shown in detail in FIGS. 10-14) mounted on a central robot hub 312. The central robot 500 disposed in the single central transfer chamber 310 is configured to load and unload a substrate from substrate processing chambers 362, 364, 366, 368, and 370. In the embodiment shown there are five substrate processing chambers, and as described in more detail herein, the substrate processing system or platform 300 includes no more than five substrate processing chambers. The substrate processing system or platform 300 further comprises a factory interface 302 which is disposed between the substrate processing chambers and an ambient factory environment 303 from which substrates are loaded.
In one or more embodiments, the factory interface302is configured as a dual factory interface with a substrate flipping fixture400, which as described further below, is configured to enable the substrate processing platform or system to allow deposition on both sides of any substrate (e.g., a 300 mm diameter wafer and EUV mask blanks (e.g., a 152 mm×152 mm mask blank)). In some embodiments, and as explained in further detail below, the single central robot500is configured to support a 1 kg payload, making it suitable to support and transfer EUV mask blank substrates, which are processed using an EUV mask blank carrier assembly including a carrier base and a top shield. The substrate flipping fixture400(described in more detail below) is disposed within the factory interface302and configured to rotate a substrate having a front side and a back side 180 degrees such that a layer can be deposited on both the front side and the back side of the substrate in one of the substrate processing chambers. The factory interface302further comprises a plurality of load ports304, and one of the load ports may be configured to load one type of substrate, such as 300 mm wafers, while others of the load ports may be configured to load another type of substrate, such as EUV mask blanks together with an EUV mask blank carrier assembly. Between the factory interface302and the substrate processing chambers362,364,366,368and370there are a first load lock chamber332and a second load lock chamber334, which are under vacuum conditions. The factory interface is isolated from the ambient factory environment by an enclosure348. The first load lock chamber332and the second load lock chamber334are positioned between the factory interface302and the central transfer chamber310and are configured to be an intermediate transfer space from the factory interface302to the substrate processing chambers362,364,366,368and370.
The substrate flipping fixture400, as explained in further detail below, is configured to flip or rotate substrates 180 degrees. A factory interface robot352is configured to load substrates from the ambient factory environment303into the factory interface302and onto the substrate flipping fixture400. In some embodiments, the factory interface robot352is configured to transfer the substrate between the factory interface302and the first load lock chamber332and between the factory interface302and the second load lock chamber334. The factory interface robot352is further configured in some embodiments to load and unload substrates to and from a first build module fixture450and a second build module fixture451, which are utilized in the processing of EUV mask blank substrates and EUV mask blank carrier assemblies as described further with respect toFIG.7. The central transfer chamber310is configured as a substrate handling vacuum chamber through which all transfers between the processing chambers362,364,366,368and370occur, providing a platform having a small footprint and reduced robotic transfers by having a single central robot500for all transfers between the substrate processing chambers362,364,366,368and370surrounding the central transfer chamber310, which are all under vacuum conditions. The central transfer chamber310includes a centrally located central robot500(described further with respect toFIGS.10-14) positioned on a central robot hub312. As explained in further detail below, the central robot500is configured to load and transfer substrates along a predetermined path. In some embodiments, the central robot500is configured to load and transfer substrates in a clockwise path. In some embodiments, the central robot is configured to load and transfer substrates in a counter-clockwise path. In some embodiments, the central robot500is configured to load and transfer substrates between the two or more processing chambers362,364,366,368and370.
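The arrangement described above, in which every processing chamber communicates only with the single central transfer chamber, is a star topology: any chamber-to-chamber transfer is one move by the single central robot through the hub. The following sketch illustrates this under that assumption; the chamber names and function are illustrative only and not taken from the disclosure.

```python
# Illustrative star topology: each processing chamber connects only to
# the single central transfer chamber ("hub"), so every chamber-to-chamber
# transfer passes through the hub via the single central robot.
CHAMBERS = {"362", "364", "366", "368", "370"}

def transfer_path(src, dst):
    """Return the path a substrate follows between two processing chambers."""
    if src not in CHAMBERS or dst not in CHAMBERS:
        raise ValueError("unknown chamber")
    return [src] if src == dst else [src, "hub", dst]

print(transfer_path("362", "368"))  # ['362', 'hub', '368']
```

Because all paths have at most one intermediate stop, the platform needs no chamber-to-chamber corridors, which is consistent with the small footprint and reduced robotic transfers noted above.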
The central transfer chamber310has a plurality of ports380(seeFIG.12) around its periphery in communication with each of the two or more processing chambers362,364,366,368and370or components of various other systems. In some embodiments, the processing chambers362,364,366,368and370include multi-cathode physical vapor deposition (PVD) chambers, substrate annealing chambers, degas chambers and pre-clean chambers. The degas chamber is for thermally desorbing moisture from the substrates, and the pre-clean chamber is for cleaning the surfaces of the substrates. The two or more processing chambers362,364,366,368and370are positioned around the periphery of the central transfer chamber310. As shown inFIG.3, a first multi-cathode PVD chamber362is in communication with the central transfer chamber310and adjacent to the first load lock chamber332. A first substrate annealing chamber364is in communication with the central transfer chamber and adjacent to the first multi-cathode PVD chamber362. In some embodiments, a second multi-cathode PVD chamber366is positioned adjacent to the first substrate annealing chamber364and a second substrate annealing chamber368is positioned adjacent to the second multi-cathode PVD chamber366. The second multi-cathode PVD chamber366and the second substrate annealing chamber368are each in communication with the central transfer chamber310through a port380. In some embodiments, the second substrate annealing chamber368is adjacent to the second load lock chamber334. In some embodiments, the second substrate annealing chamber368is adjacent to a third multi-cathode PVD chamber370, the third multi-cathode PVD chamber being in communication with the central transfer chamber310and positioned adjacent to the second load lock chamber334. In some embodiments, one or more of the annealing chambers (364,368) is a multi-substrate annealing chamber configured to anneal one or more substrates.
In some embodiments, one or more of the multi-cathode PVD chambers (362,366,370) is a multi-cathode PVD chamber configured to deposit one or more different layers on one or more substrates. In some embodiments, one or more of the annealing chambers (364,368) and one or more of the multi-cathode PVD chambers (362,366,370) comprise one or more regions with a substrate support surface configured to receive one or more substrates from the central robot500. In some embodiments, the five substrate processing chambers362,364,366,368,370comprise at least two multi-cathode physical vapor deposition (PVD) chambers, a single cathode PVD chamber and at least one substrate annealing chamber, and at least a first multi-cathode PVD chamber configured to deposit different layers. In some embodiments, the processing chambers362,364,366,368include absorber (TaN), backside (CrN) and anneal chambers. In some embodiments, one or more of the multi-cathode PVD chambers (362,366,370) include an advanced absorber multi-cathode deposition chamber. In some embodiments, one or more of the multi-cathode PVD chambers (362,366,370) are configured as a PVD chamber which includes a plurality of cathode assemblies. The plurality of cathode assemblies are positioned above shield holes of an upper shield. Each of the plurality of cathode assemblies includes one or more targets and is configured to deposit material from the one or more targets onto a reticle or substrate. In some embodiments, the PVD chamber is also provided with a rotating pedestal. In such embodiments, the PVD chamber is configured to alternately sputter material from the one or more targets without rotating the upper shield. In some embodiments, the one or more targets comprise a molybdenum target and a silicon target. Plasma sputtering may be accomplished using either DC sputtering or RF sputtering in the PVD chamber.
In some embodiments, the process chamber includes a feed structure for coupling RF and DC energy to the targets associated with each cathode assembly. FIGS.4and5show an exemplary embodiment of the port380that provides an interface between the central transfer chamber310and each of the substrate processing chambers362,364,366,368and370, and in the embodiments shown there is a port380for each of the substrate processing chambers. In the embodiment shown, the port380is configured as a vertical slit valve381.FIG.4illustrates a perspective view of the vertical slit valve381andFIG.5illustrates a cross-sectional view of the vertical slit valve381. As shown inFIGS.4and5, a chamber wall311of the central transfer chamber310has a vertical opening382through which a substrate passes. As indicated by the arrow, a door384travels in a perpendicular direction relative to the vertical opening382. Stated differently, the door384travels parallel to the chamber wall311of the central transfer chamber310. Because the door384travels a shorter distance relative to conventional ports, less impact is created when the door384is fully closed, and thus particle generation is minimized. FIG.6illustrates a perspective view of the substrate flipping fixture400, which is positioned within the enclosure348of the factory interface302. As previously described, a layer of material is deposited on a first side (e.g., a front side) of a substrate such as an EUV mask blank substrate in one of the substrate processing chambers362,364,366,368,370, and then the substrate is removed by the central robot500to the central transfer chamber310and then to the factory interface302, where the substrate is flipped or rotated 180 degrees so that the front side faces downward and the bottom side faces upward.
Integration of a substrate flipping fixture400inside the factory interface302reduces unnecessary additional handling and transport of the substrate to a different system, which aids in reducing particles, as every transfer during transport and processing is a potential source of particle generation. In some embodiments, the substrate flipping fixture400is mounted above a build module fixture located within the factory interface302. The substrate flipping fixture400comprises a base402supporting a vertical slide404and a motor406positioned above the vertical slide404. A pair of gripping elements408,409are configured to grip edges616eof the substrate616(e.g., an EUV mask blank) between the front side616fand the backside616bof the substrate. The gripping elements408,409may be in the form of spaced apart arms as shown, which may be closed closer together to grip or hold the substrate616and opened further apart to release the substrate616. The gripping elements408,409may comprise gripping features, which may comprise blocks of material such as rubber or plastic. The gripping elements408,409are configured to rotate 180 degrees in the direction of arrows410,412driven by motor406to cause the front side616fof the substrate616to rotate from a first position in which the front side616ffaces upward to a second position in which the front side616ffaces downward and the backside616bfaces upward. The substrate flipping fixture400further comprises a moveable support fixture422including two support arms420spaced apart and configured to support the substrate616during the substrate flipping or rotation process. The moveable support fixture422travels along the vertical slide404in an up and down direction, and in some embodiments is powered by motor406.
The operation cycle of the substrate flipping fixture is initiated when the substrate flipping fixture400receives substrate616, which in the embodiment shown is an EUV mask blank substrate, from the factory interface robot352, as described further below. Referring now toFIGS.7-9, various stages of a substrate flipping or rotation process are shown.FIG.7illustrates a perspective view of a substrate flipping fixture400positioned adjacent the first build module fixture450in the factory interface302. It will be appreciated that inFIG.7, the first build module fixture450is shown in a position to better view the components of a mask blank carrier assembly460comprising a carrier base462and a top shield464having an opening466therein configured to receive an EUV mask blank substrate465as shown inFIG.15(e.g., an EUV mask blank having length×width dimensions of 152 mm×152 mm). The build module fixture450comprises a main support body470and a plurality of lift pins468(only one is visible inFIG.7). The build module fixture450can be mounted to a frame or other suitable structure within the factory interface so that the build module fixture450is mounted in a position below the moveable support fixture422. The lift pins468can be actuated by a pneumatic, hydraulic or motor (e.g., servo motor) actuator, which causes the top shield464to be lifted from the carrier base462, exposing a substrate616supported by the EUV mask blank carrier assembly and contained within the opening. Lifting the top shield464exposes the substrate616so that the substrate can be accessed by a robot blade to move the substrate to the moveable support fixture422during a flipping or rotation process. FIG.8illustrates a perspective view of a factory interface robot blade353attached to the factory interface robot352(ofFIG.3).
The factory interface robot blade353is moveable up and down as indicated by arrow692and in the direction indicated by arrow690to allow the factory interface robot352to lift and lower the substrate616to the various components of the substrate flipping fixture400, namely the moveable support fixture422and the pair of gripping elements408and409. As shown inFIG.8, the front side616fof the substrate616is facing in the upward orientation and the backside616bis facing in the downward orientation. FIG.9illustrates the substrate flipping fixture400with the pair of gripping elements408,409holding the substrate616in a raised position above the moveable support fixture422after the substrate shown inFIG.8has been rotated 180 degrees. The backside616bis now facing in an upward orientation and the front side616fis facing in a downward orientation. The moveable support fixture422can now be moved closer to the pair of gripping elements408,409, and when the standoffs412a,412b,412c, and412dcontact the substrate616, the gripping elements408,409are moved apart. Then, the factory interface robot blade353is used to remove the substrate and place the substrate back in one of the load lock chambers332and334. Next, the central robot500moves the substrate to a substrate processing chamber so that a material layer can be deposited on the backside616bof the substrate. In operation, the factory interface robot352positions substrate616in the form of an EUV mask blank on standoffs412a,412b,412cand412don the support arms420of the moveable support fixture422as shown inFIG.8. The moveable support fixture422is then moved upward so that the pair of gripping elements408,409are in position to grip the edges616eof the substrate. The pair of gripping elements408,409may be moved apart during the operation in which the moveable support fixture is moved closer to the pair of gripping elements408,409, and the pair of gripping elements408,409can be moved closer together to grip the substrate616at edges616e.
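The grip, rotate, and release sequence described above can be modeled as a small state machine. The sketch below is purely illustrative: the class and method names are hypothetical and do not come from the disclosure; it only captures the ordering constraint that the substrate must be gripped before rotation and that the orientation toggles with each 180-degree flip.

```python
# Illustrative state machine for the flip sequence (hypothetical names).
class FlipFixture:
    def __init__(self):
        self.orientation = "front_up"  # front side 616f initially faces upward
        self.gripped = False

    def grip(self):
        # Gripping elements 408, 409 close on the substrate edges 616e.
        self.gripped = True

    def rotate_180(self):
        # Motor 406 rotates the gripping elements 180 degrees; the
        # substrate must be held before rotation can occur.
        if not self.gripped:
            raise RuntimeError("substrate not gripped")
        self.orientation = ("back_up" if self.orientation == "front_up"
                            else "front_up")

    def release(self):
        # Support fixture 422 raises its standoffs to the substrate,
        # then the gripping elements open.
        self.gripped = False

fixture = FlipFixture()
fixture.grip()
fixture.rotate_180()
fixture.release()
print(fixture.orientation)  # back_up: backside 616b now faces upward
```

A second flip would return the orientation to front-side-up, matching the double-sided deposition flow in which each side is processed in turn.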
The gripping elements408,409may each have gripping element sensors432a,432b, which communicate with the motor406to allow the motor to grip and release the substrate. The gripping element sensors432a,432bare in communication with a controller351, which is configured to execute instructions to control operation of the substrate processing system or platform300, including the central robot500and the factory interface robot352. Support arm sensors432c,432dare also in communication with the controller to control upward and downward movement of the moveable support fixture422. FIGS.10through15illustrate an exemplary embodiment of a central robot500. Conventional robotic arms used in semiconductor substrate processing chambers to handle 300 mm wafers are configured for one or more individual transfer tasks. Furthermore, conventional robotic arms are limited by the weight payload of the arm and the reach of the arm. Where greater payloads and arm reach are required in specific applications, premature wear of the robot, particle generation and droop can occur. The described central robot500advantageously provides for supporting loads of 1 kg with a reach of 39.5 inches while reducing droop and producing fewer particles and contaminants. As previously described, the central transfer chamber310(ofFIG.3) includes a centrally located central robot500positioned on the central robot hub312.FIG.10illustrates a perspective view of the central robot500andFIG.11illustrates a top view of the central robot500. As shown inFIGS.10and11, the central robot500comprises the central robot hub312, a robot arm assembly510, a base end effector550and a robot blade570. The robot arm assembly extends and retracts in an x-y plane, the x-y plane traversing the z-plane. The central robot hub312has a substantially cylindrical body502, a top surface504, and a central axis501traversing the top surface504in a z-plane.
The robot arm assembly510comprises a first arm512, a second arm522, a first linkage532and a second linkage542. In some embodiments, the central robot hub312further comprises a first rotating disk506and a second rotating disk508. The first rotating disk506is adjacent to the top surface504, and the second rotating disk508is adjacent to the first rotating disk506. The first and second rotating disks (506,508) are configured to rotate in opposite directions. The first rotating disk506has a thickness and an outside surface, and the second rotating disk508has a thickness and an outside surface. The central robot hub312, first rotating disk506and second rotating disk508have a common center axis501. As shown inFIG.12, the first arm512has a proximal end514and a distal end516, the proximal end514mounted to the robot hub (not shown). As used herein, a proximal direction is defined by a position closest to the robot hub and a distal direction is defined by a position furthest from the robot hub. Likewise, the second arm522has a proximal end524and a distal end526, the proximal end524mounted to the robot hub (not shown). The first linkage532has a proximal end534and a distal end536, the proximal end534connected to the distal end516of the first arm512. The second linkage542has a proximal end544and a distal end546, the proximal end544connected to the distal end526of the second arm522. In some embodiments, the proximal end514of the first arm512is mounted to the outside surface of the first rotating disk506and the proximal end524of the second arm522is mounted to the outside surface of the second rotating disk508. The base end effector assembly550has a medial end552and a lateral end554, the medial end552connected to the distal end536of the first linkage532and the lateral end554connected to the distal end546of the second linkage542. The distal portion576of the robot blade is configured to support a reticle or EUV mask blank substrate.
In some embodiments, the base end effector550comprises a flexure plate556and a cover558, the flexure plate and cover configured to secure the robot blade. As best shown inFIGS.11and14, the robot blade570extends from the end effector assembly. The robot blade570has a top surface580and a bottom surface (not shown) defining a thickness. The robot blade570comprises a proximal portion572having a proximal end574attached to the end effector assembly and a distal portion576having a distal end578. The distal portion576defines a region where a reticle or substrate616is seated. In some embodiments, the distal portion576comprises a plurality of ribs582extending along the distal portion576. In some embodiments, the distal portion576has a notch584. As best shown inFIG.14, the notch584has a peak586located a distance D1from the distal end578. In some embodiments, the distance D1of the peak586is greater than 12 inches. FIG.11illustrates the central robot500in a fully retracted position.FIG.13Aillustrates the central robot in a fully extended position, the central robot500having a reticle or substrate616seated on the peak586of notch584of the robot blade570.FIG.13Billustrates the central robot in the fully extended position. A reach R of the central robot500is defined by the distance between the common center axis501of the central robot hub312and the peak586of the notch584, the reach extending in the x-y plane. Where a reticle or substrate616is seated on the peak586, a center axis91of the reticle or substrate616is positioned over the peak586. In some embodiments, the maximum reach R of the central robot500in the fully extended position is greater than 39 inches. In some embodiments, the minimum reach R of the central robot500in the fully retracted position is 12 inches. A sweep diameter of the central robot500is defined by the common center axis501of the central robot hub312to the furthest point of the central robot500when the central robot500is in the fully retracted position. 
As shown inFIG.11, in the present embodiment, due to the length of the robot blade570, the furthest point is from the common center axis501of the central robot hub312to the distal end578of the robot blade570. In the present embodiment, the sweep diameter is less than 37.4 inches. In some embodiments with a shorter robot blade570, the furthest point is from the common center axis501to the distal ends526,516of the first and second arms512,522. FIG.15illustrates a cross-sectional view of the robot blade570having a reticle or substrate616seated on the robot blade570, the robot blade570partially inserted into one of the plurality of ports380of the central transfer chamber310. Because the weight of the reticle or substrate616can be 1 kg and the maximum reach of the central robot500in the fully extended position is greater than 39 inches, droop of the central robot500in the z-x plane can affect the passage of the robot blade570inserted into one of the plurality of ports380of the central transfer chamber310. In some embodiments, the distal end578of the robot blade570deflects by less than 0.1975 inches in the z-direction relative to the common central axis501of the central robot hub312over the maximum reach of the fully extended central robot500while the central robot500is under a load of 1 kg. A droop ratio of the central robot500is defined by deflection of the distal end578in the z-direction relative to the common central axis501of the central robot hub312over a maximum reach of the fully extended central robot500, the central robot500under a load of 1 kg. In some embodiments, the droop ratio is 0.005. In some embodiments, the base end effector (not shown) deflects by less than 0.0165 inches relative to the common central axis501of the central robot hub312over a maximum reach of the fully extended central robot500, the central robot500under a load of 1 kg.
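The droop ratio of 0.005 quoted above follows directly from the stated deflection and reach figures, as a quick arithmetic check shows (both values are taken from this paragraph):

```python
# Droop ratio = blade-tip z-deflection / maximum reach, per the
# definition given in the text, evaluated at the quoted limits.
max_reach_in = 39.5         # maximum reach of the fully extended robot
max_deflection_in = 0.1975  # distal-end z-deflection under a 1 kg load

droop_ratio = max_deflection_in / max_reach_in
print(round(droop_ratio, 6))  # 0.005
```

In other words, the quoted 0.1975-inch deflection limit is exactly the 0.005 droop ratio applied over the 39.5-inch reach.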
According to one or more embodiments, a central robot500is provided with a robot arm and blade designed for minimum deflection and longer reach. The robot arm linkage is designed to support the blade, which carries loads of about one kilogram. The new robot arm linkage is designed so that fewer particles are generated in use and droop is reduced. A motion profile, in particular the acceleration and deceleration of the robot, is tuned such that the motion of the assembly has less vibration and generates fewer particles. With the design of the new linkage and tuning with the new motion profile, the modified robot is capable of handling a higher payload and longer reach with minimum particle generation and droop. Handling of heavier EUV mask blank substrates and carrier assemblies with low particle generation during transfer of the EUV mask blank substrates is beneficial for EUV mask blank manufacturing. In some embodiments, the controller351is configured to execute instructions to cause the substrate processing platform to load a substrate from the factory interface to a first load lock chamber and to a first multi-cathode PVD chamber, deposit a layer on a front side of the substrate, remove the substrate from the first multi-cathode PVD chamber and load the substrate in one of the first load lock chamber and the second load lock chamber and then transfer the substrate back to the factory interface, rotate the substrate 180 degrees to provide a rotated substrate, transfer the rotated substrate to one of the first load lock chamber and the second load lock chamber, load the rotated substrate to one of a first multi-cathode PVD chamber, a second multi-cathode PVD chamber and a single cathode PVD chamber, and deposit a layer on the back side of the rotated substrate. There may be a single controller351as shown or multiple controllers.
When there is more than one controller, each of the controllers is in communication with each of the other controllers to control the overall functions of the substrate processing system or platform300. For example, when multiple controllers are utilized, a primary control processor is coupled to and in communication with each of the other controllers to control the system. The controller is one of any form of general-purpose computer processor, microcontroller, microprocessor, etc., that can be used in an industrial setting for controlling various chambers and sub-processors. As used herein, "in communication" means that the controller can send and receive signals via a hard-wired communication line or wirelessly. Each controller can comprise a processor, a memory coupled to the processor, input/output devices coupled to the processor, and support circuits to provide communication between the different electronic components. The memory includes one or more of transitory memory (e.g., random access memory) and non-transitory memory (e.g., storage), and the memory of the processor may be one or more of readily available memory such as random access memory (RAM), read-only memory (ROM), floppy disk, hard disk, or any other form of digital storage, local or remote. The memory394can retain an instruction set that is operable by the processor392to control parameters and components of the system. The support circuits are coupled to the processor for supporting the processor in a conventional manner. Circuits may include, for example, cache, power supplies, clock circuits, input/output circuitry, subsystems, and the like. The methods described herein may generally be stored in the memory as a software routine that, when executed by the processor, causes the process chamber to perform processes of the present disclosure.
The software routine may also be stored and/or executed by a second processor that is remotely located from the hardware being controlled by the processor. In one or more embodiments, some or all of the methods of the present disclosure are controlled in hardware. As such, in some embodiments, the processes are implemented by software and executed using a computer system, in hardware as, e.g., an application specific integrated circuit or other type of hardware implementation, or as a combination of software and hardware. The software routine, when executed by the processor, transforms the general purpose computer into a specific purpose computer (controller) that controls the chamber operation such that the processes are performed. In some embodiments, the controller351is configured to control the factory interface302, the processing chambers362,364,366,368,370and rotation of the central robot500positioned within the central transfer chamber310. Referring toFIG.16, an embodiment of an extreme ultraviolet reflective element602is shown. In one or more embodiments, the extreme ultraviolet reflective element602is an EUV mask blank or an EUV mirror. The EUV mask blank and the EUV mirror are structures for reflecting the extreme ultraviolet light112ofFIG.1. The extreme ultraviolet reflective element602includes a substrate604, a reflective multilayer stack606of reflective layers, and a capping layer608. The reflective multilayer stack606reflects EUV radiation, for example at a wavelength of 13.5 nm. In one or more embodiments, the extreme ultraviolet mirror is used to form reflecting structures for use in the condenser104ofFIG.1or the optical reduction assembly108ofFIG.1. The extreme ultraviolet reflective element602, which in some embodiments is an EUV mask blank, includes the substrate604, the reflective multilayer stack606of reflective layers comprising alternating layers of silicon and molybdenum, and an optional capping layer608.
The extreme ultraviolet reflective element602in some embodiments is an EUV mask blank, which is used to form the reflective mask106ofFIG.1by patterning. In the following sections, the term EUV mask blank is used interchangeably with the term extreme ultraviolet mirror for simplicity. The EUV mask blank is an optically flat structure used for forming the reflective mask106having the mask pattern114. In one or more embodiments, the reflective surface of the EUV mask blank forms a flat focal plane for reflecting the incident light, such as the extreme ultraviolet light112ofFIG.1. The substrate604is an element for providing structural support to the extreme ultraviolet reflective element602. In one or more embodiments, the substrate604is made from a material having a low coefficient of thermal expansion (CTE) to provide stability during temperature changes. In one or more embodiments, the substrate604has properties such as stability against mechanical cycling, thermal cycling, crystal formation, or a combination thereof. The substrate604according to one or more embodiments is formed from a material such as silicon, glass, oxides, ceramics, glass ceramics, or a combination thereof. The reflective multilayer stack606is a structure that is reflective to the extreme ultraviolet light112. The reflective multilayer stack606includes alternating reflective layers of a first reflective layer612and a second reflective layer614. The first reflective layer612and the second reflective layer614form a reflective pair618. In a non-limiting embodiment, the multilayer stack606includes a range of 20-60 of the reflective pairs618for a total of up to 120 reflective layers. The first reflective layer612and the second reflective layer614according to one or more embodiments are formed from a variety of materials. In an embodiment, the first reflective layer612and the second reflective layer614are formed from silicon and molybdenum, respectively.
The first reflective layer612and the second reflective layer614of some embodiments have a variety of structures. In an embodiment, both the first reflective layer612and the second reflective layer614are formed with a single layer, multiple layers, a divided layer structure, non-uniform structures, or a combination thereof. Because most materials absorb light at extreme ultraviolet wavelengths, the optical elements used are reflective instead of transmissive, as used in other lithography systems. The reflective multilayer stack606forms a reflective structure by having alternating thin layers of materials with different optical properties to create a Bragg reflector or mirror. The reflective multilayer stack606according to one or more embodiments is formed in a variety of ways. In an embodiment, the first reflective layer612and the second reflective layer614are formed with magnetron sputtering, ion sputtering systems, pulsed laser deposition, cathode arc deposition, or a combination thereof. In an illustrative embodiment, the reflective multilayer stack606is formed using a physical vapor deposition technique, such as magnetron sputtering. In an embodiment, the first reflective layer612and the second reflective layer614of the reflective multilayer stack606have the characteristics of being formed by the magnetron sputtering technique, including precise thickness, low roughness, and clean interfaces between the layers. In an embodiment, the first reflective layer612and the second reflective layer614of the reflective multilayer stack606have the characteristics of being formed by physical vapor deposition, including precise thickness, low roughness, and clean interfaces between the layers. The physical dimensions of the layers of the reflective multilayer stack606formed using the physical vapor deposition technique are precisely controlled to increase reflectivity. In an embodiment, the first reflective layer612, such as a layer of silicon, has a thickness of 4.1 nm.
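As a rough consistency check on the Bragg-reflector behavior described above, the bilayer period of the stack (the 4.1 nm silicon layer cited here together with the 2.8 nm molybdenum layer described in the disclosure) can be related to the peak wavelength. The 40-pair count and the cos(theta) ≈ 1 near-normal-incidence simplification below are illustrative assumptions, not values from the disclosure; refraction corrections shift the actual peak toward 13.5 nm.

```python
# Rough Bragg-period arithmetic for the Mo/Si stack (illustrative only).
si_nm, mo_nm = 4.1, 2.8    # per-layer thicknesses quoted in the text
period_nm = si_nm + mo_nm  # bilayer period d of one reflective pair

pairs = 40                 # assumed; the text cites a 20-60 pair range
total_layers = 2 * pairs   # 80 layers, under the 120-layer total cited

# First-order Bragg condition at near-normal incidence:
# lambda ~ 2 * d * cos(theta), with cos(theta) taken as 1 here.
approx_peak_nm = 2 * period_nm

print(round(period_nm, 1), total_layers, round(approx_peak_nm, 1))
```

The uncorrected estimate of about 13.8 nm sits close to the 13.5 nm operating wavelength, illustrating why the per-layer thicknesses must be controlled precisely: a period error shifts the reflectivity peak away from the desired wavelength.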
The second reflective layer 614, such as a layer of molybdenum, has a thickness of 2.8 nm. The thickness of the layers dictates the peak reflectivity wavelength of the extreme ultraviolet reflective element. If the thickness of the layers is incorrect, the reflectivity at the desired wavelength of 13.5 nm is reduced in some embodiments. In one or more embodiments, the capping layer 608 is a protective layer allowing the transmission of the extreme ultraviolet light 112. In an embodiment, the capping layer 608 is formed directly on the reflective multilayer stack 606. In one or more embodiments, the capping layer 608 protects the reflective multilayer stack 606 from contaminants and mechanical damage. In one embodiment, the reflective multilayer stack 606 is sensitive to contamination by oxygen, carbon, hydrocarbons, or a combination thereof. The capping layer 608 according to an embodiment interacts with the contaminants to neutralize them. Referring now to FIG. 17, a method 700 is shown. In one or more embodiments, a method 700 of processing an EUV mask blank substrate in a substrate processing platform comprises, at 710, depositing a reflective multilayer stack on a front side of an EUV mask blank substrate. At 720, the EUV mask blank is annealed. At 730, an absorber layer is deposited on the reflective multilayer stack. Optionally, a capping layer may be deposited on the reflective multilayer stack prior to depositing the absorber layer. At 740, the EUV mask blank is rotated or flipped 180 degrees. After flipping the substrate, at 750 a backside layer, e.g., CrN, is deposited on the backside of the EUV mask blank. The EUV mask blank may be annealed at 760. FIG. 18 discloses another embodiment of a method 800. At 802, the method involves using a factory interface robot to remove the EUV mask blank substrate from a factory interface and transfer the EUV mask blank substrate to a first load lock chamber.
At 803, the method involves using a central robot to transfer the EUV mask blank from the first load lock chamber through a single central transfer chamber and to a first multi-cathode PVD chamber. At 804, the method includes depositing a reflective multilayer stack comprising a plurality of bilayer pairs on a front side of the EUV mask blank substrate in the first multi-cathode PVD chamber. At 806, the central robot is used to transfer the EUV mask blank substrate to a second multi-cathode PVD chamber. At 808, an absorber layer is deposited on the front side of the EUV mask blank substrate, after depositing the reflective multilayer stack, in the second multi-cathode PVD chamber, and the EUV mask blank substrate is annealed in a first substrate annealing chamber. At 810, the method includes utilizing the central robot to transfer the EUV mask blank substrate to the first load lock chamber or a second load lock chamber. At 812, the method includes utilizing the factory interface robot to transfer the EUV mask blank substrate after annealing from the first load lock chamber or the second load lock chamber back to the factory interface. At 814, the method includes utilizing a substrate flipping fixture positioned in the factory interface to rotate the substrate 180 degrees so that the front side is facing downward. At 816, the method includes utilizing the factory interface robot to transfer the EUV mask blank to the first load lock chamber or the second load lock chamber. At 818, the method includes utilizing the central robot to transfer the EUV mask blank with the front side facing downward and a backside facing upward from the first load lock chamber or the second load lock chamber through the single central transfer chamber and to one of the first multi-cathode PVD chamber, the second multi-cathode PVD chamber and a single cathode PVD chamber, and then depositing a backside layer on the back side of the EUV mask blank substrate.
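For orientation, the handling sequence of method 800 described above can be collected into a short sketch. This is purely illustrative bookkeeping: the step numbers and chamber names restate the text, and the code is not control software for any real platform.

```python
# Sketch of the method-800 sequence described above; step numbers and
# chamber names follow the text, the code itself is illustrative only.
METHOD_800 = [
    (802, "factory interface robot: factory interface -> first load lock"),
    (803, "central robot: load lock -> central transfer -> first MC-PVD chamber"),
    (804, "deposit reflective multilayer stack (front side)"),
    (806, "central robot: transfer to second MC-PVD chamber"),
    (808, "deposit absorber layer; anneal in first substrate annealing chamber"),
    (810, "central robot: transfer to first or second load lock"),
    (812, "factory interface robot: load lock -> factory interface"),
    (814, "flipping fixture: rotate substrate 180 degrees, front side down"),
    (816, "factory interface robot: back into a load lock"),
    (818, "central robot: to a PVD chamber; deposit backside layer"),
]

# Every deposition after the flip at 814 lands on the back side, so the
# finished front-side film stack is never face-up again in a PVD chamber.
steps = [number for number, _ in METHOD_800]
assert steps == sorted(steps)  # the sequence is strictly ordered
for number, action in METHOD_800:
    print(number, action)
```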
In one or more embodiments of the method, the plurality of bilayer pairs comprises Si and Mo. In one or more embodiments, the absorber layer is deposited on the reflective multilayer stack. In one or more embodiments, the backside layer comprises CrN. The method in some embodiments includes supporting the EUV mask blank substrate on a moveable support fixture prior to rotating the substrate 180 degrees. In some embodiments of the method, the substrate flipping fixture further comprises a pair of gripping elements, and the method includes using the pair of gripping elements to grip edges of the substrate between the front side and the back side, moving the moveable support fixture away from the gripping elements, and rotating the gripping elements 180 degrees to cause the front side of the substrate to rotate from a first position in which the front side faces upward to a second position in which the front side faces downward. Embodiments of the method may include supporting the EUV mask blank substrate on a build module fixture mounted adjacent to the substrate flipping fixture, the build module fixture configured to support an EUV mask blank carrier assembly including a carrier base and a top shield having an opening therein sized and shaped to receive an EUV mask blank. In one or more embodiments, the build module fixture further comprises lift pins configured to separate the top shield from the carrier base, and the method involves using the lift pins to separate the top shield from the carrier base to expose the EUV mask blank substrate. Reference throughout this specification to “one embodiment,” “certain embodiments,” “one or more embodiments” or “an embodiment” means that a particular feature, structure, material, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure.
Thus, the appearances of the phrases such as “in one or more embodiments,” “in certain embodiments,” “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily referring to the same embodiment of the disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments. Although the disclosure herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present disclosure. It will be apparent to those skilled in the art that various modifications and variations can be made to the method and apparatus of the present disclosure without departing from the spirit and scope of the disclosure. Thus, it is intended that the present disclosure include modifications and variations that are within the scope of the appended claims and their equivalents.
DESCRIPTION OF THE PREFERRED EMBODIMENT A substrate with a multilayer reflection film for an EUV mask blank of the invention includes a substrate, and a multilayer reflection film formed on the substrate (on one main surface, or the front surface, of the substrate) that reflects exposure light, in particular, a multilayer reflection film that reflects EUV light. The multilayer reflection film may be formed in contact with the one main surface of the substrate. Further, an undercoat film may be formed between the substrate and the multilayer reflection film. A wavelength of EUV light used for EUV lithography using EUV light as exposure light is 13 to 14 nm, and is usually light having a wavelength of about 13.5 nm (for example, 13.4 to 13.6 nm). FIG. 1 is an intermediate omitted partial cross-sectional view of an example of a substrate with a multilayer reflection film for an EUV mask blank of the invention. The substrate with a multilayer reflection film for an EUV mask blank 10 includes a multilayer reflection film 2 formed on one main surface, and in contact with the one main surface, of the substrate 1. The substrate preferably has a low thermal expansion property; for example, the substrate is preferably composed of a material having a coefficient of thermal expansion within ±2×10−8/°C, preferably within ±5×10−9/°C. Further, a substrate which is sufficiently flattened is preferably used; a surface roughness RMS of the main surface of the substrate is preferably not more than 0.5 nm, more preferably not more than 0.2 nm. Such a surface roughness can be obtained by polishing the substrate. The multilayer reflection film in an EUV mask is a film that reflects EUV light as exposure light. In the invention, the multilayer reflection film includes a Si/Mo laminated portion consisting of multiple layers in which Si (silicon) layers and Mo (molybdenum) layers are alternately laminated.
In the Si/Mo laminated portion, a Si layer composed of a material having a relatively high refractive index with respect to EUV light, and a Mo layer composed of a material having a relatively low refractive index with respect to EUV light are periodically laminated. The Si layer and the Mo layer are layers formed of a simple substance of silicon and a simple substance of molybdenum, respectively. The number of laminated layers of the Si layer and the Mo layer is preferably, for example, not less than 40 cycles (not less than 40 layers, respectively), and preferably not more than 60 cycles (not more than 60 layers, respectively). A thickness of the Si layer and the Mo layer of the Si/Mo laminated portion is appropriately set according to the exposure wavelength. The thickness of the Si layer is preferably not more than 5 nm, and the thickness of the Mo layer is preferably not more than 4 nm. A lower limit of the thickness of the Si layer is normally not less than 1 nm, however, not limited thereto. A lower limit of the thickness of the Mo layer is normally not less than 1 nm, however, not limited thereto. The thickness of the Si layer and the Mo layer may be set so as to obtain a high reflectance for EUV light. Further, the thickness of each of the Si layers and each of the Mo layers may be constant or may differ from layer to layer. A total thickness of the Si/Mo laminated portion is normally about 250 to 450 nm. In the invention, a layer containing Si and N preferably intervenes at one or more portions between the Si layer and the Mo layer of the Si/Mo laminated portion, and is preferably in contact with both the Si layer and the Mo layer. The layer containing Si and N is preferably free of oxygen. As the layer containing Si and N, in particular, a SiN layer is preferable. Here, “SiN” means that the constituent elements are only Si and N.
A nitrogen content of the layer containing Si and N is preferably not less than 1 at %, more preferably not less than 5 at %, and not more than 60 at %, more preferably not more than 57 at %. The layer containing Si and N has a thickness of preferably not more than 2 nm, more preferably not more than 1 nm. A lower limit of the thickness of the layer containing Si and N is preferably not less than 0.1 nm, however, not limited thereto. The layer containing Si and N is preferably formed at one or more portions between the Si layer and the Mo layer constituting the Si/Mo laminated portion. The layer containing Si and N may be formed at a part or all of the substrate side (lower side) of the Mo layer, and at a part or all of the side remote from the substrate (upper side) of the Mo layer. The layer containing Si and N is more preferably formed at all portions between the Si layer and the Mo layer. Particularly, from the viewpoint of obtaining a high reflectance, the Si/Mo laminated portion preferably includes not less than 30, more preferably not less than 40, three-layer laminated structure units. The three-layer laminated structure unit consists of, from the substrate side: i) the Si layer, the layer containing Si and N formed in contact with the Si layer, and the Mo layer formed in contact with the layer containing Si and N, or ii) the Mo layer, the layer containing Si and N formed in contact with the Mo layer, and the Si layer formed in contact with the layer containing Si and N. Further, the Si/Mo laminated portion preferably includes not less than 30, more preferably not less than 40, four-layer laminated structure units. The four-layer laminated structure unit consists of, from the substrate side, the Si layer, the layer containing Si and N formed in contact with the Si layer, the Mo layer formed in contact with the layer containing Si and N, and a layer containing Si and N formed in contact with the Mo layer.
Upper limits of the three-layer laminated structure units and the four-layer laminated structure units are not more than 60, respectively. In the invention, the Si/Mo laminated portion shown in FIG. 1 is exemplified as a concrete example. In the multilayer reflection film 2 of the substrate with a multilayer reflection film for an EUV mask blank 10 shown in FIG. 1, the Si/Mo laminated portion 21 is formed in contact with the substrate 1. In the Si/Mo laminated portion 21, the Si layers 211 and the Mo layers 212 are alternately laminated. In this case, the Si layer 211 is disposed at the side closest to the substrate 1, and the Mo layer 212 is disposed at the side remotest from the substrate 1. Further, the layer containing Si and N 213 is formed at each of the portions between the Si layer 211 and the Mo layer 212, and the layer containing Si and N 213 is in contact with the Si layer 211 and the Mo layer 212. Thus, in this case, the Si/Mo laminated portion 21 includes the three-layer laminated structure unit consisting of, from the substrate side, the Si layer 211, the layer containing Si and N 213, and the Mo layer 212, and the three-layer laminated structure unit consisting of, from the substrate side, the Mo layer 212, the layer containing Si and N 213, and the Si layer 211. Further, the Si/Mo laminated portion 21 includes the four-layer laminated structure unit consisting of, from the substrate side, the Si layer 211, the layer containing Si and N 213, the Mo layer 212, and the layer containing Si and N 213.
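The layered structure of FIG. 1 can be sketched programmatically. The cycle count and thicknesses below are example values chosen from the preferred ranges given above (Si not more than 5 nm, Mo not more than 4 nm, layer containing Si and N not more than 2 nm, 40 to 60 cycles); they are illustrative assumptions, not values fixed by the invention.

```python
# Illustrative build of the Si/Mo laminated portion of FIG. 1:
# repeating four-layer units (Si, SiN, Mo, SiN) counted from the
# substrate side, with the final SiN spacer dropped so that a Mo layer
# sits remotest from the substrate, as in FIG. 1. Thicknesses in nm.
def build_stack(cycles=40, t_si=4.0, t_sin=0.5, t_mo=3.0):
    unit = [("Si", t_si), ("SiN", t_sin), ("Mo", t_mo), ("SiN", t_sin)]
    layers = unit * cycles
    layers.pop()  # topmost layer is Mo, not a SiN spacer
    return layers

stack = build_stack()
total_nm = sum(thickness for _, thickness in stack)
# 40 cycles of 8.0 nm minus one 0.5 nm spacer = 319.5 nm, inside the
# typical 250-450 nm total thickness mentioned above.
print(len(stack), stack[0][0], stack[-1][0], total_nm)
```

With these example values the stack starts with Si at the substrate side, ends with Mo, and its total thickness falls inside the 250 to 450 nm range stated in the text.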
The Si/Mo laminated portion is formed by alternately laminating the Si layers and the Mo layers. FIG. 2 is an intermediate omitted partial cross-sectional view for explaining a substrate with a multilayer reflection film for an EUV mask blank including an ideal multilayer reflection film consisting of a Si/Mo laminated portion and a protection layer. FIG. 3 is an intermediate omitted partial cross-sectional view for explaining a reflective mask blank including a conventional multilayer reflection film consisting of a Si/Mo laminated portion and a protection layer. In the case where the Si/Mo laminated portion 21 of the multilayer reflection film 2 of the substrate with a multilayer reflection film for an EUV mask blank 10 is formed by laminating the Si layers and the Mo layers directly on each other, the state of the Si/Mo laminated portion 21 consisting of only the Si layers 211 and the Mo layers 212, in which the Si layers and the Mo layers are in contact with each other, is ideal, as shown in FIG. 2. In such a Si/Mo laminated portion 21, the theoretical reflectance of a Si/Mo laminated portion consisting of only the Si layers and the Mo layers can be obtained. It is not impossible in principle to form the Si/Mo laminated portion having this structure. However, when the Si/Mo laminated portion is formed by a realistic method, in reality, as shown in FIG. 3, Si and Mo are mixed at the portion where the Si layer 211 and the Mo layer 212 are in contact with each other, and as a result, an interdiffusion layer 21a consisting of Si and Mo is unintentionally formed at this portion. When such an interdiffusion layer consisting of Si and Mo is formed, the reflectance of the Si/Mo laminated portion is decreased from the theoretical reflectance obtained by the Si/Mo laminated portion consisting of only the Si layers and the Mo layers.
Further, when the multilayer reflection film is heated in mask processing or in exposure by EUV light using a mask, the interdiffusion layer consisting of Si and Mo becomes thicker, or the interdiffusion layer consisting of Si and Mo changes in its nature, resulting in a further decrease of reflectance. On the other hand, when the layer containing Si and N is formed between the Si layer and the Mo layer and in contact with both the Si layer and the Mo layer, formation of the interdiffusion layer consisting of Si and Mo that causes reduction of reflectance is suppressed. Therefore, reduction of reflectance from the theoretical reflectance obtained by the Si/Mo laminated portion consisting of only the Si layers and the Mo layers is suppressed, and a high reflectance is accomplished compared with a conventional reflectance. In addition, at the portion between the Si layer and the Mo layer where the layer containing Si and N is not formed, the above-mentioned interdiffusion layer consisting of Si and Mo is normally formed in contact with both the Si layer and the Mo layer. However, in the Si/Mo laminated portion of the multilayer reflection film of the invention, formation of the interdiffusion layer consisting of Si and Mo is suppressed at the portion between the Si layer and the Mo layer where the layer containing Si and N is formed. Therefore, reduction of reflectance caused by heat is suppressed compared with the conventional multilayer reflection film in which the interdiffusion layer consisting of Si and Mo is formed at all of the portions between the Si layer and the Mo layer. From this point of view, the layer containing Si and N may be formed at only a part of the portions between the Si layer and the Mo layer.
However, it is advantageous that the layers containing Si and N are formed at many of the portions between the Si layer and the Mo layer, and it is particularly advantageous that the layers containing Si and N are formed at all of the portions between the Si layer and the Mo layer. In the invention, the multilayer reflection film includes a protection layer formed in contact with the Si/Mo laminated portion, as the uppermost layer. When the uppermost layer is the Si layer or the Mo layer, the layer is etched by dry etching using a fluorine-containing gas. Therefore, it is effective that the protection layer is formed on the Si/Mo laminated portion. The protection layer is also called a capping layer, and acts as an etching stopper when forming an absorber pattern from an absorber film formed on the protection layer. Therefore, a material having different etching properties from those of the absorber film is used for the protection layer. The protection layer is preferably effective also for protecting the multilayer reflection film when correcting the absorber pattern. In the invention, the protection layer includes a lower layer formed in contact with the Si/Mo laminated portion, and an upper layer formed at the side remotest from the substrate. The protection layer shown in FIG. 1 is exemplified as a concrete example. In the multilayer reflection film 2 of the substrate with a multilayer reflection film for an EUV mask blank 10, the protection layer 22 is formed in contact with the Si/Mo laminated portion 21, and the protection layer 22 consists of, from the side of the substrate 1, the lower layer 221 and the upper layer 222. As a material for the protection layer, a material containing ruthenium (Ru) is used. The lower layer is a layer composed of Ru (a Ru layer), and the upper layer is composed of a material containing Ru and at least one selected from the group consisting of metals other than Ru, and metalloids.
According to this feature, it is possible to suppress diffusion of oxygen from the protection layer to the Si/Mo laminated portion, to improve etching properties, and to impart cleaning resistance. The metals other than Ru are preferably metals having a low standard oxidation-reduction potential (expressed by the formula Ru2+ + 2e− ↔ Ru) compared with that of Ru, particularly, metals that form oxides more easily than Ru when the metal is mixed with Ru. The metals other than Ru are more preferably transition metals. As the metals other than Ru, Nb, Zr, Ti, Cr, and other metals are preferable, and as the metalloids, Si and other metalloids are preferable. A layer containing at least one selected from the group consisting of metals other than Ru, and metalloids preferably contains at least one of these together with Ru. According to this feature, when the protection layer is oxidized from the side remote from the Si/Mo laminated portion, the metalloid or the metal having a standard oxidation-reduction potential lower than that of Ru in the upper layer is oxidized first. As a result, the lower layer composed of Ru can be protected by the upper layer, oxygen is prevented from reaching the lower layer, and oxidation of the lower layer can be suppressed. Further, it is possible to suppress diffusion of oxygen into the Si/Mo laminated portion formed in contact with the protection layer (lower layer). In the upper layer, a content (at %) of the at least one selected from the group consisting of metals other than Ru, and metalloids is preferably the same as or less than a content of Ru. A lower limit of the content (at %) of the at least one selected from the group consisting of metals other than Ru, and metalloids is preferably not less than 0.1 at %, more preferably not less than 1 at %, however, not limited thereto. The upper layer may further contain an element other than the metals and metalloids.
Particularly, the upper layer may contain oxygen, since the metalloids and metals having a low standard oxidation-reduction potential compared with that of Ru are stabilized by oxidation. The upper layer is preferably composed of a material containing Ru and at least one selected from the group consisting of metals and metalloids that have a low etching rate compared with that of Ru in dry etching using an etching gas containing O2 gas, particularly, an etching gas containing O2 gas and Cl2 gas. When the protection layer is a Ru layer, the Ru layer is etched by dry etching using an etching gas containing O2 gas. On the other hand, when the protection layer is a layer composed of a mixture of Ru, and Nb and/or Zr, resistance to dry etching using an etching gas containing O2 gas, particularly, dry etching using an etching gas containing O2 gas and Cl2 gas, which is regularly applied as an etching for a chromium-containing material, and resistance to O3 gas, which is used in a mask manufacturing process, are ensured. However, in a Ru material to which Nb and/or Zr are added, Nb and Zr are easily oxidized, and thus the surface of the protection layer is easily roughened. Further, oxygen in the air easily diffuses into the protection layer, and thus oxygen reaches the Si/Mo laminated portion. In particular, when the uppermost layer of the Si/Mo laminated portion is the Si layer, silicon oxide is formed and the reflectance is decreased. In such a case, the protection layer may expand and be peeled off due to oxidation. Moreover, since the multilayer reflection film is required to have a high reflectance, it is advantageous that the protection layer composed of a material containing Ru, which has a large extinction coefficient, is thin. Meanwhile, it is preferable that the thickness of the protection layer is thick in view of the above-mentioned oxygen diffusion.
Therefore, there is a limit to reducing the thickness of a protection layer in which the whole of the layer is formed by only the Ru material to which Nb and/or Zr are added. In the invention, the upper layer, composed of a material containing Ru and at least one selected from the group consisting of metals and metalloids that have a low etching rate compared with that of Ru in dry etching using an etching gas containing O2 gas, particularly, dry etching using an etching gas containing O2 gas and Cl2 gas, which is regularly applied as an etching for a chromium-containing material, is disposed at the side remotest from the substrate in the protection layer. Further, the lower layer composed of Ru is disposed at the side contacting the Si/Mo laminated portion in the protection layer. According to these features, resistance to dry etching using an etching gas containing O2 gas, particularly, dry etching using an etching gas containing O2 gas and Cl2 gas, and resistance to O3 gas, which is used in a mask manufacturing process, are ensured. Further, oxygen is prevented from reaching the lower layer by the upper layer containing a metal that forms oxides more easily than Ru. Moreover, it is possible to suppress the diffusion of oxygen from the protection layer to the Si/Mo laminated portion by contacting the lower layer composed of Ru with the Si/Mo laminated portion. Accordingly, the multilayer reflection film has the necessary etching resistance and a high reflectance. A thickness of the protection layer is normally not more than 5 nm, in particular, preferably not more than 4 nm. A lower limit of the thickness of the protection layer is normally not less than 2 nm.
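The total-thickness window for the protection layer stated above (normally not less than 2 nm and not more than 5 nm, preferably not more than 4 nm) can be captured in a small helper. This is an illustrative sketch of the stated ranges, not part of the invention.

```python
# Hedged sketch of the protection-layer total-thickness window stated
# above: normally 2-5 nm, with not more than 4 nm preferred.
def protection_thickness_ok(total_nm, preferred=False):
    upper = 4.0 if preferred else 5.0
    return 2.0 <= total_nm <= upper

print(protection_thickness_ok(3.5))                  # inside the normal window
print(protection_thickness_ok(4.5, preferred=True))  # exceeds the preferred limit
```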
In the thickness of the protection layer, a thickness of the lower layer is preferably not less than 0.5 nm, more preferably not less than 1 nm, and preferably not more than 2 nm, more preferably not more than 1.5 nm, and a thickness of the upper layer is preferably not less than 0.5 nm, more preferably not less than 1 nm, and preferably not more than 3 nm, more preferably not more than 2 nm. The upper layer of the protection layer of the invention may be a layer having a single composition (a composition not varying in the thickness direction), a layer having a graded composition in which a content of the metal or metalloid continuously increases toward the side remote from the substrate in the thickness direction, or a layer consisting of two or more sublayers and having a graded composition in which the contents of the metal or metalloid of the sublayers increase stepwise toward the side remote from the substrate in the thickness direction. Among them, the layers having a graded composition are preferable. When the upper layer consists of two or more sublayers, each of the sublayers may contain a different kind of metal or metalloid from the others. In the Si/Mo laminated portion, the layer disposed at the side closest to the substrate may be the Si layer or the Mo layer. On the other hand, the layer disposed at the side remotest from the substrate may be the Si layer or the Mo layer; however, the Mo layer is preferable. The layer of the Si/Mo laminated portion in contact with the protection layer is preferably the Mo layer. In the case where the layer of the Si/Mo laminated portion in contact with the protection layer is the Si layer, when the protection layer composed of a material containing Ru is directly contacted to the Si/Mo laminated portion, a state in which the Si layer 211 of the Si/Mo laminated portion 21 and the protection layer 22 are in contact with each other, as shown in FIG. 2, is ideal.
In such a state, the decrease of reflectance in the multilayer reflection film 2 due to the protection layer 22 is limited, and a high reflectance can be obtained. It is not impossible in principle to form this state. However, when the protection layer is formed by a realistic method, in reality, as shown in FIG. 3, Si and Ru are mixed at the portion where the Si layer 211 and the protection layer 22 are in contact with each other, and as a result, an interdiffusion layer 21b consisting of Si and Ru is unintentionally formed at this portion. When such an interdiffusion layer consisting of Si and Ru is formed, the reflectance is decreased due to the interdiffusion layer 21b. Further, when the multilayer reflection film is heated in mask processing or in exposure by EUV light using a mask, the interdiffusion layer consisting of Si and Ru becomes thicker, or the interdiffusion layer consisting of Si and Ru changes in its nature. Further, when the protection layer composed of a material containing Ru is exposed to the air, not only the protection layer but also the Si layer is oxidized, resulting in a further decrease of reflectance. On the other hand, as shown in FIG. 1, when the Mo layer 212 in the Si/Mo laminated portion 21 is in contact with the protection layer 22, the interdiffusion layer consisting of Si and Ru that causes a decrease of reflectance is not formed between the Si/Mo laminated portion and the protection layer composed of a material containing Ru. Thus, a high reflectance is achieved compared with the case in which the Si layer in the Si/Mo laminated portion is in contact with the protection layer. At the same time, the crystallinity of the Ru layer in contact with Mo is improved, and a dense Ru layer can be obtained. When the Mo layer in the Si/Mo laminated portion is in contact with the protection layer, the layer containing Si and N is preferably formed between the closest Si layer and the Mo layer in contact with the protection layer, and in contact with both the Si layer and the Mo layer.
In particular, the uppermost part of the multilayer reflection film preferably consists of, from the side remote from the substrate, the protection layer, the Mo layer, the layer containing Si and N, and the Si layer. The portion between the Mo layer in contact with the protection layer and the Si layer closest to that Mo layer in the Si/Mo laminated portion is easily affected by the protection layer. Further, when the multilayer reflection film is heated, this portion is most susceptible to the influence of heat, so this portion has the highest potential for generating the interdiffusion layer composed of Si and Mo. Therefore, forming the layer containing Si and N between the Si layer and the Mo layer in the uppermost part of the multilayer reflection film is particularly effective for obtaining a high reflectance. In this case, owing to the layer containing Si and N, even when the Mo layer in contact with the protection layer composed of a material containing Ru is formed thin, forming the protection layer composed of a material containing Ru on the Mo layer brings the protection layer, particularly the lower layer, into a stable state having a crystalline, dense structure even when the protection layer is comparatively thin. Therefore, a thickness of the Mo layer in contact with the protection layer composed of a material containing Ru is preferably not more than 2 nm, more preferably not more than 1 nm.
Particularly, when the uppermost part of the multilayer reflection film consists of, from the side remote from the substrate, the protection layer, the Mo layer, the layer containing Si and N, and the Si layer, the protection layer preferably has a thickness of not more than 4 nm, the Mo layer preferably has a thickness of not more than 1 nm, the layer containing Si and N preferably has a thickness of not more than 2 nm, and the Si layer preferably has a thickness of not more than 4 nm. In the invention, with respect to a reflectance of the multilayer reflection film, the multilayer reflection film may have a peak reflectance of not less than 65% with respect to EUV light in the wavelength range of 13.4 to 13.6 nm at an incident angle of 6°. Even if the multilayer reflection film is heat-treated, for example, in the air at 200° C. for 10 minutes, the change (decrease) of the reflectance is small, and even after the heat treatment, the peak reflectance can be maintained at not less than 65%. Examples of methods for forming the Si/Mo laminated portion include a sputtering method in which, to perform sputtering, power is supplied to a target and plasma of an atmospheric gas is formed (the atmospheric gas is ionized) by the supplied power, and an ion beam sputtering method in which a target is irradiated with an ion beam. The sputtering methods include a DC sputtering method in which a DC voltage is applied to a target, and an RF sputtering method in which a high frequency voltage is applied to a target. The sputtering method is a film forming method that utilizes the sputtering phenomenon of gas ions by applying a voltage to a target while feeding a sputter gas into a chamber to ionize the gas. Particularly, a magnetron sputtering method has an advantage in productivity. The power may be applied to the target by a DC system or an RF system.
The DC system also includes pulse sputtering, in which a negative bias applied to the target is inverted for a short time in order to prevent charge-up of the target. The Si/Mo laminated portion can be formed by a sputtering method using a sputtering apparatus to which a plurality of targets can be attached. The Si layer, the layer containing Si and N, and the Mo layer can be formed by sequentially sputtering a silicon (Si) target and a molybdenum (Mo) target. In particular, these layers may be formed by using the silicon (Si) target for forming the Si layer and the layer containing Si and N, and the molybdenum (Mo) target for forming the Mo layer; using, as a sputter gas, a rare gas such as helium (He) gas, argon (Ar) gas, krypton (Kr) gas and xenon (Xe) gas in the case of forming the Si layer or the Mo layer, or the rare gas with a nitrogen-containing gas such as nitrogen (N2) gas in the case of forming the layer containing Si and N; and disposing the substrate and each of the targets in an offset arrangement in which the vertical line passing through the center of the sputtering surface of each target does not coincide with the vertical line passing through the center of the film forming surface of the substrate. The sputtering is preferably performed while rotating the substrate along its main surface. Further, in this case, it is preferable that no shield members such as shutters which shield between the substrate and the target are disposed. In addition, the layer containing Si and N may be formed by reactive sputtering using a nitrogen-containing gas, or by using a silicon compound target such as a silicon nitride target. The Si/Mo laminated portion can be formed by a method including step (A): forming the Si/Mo laminated portion by sputtering. In this case, the sputtering is preferably performed by a magnetron sputtering apparatus including a chamber. 
Preferably, in this chamber, one or more Mo targets and one or more Si targets are attachable; powers are individually applicable to the Mo target and the Si target; the substrate and each of the targets are disposed in an offset arrangement; no shield members are disposed between the substrate and each of the targets; the substrate is rotatable along its main surface; and a nitrogen-containing gas is introducible. The protection layer can be formed by, for example, a sputtering method such as ion beam sputtering or magnetron sputtering, as in the case of the Si/Mo laminated portion. In particular, a magnetron sputtering method has an advantage in productivity, as in the case of the Si/Mo laminated portion. The protection layer can be formed by a sputtering method using a sputtering apparatus to which a plurality of targets can be attached. In particular, the protection layer is formed by using a ruthenium (Ru) target, a target composed of a metal or metalloid other than Ru (for example, niobium (Nb), zirconium (Zr), titanium (Ti), chromium (Cr), silicon (Si), or the like), or a target composed of ruthenium (Ru) and such a metal or metalloid; using, as a sputter gas, a rare gas such as helium (He) gas, argon (Ar) gas, krypton (Kr) gas and xenon (Xe) gas and an optional reactive gas such as an oxygen-containing gas, a nitrogen-containing gas and a carbon-containing gas; disposing the substrate and each of the targets in an offset arrangement; and sputtering a single target, or multiple targets simultaneously. The sputtering is preferably performed while rotating the substrate along its main surface. For example, the lower layer can be formed by sputtering using a Ru target and a rare gas as a sputter gas. 
On the other hand, the upper layer can be formed by sputtering using a Ru target and a target containing one or more kinds of metals other than Ru and/or metalloids, with a rare gas as a sputter gas. Particularly, when a plurality of targets are simultaneously sputtered, a layer having a graded composition can be formed by continuously or stepwise varying the ratio of power applied to each of the targets. When the upper layer of the protection layer is formed of a compound further containing an element other than metals and metalloids, the upper layer can be formed by reactive sputtering using, as a sputter gas, a reactive gas such as an oxygen-containing gas, a nitrogen-containing gas and a carbon-containing gas together with a rare gas. Particularly, when a layer containing oxygen is formed, it is preferable to use oxygen gas (O2gas). Further, a target composed of a compound may also be used. The protection layer can be formed by a method including step (B): forming the protection layer by sputtering. In this case, the sputtering is preferably performed by a magnetron sputtering apparatus including a chamber. Preferably, in this chamber, one or more Ru targets and one or more targets containing at least one selected from the group consisting of metals other than Ru, and metalloids are attachable; powers are individually applicable to the Ru target and the target containing at least one selected from the group consisting of metals other than Ru, and metalloids; the substrate and each of the targets are disposed in an offset arrangement; no shield members are disposed between the substrate and each of the targets; the substrate is rotatable along its main surface; and an oxygen-containing gas is introducible. In this way, the multilayer reflection film can be suitably formed by the step (A): forming the Si/Mo laminated portion by sputtering, and the step (B): forming the protection layer by sputtering. 
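The graded-composition approach, in which the ratio of the powers applied to the simultaneously sputtered targets is varied continuously or stepwise, can be sketched as follows. The power values, step count, and the linear ramp are illustrative assumptions, not parameters from the specification:

```python
# Hypothetical sketch of the graded-composition approach: while two
# targets (e.g. Ru and Nb) are sputtered simultaneously, the power
# applied to the Nb target is increased stepwise so that the Nb content
# rises toward the side remote from the substrate. Power values and
# the number of steps are illustrative assumptions.

def power_schedule(ru_power_w, nb_start_w, nb_end_w, steps):
    """Return a list of (Ru power, Nb power) pairs, one per deposition
    step, with the Nb power ramped linearly from start to end."""
    if steps == 1:
        return [(ru_power_w, nb_end_w)]
    ramp = (nb_end_w - nb_start_w) / (steps - 1)
    return [(ru_power_w, nb_start_w + i * ramp) for i in range(steps)]

for ru_w, nb_w in power_schedule(500.0, 0.0, 200.0, 5):
    print(f"Ru: {ru_w:.0f} W, Nb: {nb_w:.0f} W")
```

A continuous grade would correspond to the limit of many small steps; a stepwise grade simply uses a coarser schedule.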
In this case, for example, the step (A) is performed in one sputter chamber, the substrate on which the Si/Mo laminated portion has been formed is transferred from the one sputter chamber to the other sputter chamber, and then the step (B) is performed in the other sputter chamber. However, if the Si/Mo laminated portion is exposed to, for example, an atmosphere containing oxygen such as the air, an unnecessary oxide layer is formed at the portion between the Si/Mo laminated portion and the protection layer, resulting in a decrease of reflectance and, in some cases, causing peeling between them. Therefore, when shifting to the step (B) after the step (A), it is preferable that no gases reactive with the Si/Mo laminated portion, particularly no oxygen-containing gases such as oxygen gas (O2gas), are contacted to the Si/Mo laminated portion that has been formed in the step (A), in particular, by not exposing the Si/Mo laminated portion to the air until the step (B) is performed. When the step (A) is performed in one sputter chamber, the substrate on which the Si/Mo laminated portion has been formed is transferred from the one sputter chamber to the other sputter chamber, and then the step (B) is performed in the other sputter chamber, an example of a method for performing the step (B) without contacting the Si/Mo laminated portion formed in the step (A) with gases which are reactive with it is a method in which a transfer chamber capable of communicating with each of the sputter chambers individually or with both of the sputter chambers at the same time is provided between the one and the other sputter chambers, and the substrate on which the Si/Mo laminated portion has been formed is transferred from the one sputter chamber via the transfer chamber to the other sputter chamber. 
At this time, both the transfer from the one sputter chamber to the transfer chamber and the transfer from the transfer chamber to the other sputter chamber are preferably performed under an inert gas atmosphere at a normal pressure (atmospheric pressure) or a reduced pressure (lower pressure than normal pressure), or under vacuum. FIG.4is a conceptual diagram showing an example of a sputtering apparatus suitable for forming a multilayer reflection film of the invention. The sputtering apparatus100is composed of a sputter chamber101for forming the Si/Mo laminated portion by sputtering, a sputter chamber102for forming the protection layer by sputtering, a transfer chamber103communicating with each of the sputter chambers101,102, and a load lock chamber104communicating with the transfer chamber103. An openable and closable bulkhead (not shown) is provided between the transfer chamber103and the load lock chamber104. Further, an openable and closable bulkhead is optionally provided between each of the sputter chambers101,102and the transfer chamber103. When the multilayer reflection film is formed by such a sputtering apparatus, for example, first, a substrate is introduced into the load lock chamber104, the pressure inside the load lock chamber104is reduced, the bulkhead is opened, and the substrate is transferred via the transfer chamber103to the sputter chamber101; then, the Si/Mo laminated portion is formed in the sputter chamber101. Next, the substrate on which the Si/Mo laminated portion has been formed is transferred from the sputter chamber101via the transfer chamber103to the sputter chamber102, and the protection layer is formed in the sputter chamber102. 
Next, the substrate on which the Si/Mo laminated portion and the protection layer (multilayer reflection film) have been formed is transferred from the sputter chamber102via the transfer chamber103to the load lock chamber104, the bulkhead is closed, the pressure inside the load lock chamber104is returned to normal pressure, and then the substrate on which the multilayer reflection film is formed is taken out. Thus, a substrate with a multilayer reflection film for an EUV mask blank can be obtained. When the multilayer reflection film is formed in this way, the step (B) can be performed without contacting the Si/Mo laminated portion formed in the step (A) with gases which are reactive with it. To manufacture an EUV mask blank, an absorber film is formed on the multilayer reflection film of the substrate with a multilayer reflection film for an EUV mask blank. The EUV mask blank of the invention includes an absorber film that is formed on the multilayer reflection film of the substrate with a multilayer reflection film for an EUV mask blank, and absorbs exposure light, specifically, an absorber film that absorbs EUV light and reduces a reflectance. The absorber film is preferably formed in contact with the protection layer. The EUV mask blank may further include, on the absorber film, a hard mask film that acts as an etching mask for dry etching the absorber film. On the other hand, a conductive film may be formed on the other main surface (back side surface) of the substrate which is the opposite side to the one main surface, preferably in contact with the substrate. The conductive film is used for holding an EUV mask on an exposure tool by an electrostatic chuck. In the invention, one main surface of the substrate is defined as the front side or the upper side, and the other main surface is defined as the back side or the lower side. 
However, the front and back sides or the upper and lower sides of the two surfaces are defined for the sake of convenience. The two main surfaces (film forming surfaces) are the one and the other main surfaces, respectively. The front and back sides or the upper and lower sides may be interchanged. From the EUV mask blank (a mask blank for EUV exposure), an EUV mask (a mask for EUV exposure) including an absorber pattern (a pattern of the absorber film) formed by patterning the absorber film is manufactured. The EUV mask blank and the EUV mask are a reflective mask blank and a reflective mask, respectively. The absorber film is formed on the multilayer reflection film, and is a film that absorbs EUV light as the exposure light and reduces the reflectance of the exposure light. A transfer pattern in an EUV mask is formed by a difference in reflectance between a portion on which the absorber film is formed and a portion on which the absorber film is not formed. A material of the absorber film is not limited as long as the material can absorb EUV light and is processable into a pattern. Examples of the material of the absorber film include a material containing tantalum (Ta) or chromium (Cr). The material containing Ta or Cr may contain oxygen (O), nitrogen (N), carbon (C), boron (B), or other elements. Examples of the material containing Ta include Ta simple substance, and tantalum compounds such as TaO, TaN, TaON, TaC, TaCN, TaCO, TaCON, TaB, TaOB, TaNB, TaONB, TaCB, TaCNB, TaCOB and TaCONB. Examples of the material containing Cr include Cr simple substance, and chromium compounds such as CrO, CrN, CrON, CrC, CrCN, CrCO, CrCON, CrB, CrOB, CrNB, CrONB, CrCB, CrCNB, CrCOB and CrCONB. The absorber film can be formed by a sputtering method, and the sputtering is preferably magnetron sputtering. 
In particular, the absorber film is formed by using a metal target such as a chromium (Cr) target or a tantalum (Ta) target, or a metal compound target such as a chromium compound target or a tantalum compound target (a target containing a metal such as Cr or Ta, and at least one selected from the group consisting of oxygen (O), nitrogen (N), carbon (C), boron (B) and the like); and using, as a sputter gas, a rare gas such as helium (He) gas, argon (Ar) gas, krypton (Kr) gas and xenon (Xe) gas in sputtering, or using the rare gas with a reactive gas such as an oxygen-containing gas, a nitrogen-containing gas and a carbon-containing gas in reactive sputtering. On the side of the absorber film that is remote from the substrate, a hard mask film (an etching mask film for the absorber film) having different etching properties from the absorber film may be provided, preferably in contact with the absorber film. The hard mask film is a film that acts as an etching mask when the absorber film is etched by dry etching. After the absorber pattern is formed, the hard mask film may be left as a part of the absorber film serving as a reflectance reducing layer for reducing the reflectance at the wavelength of light used in inspections such as pattern inspection, or removed to be absent on the reflective mask. Examples of the material of the hard mask film include a material containing chromium (Cr). A hard mask film composed of a material containing Cr is more preferable when the absorber film is composed of a material containing Ta and free of Cr. When a layer that mainly has a function of reducing the reflectance at the wavelength of light used in inspections such as pattern inspection (the reflectance reducing layer) is formed on the absorber film, the hard mask film may be formed on the reflectance reducing layer of the absorber film. The hard mask film may be formed by, for example, a magnetron sputtering method. 
A thickness of the hard mask film is normally about 5 to 20 nm, but is not limited thereto. A sheet resistance of the conductive film is preferably not more than 100Ω/□, and a material of the conductive film is not particularly limited. Examples of the material of the conductive film include a material containing tantalum (Ta) or chromium (Cr). The material containing Ta or Cr may contain oxygen (O), nitrogen (N), carbon (C), boron (B), or other elements. Examples of the material containing Ta include Ta simple substance, and tantalum compounds such as TaO, TaN, TaON, TaC, TaCN, TaCO, TaCON, TaB, TaOB, TaNB, TaONB, TaCB, TaCNB, TaCOB and TaCONB. Examples of the material containing Cr include Cr simple substance, and chromium compounds such as CrO, CrN, CrON, CrC, CrCN, CrCO, CrCON, CrB, CrOB, CrNB, CrONB, CrCB, CrCNB, CrCOB and CrCONB. A thickness of the conductive film is not particularly limited as long as the conductive film has a function for electrostatic chucking. The thickness is normally about 5 to 100 nm. The conductive film is preferably formed to a thickness such that the film stress is balanced with those of the multilayer reflection film and the absorber pattern after an EUV mask is obtained, that is, after the absorber pattern is formed. The conductive film may be formed before forming the multilayer reflection film, or after forming all of the films disposed on the multilayer reflection film side of the substrate. Further, the conductive film may be formed after forming a part of the films disposed on the multilayer reflection film side of the substrate, and then the remainder of the films may be formed. The conductive film may be formed by, for example, a magnetron sputtering method. Further, the EUV mask blank may include a resist film formed on the side remotest from the substrate. 
The resist film is preferably an electron beam (EB) resist. EXAMPLES Examples of the invention are given below by way of illustration and not by way of limitation. Example 1 A multilayer reflection film consisting of a Si/Mo laminated portion and a protection layer containing Ru was formed on a substrate. The substrate was composed of a low thermal expansion material having a coefficient of thermal expansion within the range of ±5.0×10−9/° C., a surface roughness of the main surface of not more than 0.1 nm in RMS value, and a flatness of the main surface of 100 nm in TIR value. The Si/Mo laminated portion was formed by magnetron sputtering using a Si target and a Mo target as targets, and Ar gas and N2gas as sputter gases. One Si target and one Mo target were attached to a magnetron sputtering apparatus, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. First, a Si layer was formed at a setting thickness of 3.5 nm by introducing only Ar gas into a sputter chamber, and discharging only the Si target. Next, a SiN layer was formed at a setting thickness of 0.5 nm by introducing Ar gas and N2gas into the sputter chamber, and discharging only the Si target. Next, a Mo layer was formed at a setting thickness of 3.0 nm by introducing only Ar gas into the sputter chamber, and discharging only the Mo target. A cycle of the formations of these three layers, the Si layer, the SiN layer and the Mo layer, as one cycle, was repeated for a total of 40 cycles. Then, a SiN layer was formed at a setting thickness of 0.5 nm by introducing Ar gas and N2gas into the sputter chamber, and discharging only the Si target. Next, a Si layer was formed at a setting thickness of 2.3 nm by introducing only Ar gas into the sputter chamber, and discharging only the Si target. Next, a SiN layer was formed at a setting thickness of 0.5 nm by introducing Ar gas and N2gas into the sputter chamber, and discharging only the Si target. 
Next, a Mo layer was formed at a setting thickness of 0.5 nm by introducing only Ar gas into the sputter chamber, and discharging only the Mo target. The protection layer was formed by magnetron sputtering using a Ru target and a Nb target as targets, and Ar gas as a sputter gas. One Ru target and one Nb target were attached to another magnetron sputtering apparatus which differs from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. The substrate on which the Si/Mo laminated portion had been formed was transferred under vacuum from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion to the other magnetron sputtering apparatus used for forming the protection layer via a transfer chamber that communicates with both sputter chambers. First, a Ru layer as a lower layer was formed at a setting thickness of 2.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Ru target. Next, a RuNb mixed layer as an upper layer was formed at a setting thickness of 0.5 nm by introducing Ar gas into the sputter chamber, and discharging simultaneously the Ru target and the Nb target, obtaining the protection layer consisting of two layers. When the cross section of the obtained multilayer reflection film was observed by a transmission electron microscope (TEM), from the surface side (the side remote from the substrate), a RuNb mixed layer having a thickness of 0.5 nm, a Ru layer having a thickness of 2.0 nm, a Mo layer having a thickness of 0.5 nm, a SiN layer having a thickness of 0.5 nm, a Si layer having a thickness of 2.3 nm, a SiN layer having a thickness of 0.5 nm, a Mo layer having a thickness of 3.0 nm, and a SiN layer having a thickness of 0.5 nm were observed in this order at the upper portion of the multilayer reflection film. 
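As a bookkeeping aid, the setting thicknesses of the Example 1 recipe can be tallied. This sketch only sums the values stated above and is not part of the described process:

```python
# Bookkeeping sketch for the Example 1 recipe: 40 cycles of
# Si (3.5 nm) / SiN (0.5 nm) / Mo (3.0 nm), followed by the capping
# layers SiN (0.5) / Si (2.3) / SiN (0.5) / Mo (0.5), and the
# two-layer protection film Ru (2.0) / RuNb (0.5). Setting
# thicknesses are those stated in the text.

CYCLE = [("Si", 3.5), ("SiN", 0.5), ("Mo", 3.0)]
CAP = [("SiN", 0.5), ("Si", 2.3), ("SiN", 0.5), ("Mo", 0.5)]
PROTECTION = [("Ru", 2.0), ("RuNb", 0.5)]

laminated = 40 * sum(t for _, t in CYCLE) + sum(t for _, t in CAP)
total = laminated + sum(t for _, t in PROTECTION)

print(f"Si/Mo laminated portion: {laminated:.1f} nm")        # 283.8 nm
print(f"multilayer reflection film total: {total:.1f} nm")   # 286.3 nm
```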
Further, the multilayer reflection film obtained by the same method was heat-treated at 200° C. for 10 minutes in an air atmosphere. When the cross section of the multilayer reflection film was observed in the same manner, the observed multilayer reflection film was similar to that not heat-treated. Further, a peak reflectance of EUV light in the wavelength range of 13.4 to 13.6 nm at an incident angle of 6° was measured before and after the heat treatment. The results were 67% before the heat treatment and 65% after the heat treatment, satisfying a high reflectance of not less than 65%. Example 2 A multilayer reflection film consisting of a Si/Mo laminated portion and a protection layer containing Ru was formed on a substrate. The substrate was composed of a low thermal expansion material having a coefficient of thermal expansion within the range of ±5.0×10−9/° C., a surface roughness of the main surface of not more than 0.1 nm in RMS value, and a flatness of the main surface of 100 nm in TIR value. The Si/Mo laminated portion was formed by magnetron sputtering using a Si target and a Mo target as targets, and Ar gas and N2gas as sputter gases. One Si target and one Mo target were attached to a magnetron sputtering apparatus, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. First, a Si layer was formed at a setting thickness of 3.5 nm by introducing only Ar gas into a sputter chamber, and discharging only the Si target. Next, a SiN layer was formed at a setting thickness of 0.5 nm by introducing Ar gas and N2gas into the sputter chamber, and discharging only the Si target. Next, a Mo layer was formed at a setting thickness of 3.0 nm by introducing only Ar gas into the sputter chamber, and discharging only the Mo target. A cycle of the formations of these three layers, the Si layer, the SiN layer and the Mo layer, as one cycle, was repeated for a total of 40 cycles. 
Then, a SiN layer was formed at a setting thickness of 0.5 nm by introducing Ar gas and N2gas into the sputter chamber, and discharging only the Si target. Next, a Si layer was formed at a setting thickness of 2.3 nm by introducing only Ar gas into the sputter chamber, and discharging only the Si target. Next, a SiN layer was formed at a setting thickness of 0.5 nm by introducing Ar gas and N2gas into the sputter chamber, and discharging only the Si target. Next, a Mo layer was formed at a setting thickness of 0.5 nm by introducing only Ar gas into the sputter chamber, and discharging only the Mo target. The protection layer was formed by magnetron sputtering using a Ru target and a Nb target as targets, and Ar gas as a sputter gas. One Ru target and one Nb target were attached to another magnetron sputtering apparatus which differs from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. The substrate on which the Si/Mo laminated portion had been formed was transferred under vacuum from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion to the other magnetron sputtering apparatus used for forming the protection layer via a transfer chamber that communicates with both sputter chambers. First, a Ru layer as a lower layer was formed at a setting thickness of 1.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Ru target. Next, a RuNb mixed layer as an upper layer was formed at a setting thickness of 1.5 nm by introducing Ar gas into the sputter chamber, and discharging simultaneously the Ru target and the Nb target, obtaining the protection layer consisting of two layers. At this time, the power applied to the Nb target was gradually increased to form a graded composition in which the Nb content continuously increases toward the side remote from the substrate in the thickness direction. 
When the cross section of the obtained multilayer reflection film was observed by a transmission electron microscope (TEM), from the surface side (the side remote from the substrate), a RuNb mixed layer having a thickness of 1.5 nm, a Ru layer having a thickness of 1.0 nm, a Mo layer having a thickness of 0.5 nm, a SiN layer having a thickness of 0.5 nm, a Si layer having a thickness of 2.3 nm, a SiN layer having a thickness of 0.5 nm, a Mo layer having a thickness of 3.0 nm, and a SiN layer having a thickness of 0.5 nm were observed in this order at the upper portion of the multilayer reflection film. Further, the multilayer reflection film obtained by the same method was heat-treated at 200° C. for 10 minutes in an air atmosphere. When the cross section of the multilayer reflection film was observed in the same manner, the observed multilayer reflection film was similar to that not heat-treated. Further, a peak reflectance of EUV light in the wavelength range of 13.4 to 13.6 nm at an incident angle of 6° was measured before and after the heat treatment. The results were 67% before the heat treatment and 65% after the heat treatment, satisfying a high reflectance of not less than 65%. Example 3 A multilayer reflection film consisting of a Si/Mo laminated portion and a protection layer containing Ru was formed on a substrate. The substrate was composed of a low thermal expansion material having a coefficient of thermal expansion within the range of ±5.0×10−9/° C., a surface roughness of the main surface of not more than 0.1 nm in RMS value, and a flatness of the main surface of 100 nm in TIR value. The Si/Mo laminated portion was formed by magnetron sputtering using a Si target and a Mo target as targets, and Ar gas and N2gas as sputter gases. One Si target and one Mo target were attached to a magnetron sputtering apparatus, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. 
First, a Si layer was formed at a setting thickness of 3.5 nm by introducing only Ar gas into a sputter chamber, and discharging only the Si target. Next, a SiN layer was formed at a setting thickness of 0.5 nm by introducing Ar gas and N2gas into the sputter chamber, and discharging only the Si target. Next, a Mo layer was formed at a setting thickness of 3.0 nm by introducing only Ar gas into the sputter chamber, and discharging only the Mo target. A cycle of the formations of these three layers, the Si layer, the SiN layer and the Mo layer, as one cycle, was repeated for a total of 40 cycles. Then, a SiN layer was formed at a setting thickness of 0.5 nm by introducing Ar gas and N2gas into the sputter chamber, and discharging only the Si target. Next, a Si layer was formed at a setting thickness of 2.3 nm by introducing only Ar gas into the sputter chamber, and discharging only the Si target. Next, a SiN layer was formed at a setting thickness of 0.5 nm by introducing Ar gas and N2gas into the sputter chamber, and discharging only the Si target. Next, a Mo layer was formed at a setting thickness of 0.5 nm by introducing only Ar gas into the sputter chamber, and discharging only the Mo target. The protection layer was formed by magnetron sputtering using a Ru target and a Nb target as targets, and Ar gas and O2gas as sputter gases. One Ru target and one Nb target were attached to another magnetron sputtering apparatus which differs from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. The substrate on which the Si/Mo laminated portion had been formed was transferred under vacuum from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion to the other magnetron sputtering apparatus used for forming the protection layer via a transfer chamber that communicates with both sputter chambers. 
First, a Ru layer as a lower layer was formed at a setting thickness of 1.0 nm by introducing only Ar gas into the sputter chamber, and discharging only the Ru target. Next, a RuNbO mixed layer as an upper layer was formed at a setting thickness of 1.5 nm by introducing Ar gas and O2gas into the sputter chamber, and discharging simultaneously the Ru target and the Nb target, obtaining the protection layer consisting of two layers. At this time, the power applied to the Nb target was gradually increased to form a graded composition in which the Nb content continuously increases toward the side remote from the substrate in the thickness direction. When the cross section of the obtained multilayer reflection film was observed by a transmission electron microscope (TEM), from the surface side (the side remote from the substrate), a RuNbO mixed layer having a thickness of 1.5 nm, a Ru layer having a thickness of 1.0 nm, a Mo layer having a thickness of 0.5 nm, a SiN layer having a thickness of 0.5 nm, a Si layer having a thickness of 2.3 nm, a SiN layer having a thickness of 0.5 nm, a Mo layer having a thickness of 3.0 nm, and a SiN layer having a thickness of 0.5 nm were observed in this order at the upper portion of the multilayer reflection film. Further, the multilayer reflection film obtained by the same method was heat-treated at 200° C. for 10 minutes in an air atmosphere. When the cross section of the multilayer reflection film was observed in the same manner, the observed multilayer reflection film was similar to that not heat-treated. Further, a peak reflectance of EUV light in the wavelength range of 13.4 to 13.6 nm at an incident angle of 6° was measured before and after the heat treatment. The results were 67% before the heat treatment and 65% after the heat treatment, satisfying a high reflectance of not less than 65%. 
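The reflectance results of Examples 1 to 3 can be checked against the not-less-than-65% criterion with a small sketch. The helper function is hypothetical; the numbers are the measured values reported above:

```python
# Hypothetical summary check of the peak-reflectance criterion
# (not less than 65% for EUV light of 13.4 to 13.6 nm at a 6° incident
# angle) using the measured values reported in Examples 1 to 3.

MEASUREMENTS = {  # (before heat treatment %, after heat treatment %)
    "Example 1": (67, 65),
    "Example 2": (67, 65),
    "Example 3": (67, 65),
}

def meets_criterion(before_pct, after_pct, minimum_pct=65):
    """True if the peak reflectance stays at or above the minimum
    both before and after the 200 °C / 10 min heat treatment."""
    return before_pct >= minimum_pct and after_pct >= minimum_pct

for name, (before, after) in MEASUREMENTS.items():
    print(name, "passes" if meets_criterion(before, after) else "fails")
```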
Comparative Example 1 A multilayer reflection film consisting of a Si/Mo laminated portion and a protection layer containing Ru was formed on a substrate. The substrate was composed of a low thermal expansion material having a coefficient of thermal expansion within the range of ±5.0×10−9/° C., a surface roughness of the main surface of not more than 0.1 nm in RMS value, and a flatness of the main surface of 100 nm in TIR value. The Si/Mo laminated portion was formed by magnetron sputtering using a Si target and a Mo target as targets, and Ar gas as a sputter gas. One Si target and one Mo target were attached to a magnetron sputtering apparatus, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. First, a Si layer was formed at a setting thickness of 4.0 nm by introducing Ar gas into a sputter chamber, and discharging only the Si target. Next, a Mo layer was formed at a setting thickness of 3.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Mo target. A cycle of the formations of these two layers, the Si layer and the Mo layer, as one cycle, was repeated for a total of 40 cycles. Then, a Si layer was formed at a setting thickness of 3.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Si target. The protection layer was formed by magnetron sputtering using a Ru target and a Nb target as targets, and Ar gas as a sputter gas. One Ru target and one Nb target were attached to another magnetron sputtering apparatus which differs from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. 
The substrate on which the Si/Mo laminated portion had been formed was transferred under vacuum from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion to the other magnetron sputtering apparatus used for forming the protection layer via a transfer chamber that communicates with both sputter chambers. A RuNb mixed layer was formed at a setting thickness of 3.5 nm by introducing Ar gas into the sputter chamber, and simultaneously discharging the Ru target and the Nb target, obtaining the protection layer. When the cross section of the obtained multilayer reflection film was observed by a transmission electron microscope (TEM), from the surface side (the side remote from the substrate), a RuNb mixed layer having a thickness of 2.5 nm, an interdiffusion layer having a thickness of 1.5 nm in which Si and Ru are mixed, a Si layer having a thickness of 2.5 nm, an interdiffusion layer having a thickness of 0.5 nm in which Si and Mo are mixed, a Mo layer having a thickness of 2.5 nm, and an interdiffusion layer having a thickness of 1.5 nm in which Si and Mo are mixed were observed in this order at the upper portion of the multilayer reflection film. Further, the multilayer reflection film obtained by the same method was heat-treated at 200° C. for 10 minutes in air atmosphere. When the cross section of the multilayer reflection film was observed in the same manner, from the surface side (the side remote from the substrate), a RuNb mixed layer having a thickness of 2 nm, a SiO layer having a thickness of 2 nm, a RuNbSi mixed layer having a thickness of 2 nm, a Si layer having a thickness of 1.0 nm, an interdiffusion layer having a thickness of 0.5 nm in which Si and Mo are mixed, a Mo layer having a thickness of 2.0 nm, and an interdiffusion layer having a thickness of 2.0 nm in which Si and Mo are mixed were observed in this order at the upper portion of the multilayer reflection film.
In this case, a SiO layer was formed, generated by a reaction of the Si layer with oxygen in the air that passed through the RuNb mixed layer. Further, the peak reflectance of EUV light in the wavelength range of 13.4 to 13.6 nm at an incident angle of 6° was measured before and after the heat treatment. The results were 64% before the heat treatment and 60% after the heat treatment, each being a low reflectance of less than 65%.

Comparative Example 2

A multilayer reflection film consisting of a Si/Mo laminated portion and a protection layer containing Ru was formed on a substrate. The substrate was composed of a low thermal expansion material having a coefficient of thermal expansion within the range of ±5.0×10⁻⁹/° C., a surface roughness of the main surface of not more than 0.1 nm in RMS value, and a flatness of the main surface of 100 nm in TIR value. The Si/Mo laminated portion was formed by magnetron sputtering using a Si target and a Mo target as targets, and Ar gas as a sputter gas. One Si target and one Mo target were attached to a magnetron sputtering apparatus, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. First, a Si layer was formed at a setting thickness of 4.0 nm by introducing Ar gas into a sputter chamber, and discharging only the Si target. Next, a Mo layer was formed at a setting thickness of 3.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Mo target. The formation of these two layers, the Si layer and the Mo layer, was taken as one cycle and repeated for a total of 40 cycles. Then, a Si layer was formed at a setting thickness of 4.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Si target. Next, a Mo layer was formed at a setting thickness of 1.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Mo target.
The protection layer was formed by magnetron sputtering using a Ru target and a Nb target as targets, and Ar gas as a sputter gas. One Ru target and one Nb target were attached to another magnetron sputtering apparatus which differs from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. The substrate on which the Si/Mo laminated portion had been formed was transferred under vacuum from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion to the other magnetron sputtering apparatus used for forming the protection layer via a transfer chamber that communicates with both sputter chambers. A RuNb mixed layer was formed at a setting thickness of 2.0 nm by introducing Ar gas into the sputter chamber, and simultaneously discharging the Ru target and the Nb target, obtaining the protection layer. When the cross section of the obtained multilayer reflection film was observed by a transmission electron microscope (TEM), from the surface side (the side remote from the substrate), a RuNb mixed layer having a thickness of 2.0 nm, a Mo layer having a thickness of 1.0 nm, an interdiffusion layer having a thickness of 1.5 nm in which Si and Mo are mixed, a Si layer having a thickness of 3.5 nm, an interdiffusion layer having a thickness of 0.5 nm in which Si and Mo are mixed, and a Mo layer having a thickness of 2.5 nm were observed in this order at the upper portion of the multilayer reflection film. However, no interdiffusion layer was formed between the RuNb mixed layer and the Mo layer. Further, the multilayer reflection film obtained by the same method was heat-treated at 200° C. for 10 minutes in air atmosphere.
When the cross section of the multilayer reflection film was observed in the same manner, from the surface side (the side remote from the substrate), a RuNb mixed layer having a thickness of 2 nm, a SiO layer having a thickness of 2 nm, an interdiffusion layer having a thickness of 1.0 nm in which Si and Mo are mixed, a Si layer having a thickness of 1.5 nm, an interdiffusion layer having a thickness of 0.5 nm in which Si and Mo are mixed, a Mo layer having a thickness of 2.5 nm, and an interdiffusion layer having a thickness of 1.5 nm in which Si and Mo are mixed were observed in this order at the upper portion of the multilayer reflection film. In this case, a SiO layer was formed, generated by a reaction of the Si layer with oxygen in the air that passed through the RuNb mixed layer. Further, the peak reflectance of EUV light in the wavelength range of 13.4 to 13.6 nm at an incident angle of 6° was measured before and after the heat treatment. The results were 64% before the heat treatment and 62% after the heat treatment, each being a low reflectance of less than 65%.

Comparative Example 3

A multilayer reflection film consisting of a Si/Mo laminated portion and a protection layer containing Ru was formed on a substrate. The substrate was composed of a low thermal expansion material having a coefficient of thermal expansion within the range of ±5.0×10⁻⁹/° C., a surface roughness of the main surface of not more than 0.1 nm in RMS value, and a flatness of the main surface of 100 nm in TIR value. The Si/Mo laminated portion was formed by magnetron sputtering using a Si target and a Mo target as targets, and Ar gas as a sputter gas. One Si target and one Mo target were attached to a magnetron sputtering apparatus, the substrate and each target were arranged in an offset arrangement, and the substrate was rotated. First, a Si layer was formed at a setting thickness of 4.0 nm by introducing Ar gas into a sputter chamber, and discharging only the Si target.
Next, a Mo layer was formed at a setting thickness of 3.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Mo target. The formation of these two layers, the Si layer and the Mo layer, was taken as one cycle and repeated for a total of 40 cycles. Then, a Si layer was formed at a setting thickness of 3.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Si target. Next, a Mo layer was formed at a setting thickness of 1.0 nm by introducing Ar gas into the sputter chamber, and discharging only the Mo target. The protection layer was formed by magnetron sputtering using a Ru target as a target, and Ar gas as a sputter gas. One Ru target was attached to another magnetron sputtering apparatus which differs from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion, the substrate was arranged facing the target, and the substrate was rotated. The substrate on which the Si/Mo laminated portion had been formed was transferred under vacuum from the magnetron sputtering apparatus used for forming the Si/Mo laminated portion to the other magnetron sputtering apparatus used for forming the protection layer via a transfer chamber that communicates with both sputter chambers. A Ru layer was formed at a setting thickness of 2.5 nm by introducing Ar gas into the sputter chamber, and discharging the Ru target, obtaining the protection layer.
When the cross section of the obtained multilayer reflection film was observed by a transmission electron microscope (TEM), from the surface side (the side remote from the substrate), a Ru layer having a thickness of 2.5 nm, an interdiffusion layer having a thickness of 1.2 nm in which Si and Mo are mixed, a Si layer having a thickness of 2.8 nm, an interdiffusion layer having a thickness of 0.5 nm in which Si and Mo are mixed, a Mo layer having a thickness of 2.7 nm, and an interdiffusion layer having a thickness of 1.3 nm in which Si and Mo are mixed were observed in this order at the upper portion of the multilayer reflection film. Further, the multilayer reflection film obtained by the same method was heat-treated at 200° C. for 10 minutes in air atmosphere. When the cross section of the multilayer reflection film was observed in the same manner, from the surface side (the side remote from the substrate), a Ru layer having a thickness of 2.5 nm, an interdiffusion layer having a thickness of 1.6 nm in which Si and Mo are mixed, a Si layer having a thickness of 2.6 nm, an interdiffusion layer having a thickness of 0.5 nm in which Si and Mo are mixed, a Mo layer having a thickness of 2.6 nm, and an interdiffusion layer having a thickness of 1.6 nm in which Si and Mo are mixed were observed in this order at the upper portion of the multilayer reflection film. Further, the peak reflectance of EUV light in the wavelength range of 13.4 to 13.6 nm at an incident angle of 6° was measured before and after the heat treatment. The results were 65% before the heat treatment and 62% after the heat treatment, and the reflectance after the heat treatment was a low reflectance of less than 65%. Japanese Patent Application No. 2020-151709 is incorporated herein by reference. Although some preferred embodiments have been described, many modifications and variations may be made thereto in light of the above teachings.
It is therefore to be understood that the invention may be practiced otherwise than as specifically described without departing from the scope of the appended claims.
11860530 | DETAILED DESCRIPTION

The present disclosure relates generally to reflective masks for IC device manufacturing and, more particularly, to a reflective mask with venting features to prevent reflective mask defects. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Moreover, the formation of a feature on, connected to, and/or coupled to another feature in the present disclosure that follows may include embodiments in which the features are formed in direct contact, and may also include embodiments in which additional features may be formed interposing the features, such that the features may not be in direct contact. In addition, spatially relative terms, for example, “lower,” “upper,” “horizontal,” “vertical,” “above,” “over,” “below,” “beneath,” “up,” “down,” “top,” “bottom,” etc., as well as derivatives thereof (e.g., “horizontally,” “downwardly,” “upwardly,” etc.) are used for ease of description of one feature's relationship to another feature.
The spatially relative terms are intended to cover different orientations of the device including the features. Still further, when a number or a range of numbers is described with “about,” “approximate,” and the like, the term is intended to encompass numbers that are within a reasonable range including the number described, such as within +/−10% of the number described or other values as understood by persons skilled in the art. For example, the term “about 5 nm” encompasses the dimension range from 4.5 nm to 5.5 nm. Reflective masks have been adopted for state-of-the-art photolithography radiation sources, such as an extreme ultraviolet (EUV) radiation source. A reflective mask for an EUV radiation source includes a mask substrate, a reflective layer over the mask substrate, and an EUV absorber layer over the reflective layer. The EUV absorber layer is patterned to selectively expose the underlying reflective layer such that incident EUV radiation is absorbed by the remaining EUV absorber layer but is reflected by the exposed reflective layer. A reflective mask may include a plurality of main feature areas, each of which includes EUV absorber patterns that will print on a photoresist-borne workpiece. In that regard, the plurality of main feature areas may also be referred to as printing feature areas. Each of the plurality of main feature areas may be defined within or surrounded by pattern-free areas, where the EUV absorber layer is not patterned. Adjacent main feature areas are spaced apart by pattern-free areas. In addition, pattern-free areas may be present between edges of the reflective mask and the main feature areas. The pattern-free areas between edges of the reflective mask and the main feature areas may be or include the so-called black border areas.
When an EUV reflective mask is protected by a pellicle on a pellicle frame, the black border area that is not covered by the pellicle frame may be situated between one or more main feature areas and the opening edge of the pellicle frame. In some instances, pattern-free areas of various sizes may also be present in the plurality of main feature areas. It has been observed that bubbles may develop in the EUV absorber layer in pattern-free areas but are rarely observed in the areas with dense patterns. Although the mechanism of bubble generation is still under investigation, evidence suggests that the bubbling may be caused by vaporization of compounds intentionally or unintentionally incorporated in the EUV absorber layer. Such compounds may include, for example, water. It has been theorized that when these compounds vaporize and cannot escape due to the continuous EUV absorber layer in the pattern-free areas, the vaporized compounds would cause bubbling under the EUV absorber layer. When more and more bubbles form, they may coalesce to become a larger bubble. Larger bubbles may further coalesce to form a pocket, leading to delamination of the EUV absorber layer. An increase in energy of the EUV radiation source may exacerbate bubbling and accelerate delamination. Therefore, bubbling may be a source of defects in an EUV reflective mask and may also shorten the lifetime of an EUV reflective mask. The present disclosure provides methods and mask designs to reduce defects in EUV reflective masks and improve their lifetime. Methods of the present disclosure may identify venting feature insertion areas in a mask design and insert venting features in the venting feature insertion areas. As their name suggests, the venting features are openings in the EUV absorber layer to provide outlets for vaporized compounds, thereby preventing bubbling and delamination.
Depending on their locations within or outside the main feature areas, the venting features may be non-printing (sub-resolution) or printing features. For example, if venting features are inserted into main feature areas, they may be non-printing features. However, if venting features are only inserted into pattern-free areas outside the main feature areas, they may be printing features or non-printing features. A schematic diagram of a lithography system 10 is illustrated in FIG. 1. The lithography system 10, which may also be generically referred to as a scanner, is operable to perform a lithographic exposure process. In the illustrated embodiments, the lithography system 10 is an extreme ultraviolet (EUV) lithography system designed to expose a workpiece using EUV radiation having a wavelength ranging between about 1 nm and about 100 nm. In some exemplary embodiments, the lithography system 10 includes a radiation source 12 that generates EUV radiation with a wavelength centered at about 13.5 nm. In one such embodiment, the radiation source 12 utilizes laser-produced plasma (LPP) to generate the EUV radiation by heating a medium such as droplets of tin into a high-temperature plasma using a laser. The lithography system 10 may also include an illuminator 14 that focuses and shapes the radiation produced by the radiation source 12. The illuminator 14 may include refractive optical components, including monolithic lenses and/or array lenses (e.g., zone plates), and may include reflective optical components, including monolithic mirrors and/or mirror arrays. The number of optical components shown in FIG. 1 has been reduced for clarity, although in actual embodiments, the illuminator 14 includes dozens or even hundreds of lenses and/or mirrors. The optical components are arranged and aligned to project radiation emitted by the radiation source 12 onto a mask 100 retained on a mask stage 16.
The optical components of the illuminator 14 may also shape the radiation along the light path in order to produce a particular illumination pattern upon the mask 100. After reflecting off the mask 100, the radiation is directed through a projection optics module 18, also referred to as a projection optics box (POB). Similar to the illuminator 14, the projection optics module 18 may include refractive optical components, including monolithic lenses and/or array lenses (e.g., zone plates), and may include reflective optical components, including monolithic mirrors and/or mirror arrays. The optical components of the projection optics module 18 are arranged and aligned to direct radiation reflecting off the mask 100 and to project it onto a workpiece 20, such as the illustrated semiconductor substrate or any other suitable workpiece, retained in a substrate stage 22. In addition to guiding the radiation, the optical components of the projection optics module 18 may also enlarge, narrow, focus, and/or otherwise shape the radiation along the light path. Radiation projected by the projection optics module 18 on the workpiece 20 causes changes in a photosensitive component of the target. In a common example, the workpiece 20 includes a semiconductor substrate with a photosensitive resist layer. Portions of the photosensitive resist layer that are exposed to the radiation undergo a chemical transition making them either more or less sensitive to a developing process. In an exemplary embodiment, after exposure, the photosensitive resist layer undergoes a post-exposure baking, developing, rinsing, and drying in order to complete the transition. Subsequent processing steps performed on the semiconductor substrate may use the pattern to selectively process portions of the substrate. The mask 100 may have a construction illustrated in FIG. 2. In some embodiments, the mask 100 includes a substrate 102 with a reflector (or a reflective layer) such as a multi-layer mirror (MLM) 104 disposed on the substrate 102.
In turn, an absorptive layer 108 is disposed on the MLM 104. The compositions of the substrate 102, the MLM 104, and the absorptive layer 108 are described in detail below. However, at a high level, regions of the mask 100 where the absorptive layer 108 is present absorb incident radiation, whereas regions of the mask 100 where the absorptive layer 108 is not present reflect incident radiation towards a target. The substrate 102 commonly includes a low thermal expansion material (LTEM). Exemplary low thermal expansion materials include quartz as well as LTEM glass, silicon, silicon carbide, silicon oxide, titanium oxide, Black Diamond® (a trademark of Applied Materials), and/or other low thermal expansion substances known in the art. To support the substrate 102, a chucking layer such as an electrostatic chucking layer or a mechanical chuck may be attached to the back side of the substrate 102. Exemplary electrostatic chucking layer materials include chromium nitride (CrN), chromium oxynitride (CrON), chromium (Cr), tantalum boron nitride (TaBN), and tantalum silicide (TaSi). The MLM 104 is disposed over the front side of the substrate 102. The MLM 104 is a typical example of a reflective structure that is well-suited to EUV radiation. Rather than a single reflective surface, an MLM includes a number of alternating material layers. Typical numbers of alternating pairs range from 20 to 80, although the MLM 104 may include any number of pairs. The number of layers, the layer thickness, and the layer materials are selected to provide the desired reflectivity based on the exposure radiation and its properties such as wavelength and/or angle of incidence. For example, layer thickness may be tailored to achieve maximum constructive interference of EUV radiation reflected at each interface of the film pairs while achieving a minimum absorption of extreme ultraviolet radiation by the MLM 104. Likewise, the materials used for each alternating pair may be selected based on their refractive index.
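The thickness tailoring for maximum constructive interference described above corresponds to the usual first-order Bragg-like condition for a multilayer mirror, 2·d·cos θ = m·λ, with the bilayer period d and the angle θ measured from the surface normal. The sketch below ignores refraction corrections inside the stack, so it is only an approximation, not the exact design rule used for any particular MLM.

```python
import math

# First-order constructive-interference (Bragg-like) condition for a
# multilayer mirror: 2 * d * cos(theta) = m * wavelength, theta from normal.
# Refraction corrections inside the stack are neglected in this sketch.

def bilayer_period(wavelength_nm, angle_deg, order=1):
    return order * wavelength_nm / (2.0 * math.cos(math.radians(angle_deg)))

d = bilayer_period(13.5, 6.0)  # 13.5 nm EUV at 6 degrees from normal
print(round(d, 2))  # ~6.79 nm, of the order of typical Mo-Si bilayer periods
```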
In an exemplary embodiment, the MLM 104 includes forty pairs of alternating molybdenum and silicon (Mo—Si) layers. In a further exemplary embodiment, the MLM 104 includes sixty pairs of alternating molybdenum and beryllium (Mo—Be) layers. A capping layer 106 (also known as a buffer layer) may be disposed over the MLM 104. In an embodiment, the capping layer 106 protects the MLM 104 during an etching and/or repair process. The capping layer 106 may include materials such as ruthenium (Ru), silicon dioxide (SiO2), and/or amorphous carbon. The absorptive layer 108 is disposed on the capping layer 106 and may include chromium (Cr), tantalum nitride (TaN), tantalum oxide (TaO), tantalum boron nitride (TaBN), titanium nitride (TiN), combinations thereof, and/or other suitable absorptive materials. In some embodiments, the absorptive layer 108 contains multiple layers of absorptive material, for example, layers of chromium and layers of tantalum nitride. The absorptive layer 108 may also include an anti-reflective coating (ARC). Suitable ARC materials include tantalum boron oxide (TaBO), chromium oxide (Cr2O3), silicon oxide (SiO2), silicon nitride (SiN), tantalum oxide (TaO5), tantalum oxynitride (TaON), and/or other suitable materials. The MLM 104, the capping layer 106, and the absorptive layer 108 may be disposed on the substrate 102 by various methods, including physical vapor deposition (PVD) processes such as evaporation and DC magnetron sputtering, a plating process such as electrode-less plating or electroplating, a chemical vapor deposition (CVD) process such as atmospheric pressure CVD (APCVD), low pressure CVD (LPCVD), plasma enhanced CVD (PECVD), or high density plasma CVD (HDP CVD), ion beam deposition, spin-on coating, and/or other methods known in the art. In an embodiment, the absorptive layer 108 is deposited by a sputtering deposition technique to achieve a controlled thickness and uniformity with relatively low defects and good adhesion.
The compositions and/or physical structures of one or more layers described above may be selected based upon reflectivity/absorption of the radiation to be used with the mask 100, the stress compatibility with adjacent layers, and/or other criteria known in the art. As described above, when the absorptive layer 108 extends continuously in a pattern-free area, vapor of compounds in the absorptive layer 108 or the capping layer 106 may not readily escape and may form bubbles under the absorptive layer 108. FIG. 3 is a flow chart of a method 200 for inserting venting features in the mask 100. Operations of the method 200 will be described below in conjunction with FIGS. 4, 5A, 5B, 6A, 6B, 7A, 7B, 7C, 8, and 9. It is understood that additional steps can be provided before, during, and after the method 200, and some of the steps described can be replaced or eliminated for other embodiments of the method 200. Referring to FIGS. 3 and 4, method 200 includes a block 202 where a mask design 1000 is received. In some embodiments, the mask design 1000 includes main feature areas 120, each of which includes patterns of printing features in the absorber layer. Although FIG. 4 illustrates six rectangular main feature areas 120, the present disclosure is not so limited. A mask design 1000 according to embodiments of the present disclosure may include more than six main feature areas, and each of the main feature areas may include a polygonal shape, such as a square or a rectangle. In addition, the main feature areas 120 may not be aligned along the X and Y directions as illustrated in FIG. 4. Each of the main feature areas 120 may be defined within and surrounded by pattern-free areas between adjacent main feature areas 120 or around the edge of the mask design 1000. As described above with respect to FIG. 2, each of the main feature areas 120 includes dense printing patterns in the absorber layer while the absorber layer in the pattern-free areas is free of patterns.
For ease of reference, pattern-free areas in the margin of the mask design 1000 may be referred to as black border areas 140 and pattern-free areas between adjacent main feature areas 120 may be referred to as divider areas 130. While not explicitly shown in FIG. 4, each of the main feature areas 120 may also include pattern-free areas. The black border areas 140 may be exposed in an opening of a pellicle frame 150 representatively shown in FIG. 4. With reference to the pellicle frame 150, the black border areas 140 are disposed between the main feature areas 120 and edges of the opening of the pellicle frame 150. Referring to FIGS. 3, 5A, and 5B, method 200 includes a block 204 where a venting feature insertion area 1100 in the mask design 1000 is determined. In some embodiments, a venting feature insertion area 1100 is determined with use of a template shape 1050 shown in FIGS. 5A and 5B. In some implementations, the template shape 1050 may be rectangular or square in shape. In these implementations, the template shape 1050 includes a first side D1 and a second side D2. The template shape 1050 is rectangular when the first side D1 is different from the second side D2. The template shape 1050 is square when the first side D1 is the same as the second side D2. In some instances, each of the first side D1 and the second side D2 is between about 3 μm and about 10 μm, such as between about 4 μm and about 6 μm. Based on the assumption that the probability of bubbling is proportional to the dimensions of the pattern-free area, the template shape 1050 represents the smallest reasonable area where bubbling is unlikely. Put differently, when a pattern-free area is smaller than the template shape 1050, vapor can be readily vented through neighboring patterns (i.e., openings) in a main feature area without a tendency to generate a bubble. Reference is now made to FIGS. 5A and 5B.
To determine a venting feature insertion area 1100, the template shape 1050 is repeatedly fitted, by use of a computer system, into a pattern-free area in the mask design without overlapping any printing feature 1200 in the main feature area 120 until no more template shapes 1050 can fit into the remainder of the pattern-free area. Because only complete template shapes 1050 are allowed to fit in a venting feature insertion area 1100, a venting feature insertion area 1100 includes an integer number of template shapes 1050. If a pattern-free area or a portion thereof can only accommodate a less-than-complete portion of the template shape 1050, that pattern-free area or portion thereof is excluded from the venting feature insertion area 1100. Operations at block 204 may or may not take into consideration a spacing S from the printing feature 1200. The spacing S represents the spacing between an edge of the printing feature 1200 and an edge of a venting feature to be inserted into the venting feature insertion area 1100 at block 206. In some instances, the spacing S is selected to prevent interference with the printing of the main feature area 120. In one embodiment represented in FIG. 5A, the spacing S from the edge of the printing feature 1200 is first determined and earmarked, and then the template shape 1050 is repeatedly and non-overlappingly fitted into the pattern-free area outside of the spacing S. In another embodiment represented in FIG. 5B, the template shape 1050 is repeatedly and non-overlappingly fitted into the pattern-free area without first identifying the spacing S from the edge of the printing feature 1200. After the venting feature insertion area 1100 is determined, the spacing S is then excluded from the venting feature insertion area 1100. In some embodiments, the spacing S is between about 1 μm and about 1.5 μm.
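The template-fitting operation at block 204 can be sketched as a greedy grid fit: only complete template shapes are kept, and any candidate cell overlapping a printing feature expanded by the spacing S is rejected (corresponding to the FIG. 5A variant, where S is earmarked first). The rectangular geometry and all helper names below are illustrative assumptions; a production flow would use polygon-capable EDA tooling rather than axis-aligned boxes.

```python
# Illustrative sketch of block 204: repeatedly fit the D1 x D2 template shape
# into a pattern-free area, skipping any cell that would overlap a printing
# feature expanded by the spacing S. Geometry is simplified to rectangles.

def overlaps(a, b):
    """Strict axis-aligned rectangle overlap; rectangles are (x0, y0, x1, y1) in um."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def fit_templates(area, features, d1, d2, spacing):
    x0, y0, x1, y1 = area
    keep_out = [(fx0 - spacing, fy0 - spacing, fx1 + spacing, fy1 + spacing)
                for fx0, fy0, fx1, fy1 in features]
    placed = []
    y = y0
    while y + d2 <= y1:            # only complete template shapes are allowed
        x = x0
        while x + d1 <= x1:
            cell = (x, y, x + d1, y + d2)
            if not any(overlaps(cell, k) for k in keep_out):
                placed.append(cell)
            x += d1
        y += d2
    return placed  # the union of these cells plays the role of area 1100

# 30 um x 20 um pattern-free area with one 8 um x 8 um printing feature
cells = fit_templates((0, 0, 30, 20), [(10, 6, 18, 14)], d1=5, d2=5, spacing=1)
print(len(cells))  # 18 complete cells fit around the keep-out region
```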
Referring to FIGS. 3, 6A, 6B, and 8, method 200 includes a block 206 where venting features 180 are inserted into the venting feature insertion area 1100 of the mask design 1000 to obtain a modified mask design 1000′. At block 206, a computer system may be used to insert venting features 180 into the venting feature insertion area 1100. Depending on how operations at block 204 are carried out, the venting features 180 may be inserted at block 206 to completely fill the venting feature insertion area 1100 as shown in FIG. 6A, or to partially fill the venting feature insertion area 1100, short of the area reserved for the spacing S from the edge of the printing feature 1200, as shown in FIG. 6B. The venting features 180 may be represented by various example repeating units shown in FIGS. 7A, 7B, and 7C. Referring to FIG. 7A, a first repeating unit 182 may include a rectangular array of elongated bars, each of which extends lengthwise along the X direction. Each of the elongated bars has a length L along the X direction and a width W along the Y direction, and the rectangular array of elongated bars has a Y-direction pitch P. In some implementations represented in FIG. 7A, the elongated bars are aligned end-to-end along both the X direction and the Y direction. The elongated bars in FIG. 7A are parallel to one another along the X direction. Referring to FIG. 7B, a second repeating unit 184 may include a rectangular array of elongated bars, each of which extends lengthwise along the Y direction. Each of the elongated bars in the first repeating unit 182 may be substantially similar to one in the second repeating unit 184 but is rotated by 90 degrees. That is, each elongated bar in the second repeating unit 184 has a length L along the Y direction and a width W along the X direction, and the elongated bars have an X-direction pitch P. Similar to the first repeating unit 182, the elongated bars in the second repeating unit 184 are aligned end-to-end along both the X direction and the Y direction.
Different from the elongated bars in FIG. 7A, the elongated bars in FIG. 7B are parallel to one another along the Y direction. A third repeating unit 186 shown in FIG. 7C includes a design different from those in FIGS. 7A and 7B. To prevent formation of a long-range gap G1 along the Y direction in FIG. 7A or a long-range gap G2 along the X direction in FIG. 7B, elongated bars in the third repeating unit 186 are aligned only along the Y direction but are misaligned along the X direction. Elongated bars in the third repeating unit 186 may have dimensions and a pitch similar to those in the second repeating unit 184. Like elongated bars in the second repeating unit 184, elongated bars in the third repeating unit 186 are also parallel to one another along the Y direction. The present disclosure contemplates both printing venting features 180 and sub-resolution (non-printing) venting features 180. As the names imply, printing venting features 180 reflect sufficient radiation (or radiation with a sufficient intensity) to exceed an exposure threshold and thereby cause a photoresist layer on the target to transition from one state to another, allowing a pattern to be developed. On the contrary, sub-resolution venting features 180 do not reflect sufficient radiation (or radiation with a sufficient intensity) to exceed an exposure threshold, and thereby do not cause a photoresist layer on the target to transition from one state to another. In some embodiments where the venting features 180 are sub-resolution venting features, the length L is between about 100 nm and about 2 μm; the width W is between about 4 nm and about 12 nm; and the pitch P is between about 20 nm and about 200 nm. In these embodiments, venting feature insertion areas 1100 are determined in the main feature areas, the divider areas and the black border areas, and venting features 180 are inserted in the venting feature insertion areas 1100. 
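The example sub-resolution ranges quoted above (L between about 100 nm and about 2 μm, W between about 4 nm and about 12 nm, P between about 20 nm and about 200 nm) can be captured in a simple dimension check. The function name and the nanometer units are illustrative assumptions; the "about" qualifiers in the text are treated here as hard bounds for simplicity.

```python
def within_subresolution_ranges(length_nm, width_nm, pitch_nm):
    """True if a venting-feature bar falls inside the example
    sub-resolution ranges from the text (all values in nanometers)."""
    return (100.0 <= length_nm <= 2000.0      # L: ~100 nm to ~2 um
            and 4.0 <= width_nm <= 12.0       # W: ~4 nm to ~12 nm
            and 20.0 <= pitch_nm <= 200.0)    # P: ~20 nm to ~200 nm
```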
In some alternative embodiments where the venting features 180 are printing features, no venting feature insertion areas 1100 are determined in the main feature areas, as the presence of printing venting features may affect device performance. In these alternative embodiments, the dimensions of the venting features 180 may be enlarged and the venting features 180 are only inserted into venting feature insertion areas in the divider areas 130 and the black border areas 140. After the venting features 180 are inserted into the venting feature insertion areas 1100 determined in the main feature areas 120, the divider areas 130, and the black border areas 140 of the mask design 1000, a modified mask design 1000′ as shown in FIG. 8 is obtained. It is noted that, for clarity and simplicity, venting features 180 that may be inserted into a venting feature insertion area 1100 (if any is determined at block 204) in the main feature area 120 are omitted from FIG. 8. In embodiments where the venting features 180 are printing features, they are not inserted into the main feature areas 120. Referring to FIGS. 3 and 9, method 200 includes a block 208 where a modified mask 100′ is fabricated based on the modified mask design 1000′. In some embodiments, the modified mask 100′ may be fabricated using deposition techniques and electron-beam (E-beam) writing. In some embodiments, a mask blank including a mask substrate, an MLM layer over the substrate, a capping layer over the MLM layer, and a blank absorber layer over the capping layer is received, and an E-beam writer is used to pattern the absorber layer. A fragmentary cross-sectional view of a venting feature insertion area 1100 of the modified mask 100′ is schematically illustrated in FIG. 9. The modified mask 100′ includes venting features 180 in the venting feature insertion area 1100. 
The venting feature insertion area1100may be in the main feature areas (such as the main feature areas120shown inFIG.4), the divider areas (such as the divider areas130shown inFIG.4), or the black border areas (such as the black border areas140shown inFIG.4). As a comparison, the mask100inFIG.2does not have any venting features180in pattern-free areas in the main feature areas, the divider areas, and the black border areas. In mask100inFIG.2, the divider areas (such as the divider areas130shown inFIG.4) and the black border areas (such as the black border areas140shown inFIG.4) include a continuous absorber layer that is free of any pattern. In other words, the divider areas and the black border areas in mask100ofFIG.2are free of openings that expose the underlying capping layer106or the MLM104. In some embodiments, the modified mask100′ may be part of a mask assembly that also includes a pellicle frame similar to the pellicle150shown inFIG.8. Methods of the present disclosure, such as method200inFIG.3, may be implemented at any point between generation of an IC design layout and the actual fabrication of the mask(s). Reference is now made toFIG.10, which illustrates a simplified block diagram of an integrated circuit (IC) manufacturing system300and associated IC manufacturing flow, which may benefit from various aspects of the present disclosure. The IC manufacturing system300includes a plurality of entities, such as a design house310, a mask house320, and an IC manufacturer340(i.e., an IC fab), that interact with one another in the design, development, and manufacturing cycles and/or services related to manufacturing an integrated circuit (IC) device350. The plurality of entities is connected by a communications network, which may be a single network or a variety of different networks, such as an intranet and the Internet, and may include wired and/or wireless communication channels. 
Each entity may interact with other entities and may provide services to and/or receive services from the other entities. One or more of the design house310, mask house320, and IC manufacturer340may have a common owner, and may even coexist in a common facility and use common resources. In various embodiments, the design house310, which may include one or more design teams, generates an IC design layout312. The IC design layout may include various geometrical patterns designed for the fabrication of the IC device350. By way of example, the geometrical patterns may correspond to patterns of metal, oxide, or semiconductor layers that make up the various components of the IC device350to be fabricated. The various layers combine to form various features of the IC device350. For example, various portions of the IC design layout may include features such as an active region, a gate electrode, source and drain regions, metal lines or vias of a metal interconnect, openings for bond pads, as well as other features known in the art which are to be formed within a semiconductor substrate (e.g., such as a silicon wafer) and various material layers disposed on the semiconductor substrate. In various examples, the design house310implements a design procedure to form the IC design layout. The design procedure may include logic design, physical design, and/or placement and routing. The IC design layout312may be presented in one or more data files having information related to the geometrical patterns which are to be used for fabrication of the IC device350. In some examples, the IC design layout312may be expressed in a GDSII file format or DFII file format. In some embodiments represented inFIG.10, venting features may be inserted into the IC design layout312at or by the design house310. Methods of the present disclosure, such as method200inFIG.3, may be performed and implemented as venting feature insertion360. 
In these embodiments, because the venting features are spaced apart from the main features (i.e., printing features) and do not interfere with the printing of the main features, the design house310may insert them after the IC design layout312is generated. In some embodiments, the design house310may transmit the IC design layout312to the mask house320, for example, via the network connection described above. The mask house320may then use the IC design layout312to manufacture one or more masks to be used for fabrication of the various layers of the IC device350according to the IC design layout312. In various examples, the mask house320performs mask data preparation322, where the IC design layout312is translated into a form that can be physically written by a mask writer (such as an E-beam writer), and mask fabrication330, where the design layout prepared by the mask data preparation is modified to comply with a particular mask writer and/or mask manufacturer and is then fabricated. In the example ofFIG.3, the mask data preparation322and mask fabrication330are illustrated as separate elements; however, in some embodiments, the mask data preparation322and mask fabrication330may be collectively referred to as mask preparation. The mask data preparation322may include various sub-operations. For example, mask data preparation322may include logic operation (LOP)324, optical proximity correction (OPC)326, and fracture328. In LOP324, feature dimensions in IC design layout312may be adjusted to corresponding photoresist features in photolithography processes. In OPC326, sub-resolution assist features (SRAF) and scattering bars may be inserted into the IC design layout312to enhance the exposure resolution. In fracture328, features in IC design layout312, sub-resolution assist features (SRAF) and scattering bars may be approximated by geometric shapes. 
The mask design1000described above with respect to the method200may correspond to the IC design layout312or modifications thereof after the LOP324, the OPC326, or fracture328. In some alternative embodiments represented inFIG.10, venting features may be inserted at mask data preparation322by the mask house320. In these alternative embodiments, the venting feature insertion360may be a sub-operation of mask data preparation322and may be performed along with LOP324, OPC326, or fracture328, between LOP324and OPC326, between OPC326and fracture328, or between fracture328and mask fabrication330. After mask data preparation322and during mask fabrication330, a mask or a group of masks may be fabricated based on the modified IC design layout (or modified mask design, such as the modified mask design1000′ inFIG.9). For example, an electron-beam (e-beam) writer or a mechanism of multiple e-beams is used to form a pattern on a mask (photomask or reticle) based on the modified mask design. The mask can be formed in various technologies. In some examples, the mask is formed using a phase shift technology. In a phase shift mask (PSM), various features in the pattern formed on the mask are configured to have a pre-configured phase difference to enhance image resolution and imaging quality. In various examples, the phase shift mask can be an attenuated PSM or alternating PSM. In some embodiments, the IC manufacturer340, such as a semiconductor foundry, uses the mask (or masks) fabricated by the mask house320to transfer one or more mask patterns onto a wafer and thus fabricate the IC device350on the wafer. The IC manufacturer340may include an IC fabrication facility that may include a myriad of manufacturing facilities for the fabrication of a variety of different IC products. 
For example, the IC manufacturer340may include a first manufacturing facility for front end fabrication of a plurality of IC products (i.e., front-end-of-line (FEOL) fabrication), while a second manufacturing facility may provide back end fabrication for the interconnection and packaging of the IC products (i.e., back-end-of-line (BEOL) fabrication), and a third manufacturing facility may provide other services for the foundry business (e.g., research and development). The present disclosure presents multiple embodiments and multiple advantages. It is understood that the attribution of an advantage to an embodiment is merely for clarity and understanding. Different embodiments can offer different advantages, and no particular advantage is required for any one embodiment. For example, methods of the present disclosure allow insertion of venting features into pattern-free areas in a reflective mask to prevent bubbling of a radiation absorber layer, thereby reducing defects and improving lifetime of the reflective mask. Thus, the present disclosure provides a photolithography mask with bubble-preventing venting features and a method for forming the mask. In one embodiment, a photolithographic mask assembly is provided. The photolithographic mask assembly includes a photolithographic mask. The photolithographic mask includes a capping layer over a substrate and an absorber layer disposed over the capping layer. The absorber layer includes a first main feature area, a second main feature area, and a first venting feature area disposed between the first main feature area and the second main feature area. The first venting feature area includes a plurality of venting features. In some embodiments, the capping layer includes ruthenium, silicon oxide, and/or amorphous carbon. In some implementations, the absorber layer includes Cr, TaN, TaO, TaBN, TiN, TaBO, Cr2O3, SiO2, or SiN. 
In some embodiments, the plurality of venting features includes a first plurality of elongated bars arranged in parallel. In some instances, dimensions of each of the first plurality of elongated bars are selected such that the first plurality of elongated members do not print in a photolithography process and the first plurality of elongated members do not affect printing of the first main feature area and the second main feature area in the photolithography process. In some embodiments, the photolithographic mask assembly may further include a pellicle frame disposed over the photolithographic mask. The pellicle frame includes an opening to expose a portion of the photolithographic mask and the photolithographic mask includes a black border area disposed between the first main feature area and the opening. In some implementations, the photolithographic mask assembly may further include a second venting feature area disposed within the black border area. In some embodiments, the second venting feature area includes a second plurality of elongated bars arranged in parallel. In another embodiment, a method is provided. The method includes receiving a photolithographic mask design that includes a first main feature area, a second main feature area, and a divider area between the first main feature area and the second main feature area, determining a venting feature insertion area within the first main feature area, the second main feature area, and the divider area, respectively, inserting a plurality of venting features in the venting feature insertion area of the photolithographic mask design to create a modified photolithographic mask design, and fabricating a photolithographic mask based on the modified photolithographic mask design. 
In some embodiments, the determining of the venting feature insertion area includes identifying a first pattern-free area in the first main feature area, the second main feature area, and the divider area as a portion of the venting feature insertion area if a template shape fits in the first pattern-free area, and excluding a second pattern-free area in the first main feature area, the second main feature area, and the divider area from the venting feature insertion area if the template shape does not fit in the second pattern-free area. In some implementations, the template shape is rectangular and includes a side having a length between about 4 μm and about 6 μm. In some instances, the method may further include, after the inserting of the plurality of venting features, performing optical proximity correction (OPC) to the modified photolithographic mask design. In some embodiments, the fabricating of the photolithographic mask includes receiving a mask substrate including a multi-layer mirror, a capping layer over the multi-layer mirror, and an extreme ultraviolet (EUV) absorber layer over the capping layer, and patterning the EUV absorber layer using an electron-beam writer. In some instances, the method may further include exposing the photolithographic mask to radiation, and using the radiation reflected from the first main feature area, the second main feature area, and the venting feature insertion area to expose a workpiece. In these instances, an intensity of the radiation reflected by the venting feature insertion area is maintained so as not to exceed an exposure threshold of a photoresist of the workpiece. In some embodiments, the exposing of the photolithographic mask to radiation includes exposing the photolithographic mask to extreme ultraviolet (EUV) radiation. In yet another embodiment, a method is provided. 
The method includes receiving a photolithographic mask design having a plurality of printing features and a plurality of pattern-free areas; identifying, in the photolithographic mask design, a venting feature insertion area when a template shape fits within the plurality of pattern-free areas; inserting a plurality of venting features in the venting feature insertion area of the photolithographic mask design to obtain a modified photolithographic mask design; and fabricating a photolithographic mask based on the modified photolithographic mask design. In some embodiments, the method may further include, after the inserting of the plurality of venting features and before the fabricating of the photolithographic mask, performing optical proximity correction (OPC) to the modified photolithographic mask design. In some implementations, the fabricating of the photolithographic mask includes receiving a mask substrate including a multi-layer mirror, a capping layer over the multi-layer mirror, and an extreme ultraviolet (EUV) absorber layer over the capping layer, and forming the plurality of venting features by completely removing the EUV absorber layer from at least a portion of the venting feature insertion area to expose the multi-layer mirror. In some instances, the EUV absorber layer includes Cr, TaN, TaO, TaBN, TiN, TaBO, Cr2O3, SiO2, or SiN. In some embodiments, the plurality of venting features are elongated in shape and disposed in parallel with one another. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. 
Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. | 39,877 |
11860531 | DETAILED DESCRIPTION Aspects of the present disclosure relate to skeleton representations of layouts for the development of lithographic masks. Traditionally, mask layouts are represented by polygons and, more specifically, rectilinear polygons. However, as lithography becomes more complex and challenging, mask layouts are also becoming more complex. The use of sub-resolution assist features and non-rectangular and even curved shapes makes the traditional representation based on rectilinear polygons less than ideal. Representing a curved shape by rectilinear polygons requires a large number of polygons and still represents the curved edge by a staircase approximation. Even if non-rectilinear polygons are allowed, a curved edge will still require a polygon with a large number of sides to approximate the curve. Many processes used in the development of masks, such as optical proximity correction (OPC), inverse lithography technology (ILT), and lithography verification, require the analysis or manipulation of these representations. For example, these mask development processes may introduce slight changes in the layouts in order to determine whether the changes may be beneficial. In the approach described below, layouts are represented by a skeleton representation, rather than by polygons. The layouts could be the layout of the mask itself, the layout of the resulting printed pattern on the wafer, a two-dimensional representation of the aerial image produced by the mask, or other types of two-dimensional images used for mask development. The layout includes a large number of shapes, which traditionally may be represented by rectilinear polygons. Here, at least some of the shapes are represented by a skeleton representation instead of or in addition to other types of representations. The skeleton representation of an individual shape includes the “skeleton”, which corresponds to the axis or centerline of the shape. 
The elements of the skeleton include two or more nodes connected by edges. The nodes are the "joints" of the skeleton, and the edges are the "bones" of the skeleton. The skeleton representation also includes size parameters for at least some of these elements. These size parameters provide information about the size of the shape at different locations along the skeleton. For example, size parameters for edge elements in the skeleton representation may be based on widths of the shape along those edges. Size parameters for nodes in a skeleton representation may be based on local radii of the shape at those nodes. The skeleton representation can be a more compact and more computationally efficient representation of the layout for certain mask development processes. For example, if it is desired to move the position of a shape or change the thickness of a shape, this may be achieved simply by modifying the corresponding elements (nodes and edges) and/or by changing various size parameters for those elements. As a result, the computation speed is increased and the computation resources required (memory and processor utilization) are reduced. FIG. 1 depicts a flowchart for a mask development process in accordance with some embodiments of the present disclosure. The techniques described in this disclosure may be applied to many types of mask development processes 120. This includes both mask synthesis processes (e.g., design a mask to achieve a desired result) and mask verification processes (e.g., confirm whether a mask design produces the result desired). It also includes both forward propagation processes (e.g., predict the aerial image produced by a mask) and backward propagation processes (e.g., from a desired aerial image, propagate backwards to determine a mask that will produce that aerial image). 
Examples of mask development processes120include optical proximity correction (OPC), inverse lithography technology (ILT), and lithography verification (including mask design rule checks). The input to the mask development process120is a layout110of various geometrical shapes. For the mask development process120, the layout110may be a layout of a mask used to fabricate an integrated circuit, although the skeleton representation is not limited to this example. In many cases, the shapes in the layout are represented as polygons, as shown in the polygon representation115ofFIG.1. The result of the mask development process120will generically be referred to as the “solution”190. The solution will depend on the mask development process120. Table 1 below lists some example applications. The “Layout” column is the input to the mask development process, and the “Solution” column is the output. The column “Skeleton Representation” indicates what quantities may be represented by skeleton representations. 
TABLE 1 - Some example mask development processes

Layout | Mask Development Process | Solution | Skeleton Representation
Desired printed pattern on wafer | Optical proximity correction (OPC) | Mask layout | Mask, or Printed pattern, or Inverse
Desired printed pattern on wafer | Inverse lithography technology (ILT) | Mask layout | Mask, or Printed pattern, or Inverse
Mask layout, Desired printed pattern on wafer | Lithography verification | Aerial image, Simulated printed pattern on wafer, Printing quality metrics, Mask quality metrics | Mask, or Printed pattern, or Inverse
Desired printed pattern on wafer | Sub-resolution assist features (SRAF) | Added assist features | Assist features, or Space between assist features, or Printed pattern, or Inverse
Desired printed pattern on wafer | Design modification (retargeting) | Modified design layout | Design to be modified, or Spacing between design polygons being modified

In one approach, the mask development process 120 is applied to the shapes in the layout by using a skeleton representation 125 for at least some of the shapes, as shown in the right-hand side of FIG. 1. The layout 110 includes a large number of disjoint shapes. A skeleton representation is determined 122 for at least some of the shapes. The skeleton representation of an individual shape includes elements of two or more nodes connected by edges. The skeleton representation also includes size parameters for at least some of the elements. The mask development process is then applied 124 using the skeleton representation of the shapes. FIGS. 2A-2C depict examples of polygon representations and the corresponding skeleton representations of individual shapes in the layout. In each figure, the left side shows the polygon representation with just the skeleton superimposed in dashed lines, and the right side shows the skeleton representation with both the skeleton and the edges defined by the size parameters. The polygon representation 210 is a piecewise linear approximation of the border of the shape. 
In these examples, the polygon representation210is based on border segments at 45 degree angles. The skeleton representation250includes a piecewise connected skeleton252of nodes and edges. In the figures, the nodes are circles and the edges are thicker lines connecting the circles. The skeleton252is a representation of the two dimensional mask polygon210but in a “lower” dimension. It may be based on a morphological erosion of the shape210. The skeleton representation250may have additional attributes. InFIG.2A, the nodes and edges are associated with size parameters. Nodes N are characterized by a local radius R, and edges E are characterized by a width W. These size parameters represent the local radius R of the curve defined by a node, and the width W of the shape defined by an edge. Node N1is a terminal node (i.e., connected to only one other node), so the shape has a tip with radius R=R0. Edges E1, E2, E3all have width W=W0=R0. Node N4is also a terminal node with radius R=R0. Nodes N2and N3are interior nodes (connected to two or more other nodes). Their radii R=R0define the shape of the local curved shape centered on those nodes. These attributes can be used to reconstruct the original shape210from the base skeleton252through a dilation or sizing procedure. The sizing procedure uses the size parameters defined at the nodes/edges to size the corresponding segment of the base shape210appropriately. Reducing the mask representation to a lower dimension allows simpler algorithms to be implemented for various geometry manipulations, and there are many features on the mask, such as assist features, which have high aspect ratios and are well represented by a non-branching skeleton, such as shown inFIG.2A. The skeleton representation may include additional attributes to represent more complicated shapes, including those with branching skeletons. InFIG.2B, the axis of the shape has multiple branches in an “H” shape, so the skeleton252is branching. 
It includes nodes N1-N6and edges E1-E5. In addition, the tips of the shape are rounded corners rather than circular arcs. To capture this, the terminal node N1is characterized by two size parameters: R and r. R defines the width of the terminal shape, and r defines the local radius of the corners. So r=0 specifies a squared tip, and r=R specifies a circular tip as shown inFIG.2A. InFIG.2B, 0<r<R and the tip has rounded corners. In addition, nodes N2and N5may have negative values of r, to specify the shape of the interior corners along the crossbar of the “H”. Other parameters may also be used to provide more degrees of freedom. For example, tips and other curves corresponding to nodes may be elliptical, polygonal or other shapes. The branches corresponding to edges may vary in width: linearly increasing or decreasing, varying periodically, or otherwise. Different parameters may be used to capture these and other variations. FIG.2Cshows another example where the skeleton252has six nodes N1-N6, but nodes N1-N3are connected as a polygon, as are nodes N4-N6. In some cases, shapes may have interior holes. Nodes and edges may also have attributes that capture information other than the original shape. One example is association from a node or edge to a particular polygon feature. For example, a particular skeleton node N or edge E can be associated with a particular polygon segment(s). This may be used in OPC solvers where the node N and its parameters, such as radius, are adjusted until the simulated mask (which is defined by N and other skeleton nodes) prints an image which is on target at the polygon segment or a point on the segment. Other examples include constraints on displacement of the node/edge, constraints on the size of the node/edge, and other constraints on the segment or the shape associated with the node/edge. Different techniques may be used to generate the skeleton representation from a polygon or other representation. One example is shown inFIGS.3A-3B. 
In this example, the polygon representation 310 of a shape is first converted to a Voronoi diagram 320, as shown in FIG. 3A. The Voronoi diagram is the medial axis of the polygon. As shown in FIG. 3B, the Voronoi diagram is then trimmed to create the skeleton 330 of nodes and edges. Trimming may be based on functions of the distance between points on the polygon border which define a particular point on the Voronoi diagram. Alternatively, the trimming may be based on proximity of the medial axis to the border. The additional attributes, such as widths and radii and which polygon segments correspond to which skeleton elements, can then be attached to the skeleton. In the reverse direction, the skeleton representation may be converted to the full shape, such as a polygon representation, using different techniques. One approach is shown in FIGS. 3C-3D. FIG. 3C shows the skeleton 340. The nodes and edges of the skeleton are expanded according to their radii and widths, as indicated by the light arrows. Alternatively, polygons may be created for the nodes and edges and then merged, with the resulting polygon 350 shown in FIG. 3D. In some cases, the skeleton may be simplified prior to creating polygons. There are many use cases where the skeleton representation is more efficient than other representations. In one class of use cases, it is desirable to shift a shape or part of a shape in a direction that is perpendicular to the axis of the shape. The skeleton representation naturally allows for this. An example of this use case is the use of perturbation tables for OPC. Perturbation tables allow a tool to quickly compute changes in the solution produced by a lithography simulation from a mask M when perturbations to the border of M are applied (i.e., when the borders of shapes are moved). For example, a perturbation table may tabulate the change in the wafer intensity for a given mask perturbation. The use of perturbation tables with skeletons is described below. 
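The skeleton-to-shape expansion of FIGS. 3C-3D can be sketched as a point-membership test against the dilated skeleton: a point lies in the reconstructed shape if it is within the local half-width of some edge, with circular tips at terminal nodes falling out of the distance test automatically. This is a minimal sketch assuming a uniform half-width for all edges (matching the FIG. 2A case, where W = R0); the function names and the node/edge containers are invented for illustration.

```python
import math

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return math.hypot(px - ax, py - ay)
    # Parameter of the closest point on the supporting line, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def inside_dilated_skeleton(p, nodes, edges, halfwidth):
    """Dilate every skeleton edge by `halfwidth`: p belongs to the
    reconstructed shape if it is within halfwidth of any edge."""
    return any(dist_point_segment(p, nodes[i], nodes[j]) <= halfwidth
               for i, j in edges)
```

Merging per-edge polygons, as the text suggests, gives the same region; the distance test simply avoids constructing the boundary explicitly.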
The general mask synthesis problem can be framed as an optimization of a cost function C(L) which takes as input the lithographic signal L (e.g., a mask layout) and returns a cost value C. The gradient of C with respect to different perturbations P may be computed. These gradients dC/dP may be used to optimize the layout through a gradient-based optimization scheme. FIG. 4A depicts perturbation of an end 490 of a shape. In the skeleton representation 450, the end is represented by nodes N1, N2 and edge E1. Nodes N1, N2 have radius R = R0 and edge E1 has width W = W0 = R0. If the polygon representation 410 is used, then the tip is represented by seven border segments 421-427 that approximate the rounded tip. This is not even that good an approximation. More accurate approximations would use even more border segments. Perturbing the tip would require moving all of the affected border segments together. As each individual border segment is perturbed, the resulting change in the cost function is measured and a lithography cost gradient dC/dP may be computed for a given perturbation P. However, if the skeleton representation for the shape is used, the gradient dC/dP may be computed with respect to the geometric parameters for the skeleton, which may be simpler and more computationally efficient. For example, assume that the perturbation of interest is increasing the size of the end 490. For the skeleton representation, this is equivalent to increasing the size parameter W = R, with some corresponding movement of the location of node N2. The effect of the perturbation is given by the gradient dC/dW. In the polygon representation, the seven border segments 421-427 are moved together and the total gradient is the sum of the individual gradients with respect to each border segment. Now consider another example as shown in FIG. 4B. In this example, node N1 is the skeleton element to be optimized. In particular, consider the optimization of node N1's position along a particular direction v. 
First assume v=t(N1), the tangent to the skeleton edge E1 which connects to node N1. This considers stretching and shrinking the shape along its axis. If the formulation is available, the gradient dC/dNT may be calculated and used for optimization. Alternatively, the gradient may be constructed from the collective derivative of the polygon border segments associated with node N1. Call this set D={Di}. From the construction of the polygon from the skeleton, the set D includes border segments 423-425. Each of those segments Di will have its own magnitude of cost change dCi, and the normals {ni} and lengths {Li} of those segments may be used to project the cost change onto t(N1) according to Eqn. (1).

Tangential stretch derivative = dC/dNT = Σi dCi * (ni · t(N1)) * Li   (1)

where the summation is over the polygon segments i (segments 423-425) and · is the dot product. Now consider a change in the radius R of the node N1. To compute the derivative with respect to the change in the radius at node N1 (dC/dR), the system can use the sum of dCi projected onto the outward normal of the corresponding polygon edges 423-425. Now consider a lateral shift of the shape, which is a movement of the edge E1 in a direction perpendicular to its direction. To compute the derivative with respect to lateral shift of the shape, the system can find the derivative with respect to moving edge E1 in a direction orthogonal to the edge (dC/dE). If the polygon representation is used to compute this, the segments D are projected onto the normal vector n(E1) of edge E1. This is given by Eqn. (2).

Shift derivative = dC/dE = Σi dCi * (ni · n(E1)) * Li   (2)

where the summation is over the affected polygon segments i. Note that the border segments D associated with E1 in Eqn. (2) are a different set than those associated with node N1 in Eqn. (1). Other variations of combinations of these derivatives can be computed so as to allow for other skeleton modifications that may be useful depending on the application needs.
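Both Eqn. (1) and Eqn. (2) are the same projection with a different direction vector — t(N1) for tangential stretch, n(E1) for lateral shift — so they can be sketched with one helper. The segment values below are hypothetical illustrations, not data from the patent.

```python
def projected_derivative(segments, v):
    """Project per-segment cost changes onto direction v, per Eqns. (1)-(2):
    sum over segments i of dCi * (ni . v) * Li.
    segments: iterable of (dCi, (nx, ny), Li), with (nx, ny) a unit normal."""
    vx, vy = v
    return sum(dCi * (nx * vx + ny * vy) * Li
               for dCi, (nx, ny), Li in segments)

# Hypothetical border segments D = {Di}: (cost change dCi, normal ni, length Li).
segs = [(0.5, (1.0, 0.0), 2.0),
        (0.3, (0.0, 1.0), 1.0),
        (0.2, (-1.0, 0.0), 1.0)]

# Eqn. (1) with v = t(N1); Eqn. (2) would pass v = n(E1) over E1's segments.
dC_dNT = projected_derivative(segs, (1.0, 0.0))
```

The radius derivative dC/dR follows the same pattern with each dCi projected onto its own outward normal, which reduces to sum(dCi * Li) for this set.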
Once these gradients are computed, a gradient-based optimization scheme may be used to move the skeleton elements or modify their parameters, thus optimizing the mask to achieve an improved cost function C. The approach described above for OPC may also be used for ILT and other mask development processes. In ILT, the optimization of the skeleton elements uses an ILT cost function and corresponding gradient field, which may be computed as described above except that the derivative values dCi are computed from an ILT cost function gradient field which can be sampled at the polygon edges. The subsequent movement by tangential node shifting, modifying the radius, or edge shifting will be similar to that described above. This is because the ILT gradient is computed with respect to a level set function whose outward normal has magnitude=1 at the polygon edges. Similar to lithography cost based optimization steps, MRC (mask rule checking) enforcement can also be done by computing the appropriate corrections on the mask polygon edges, and then applying them as described above to modify the skeleton element locations and/or parameters. Thus, standard MRC checks that would be used for polygons can be applied, and then the violation information can be used to guide the skeleton geometric operations to resolve MRC violations. In FIG. 5, border segments 512 and 514 of the polygon representation 510 are the two segments that are closest to each other. They are too close and create a mask rule violation. A polygon-based MRC check identifies these two border segments and the rule violation. Segment 512 is associated with node N1 of the top skeleton, and segment 514 is associated with edge E1 of the bottom skeleton. These skeleton elements 550 may then be modified in a manner that reduces or eliminates the MRC violation. Another way to compute an estimated MRC result is to compute the distances between adjacent skeletons and skeleton elements. FIG. 6 shows two separate skeletons 650 and 655.
The skeleton 655 is for a shape that doubles back on itself, for example a U-shape. Note that these are the skeletons themselves and not the shapes. The border of the shapes is omitted for clarity. The double-sided arrows show the locations of possible MRC violations. Given the distance between the two skeletons at these locations and their corresponding size parameters, MRC spacing violations may be assessed. This detection can be done by using edge-to-edge search algorithms. Alternatively, it may be done by creating additional “MRC” skeletons 670, 675 of the mask skeletons 650, 655. In this figure, the MRC skeletons are shown as dashed lines, and these skeletons contain information about which mask skeleton edges are closest to them and how far away they are. This information can be used to compute MRC violation locations. This is a useful technique, as the skeleton creation naturally determines which edges of skeleton 655 can be compared against each other to determine MRC spacing violations. FIG. 7 illustrates an example set of processes 700 used during the design, verification, and fabrication of an article of manufacture such as an integrated circuit to transform and verify design data and instructions that represent the integrated circuit. Each of these processes can be structured and enabled as multiple modules or operations. The term ‘EDA’ signifies the term ‘Electronic Design Automation.’ These processes start with the creation of a product idea 710 with information supplied by a designer, information which is transformed to create an article of manufacture that uses a set of EDA processes 712. When the design is finalized, the design is taped-out 734, which is when artwork (e.g., geometric patterns) for the integrated circuit is sent to a fabrication facility to manufacture the mask set, which is then used to manufacture the integrated circuit.
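The skeleton-distance MRC spacing estimate described above can be sketched as follows. This is a simplified illustration under stated assumptions — two skeleton edges are sampled at one pair of nearby points each, and the local widths stand in for the size parameters; a real checker would search over all edge pairs identified by the MRC skeletons.

```python
import math

def mrc_spacing(p1, w1, p2, w2):
    """Approximate border-to-border spacing between two skeleton edges sampled
    at nearby points p1, p2 with local widths w1, w2: the centerline distance
    minus the two half-widths of the expanded shapes."""
    d = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return d - (w1 + w2) / 2.0

def mrc_violation(p1, w1, p2, w2, min_space):
    """Flag an MRC spacing violation between the two expanded shapes."""
    return mrc_spacing(p1, w1, p2, w2) < min_space
```

For example, two edges whose centerlines are 5 units apart with widths of 2 each leave 3 units of border-to-border space, which violates a hypothetical 4-unit spacing rule but satisfies a 2-unit one.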
After tape-out, a semiconductor die is fabricated 736 and packaging and assembly processes 738 are performed to produce the finished integrated circuit 740. Specifications for a circuit or electronic structure may range from low-level transistor material layouts to high-level description languages. A high level of abstraction may be used to design circuits and systems, using a hardware description language (‘HDL’) such as VHDL, Verilog, SystemVerilog, SystemC, MyHDL or OpenVera. The HDL description can be transformed to a logic-level register transfer level (‘RTL’) description, a gate-level description, a layout-level description, or a mask-level description. Each lower abstraction level that is a less abstract description adds more useful detail into the design description, for example, more details for the modules that include the description. The lower levels of abstraction that are less abstract descriptions can be generated by a computer, derived from a design library, or created by another design automation process. An example of a specification language at a lower level of abstraction for specifying more detailed descriptions is SPICE, which is used for detailed descriptions of circuits with many analog components. Descriptions at each level of abstraction are enabled for use by the corresponding tools of that layer (e.g., a formal verification tool). A design process may use a sequence depicted in FIG. 7. The processes described may be enabled by EDA products (or tools). During system design 714, functionality of an integrated circuit to be manufactured is specified. The design may be optimized for desired characteristics such as power consumption, performance, area (physical and/or lines of code), and reduction of costs, etc. Partitioning of the design into different types of modules or components can occur at this stage.
During logic design and functional verification 716, modules or components in the circuit are specified in one or more description languages and the specification is checked for functional accuracy. For example, the components of the circuit may be verified to generate outputs that match the requirements of the specification of the circuit or system being designed. Functional verification may use simulators and other programs such as testbench generators, static HDL checkers, and formal verifiers. In some embodiments, special systems of components referred to as ‘emulators’ or ‘prototyping systems’ are used to speed up the functional verification. During synthesis and design for test 718, HDL code is transformed to a netlist. In some embodiments, a netlist may be a graph structure where edges of the graph structure represent components of a circuit and where the nodes of the graph structure represent how the components are interconnected. Both the HDL code and the netlist are hierarchical articles of manufacture that can be used by an EDA product to verify that the integrated circuit, when manufactured, performs according to the specified design. The netlist can be optimized for a target semiconductor manufacturing technology. Additionally, the finished integrated circuit may be tested to verify that the integrated circuit satisfies the requirements of the specification. During netlist verification 720, the netlist is checked for compliance with timing constraints and for correspondence with the HDL code. During design planning 722, an overall floor plan for the integrated circuit is constructed and analyzed for timing and top-level routing. During layout or physical implementation 724, physical placement (positioning of circuit components such as transistors or capacitors) and routing (connection of the circuit components by multiple conductors) occurs, and the selection of cells from a library to enable specific logic functions can be performed.
As used herein, the term ‘cell’ may specify a set of transistors, other components, and interconnections that provides a Boolean logic function (e.g., AND, OR, NOT, XOR) or a storage function (such as a flipflop or latch). As used herein, a circuit ‘block’ may refer to two or more cells. Both a cell and a circuit block can be referred to as a module or component and are enabled as both physical structures and in simulations. Parameters are specified for selected cells (based on ‘standard cells’) such as size and made accessible in a database for use by EDA products. During analysis and extraction 726, the circuit function is verified at the layout level, which permits refinement of the layout design. During physical verification 728, the layout design is checked to ensure that manufacturing constraints are correct, such as DRC constraints, electrical constraints, lithographic constraints, and that circuitry function matches the HDL design specification. During resolution enhancement 730, the geometry of the layout is transformed to improve how the circuit design is manufactured. During tape-out, data is created to be used (after lithographic enhancements are applied if appropriate) for production of lithography masks. During mask data preparation 732, the ‘tape-out’ data is used to produce lithography masks that are used to produce finished integrated circuits. A storage subsystem of a computer system (such as computer system 800 of FIG. 8) may be used to store the programs and data structures that are used by some or all of the EDA products described herein, and products used for development of cells for the library and for physical and logical design that use the library. FIG. 8 illustrates an example machine of a computer system 800 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system 800 includes a processing device 802, a main memory 804 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 806 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 818, which communicate with each other via a bus 830. Processing device 802 represents one or more processors such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets.
Processing device 802 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 802 may be configured to execute instructions 826 for performing the operations and steps described herein. The computer system 800 may further include a network interface device 808 to communicate over the network 820. The computer system 800 also may include a video display unit 810 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 812 (e.g., a keyboard), a cursor control device 814 (e.g., a mouse), a graphics processing unit 822, a signal generation device 816 (e.g., a speaker), a video processing unit 828, and an audio processing unit 832. The data storage device 818 may include a machine-readable storage medium 824 (also known as a non-transitory computer-readable medium) on which is stored one or more sets of instructions 826 or software embodying any one or more of the methodologies or functions described herein. The instructions 826 may also reside, completely or at least partially, within the main memory 804 and/or within the processing device 802 during execution thereof by the computer system 800, the main memory 804 and the processing device 802 also constituting machine-readable storage media. In some implementations, the instructions 826 include instructions to implement functionality corresponding to the present disclosure. While the machine-readable storage medium 824 is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine and the processing device 802 to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm may be a sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Such quantities may take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. Such signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the present disclosure, it is appreciated that throughout the description, certain terms refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various other systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.
In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. Where the disclosure refers to some elements in the singular tense, more than one element can be depicted in the figures and like elements are labeled with like numerals. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 34,522 |
11860532 | DETAILED DESCRIPTION The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components, values, operations, materials, arrangements, or the like, are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Other components, values, operations, materials, arrangements, or the like, are contemplated. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. The advanced lithography process, method, and materials described below are usable in many applications, including fin-type field effect transistors (FinFETs).
For example, the fins may be patterned to produce a relatively close spacing between features, for which the present disclosure is well suited. In addition, spacers used in forming fins of FinFETs, also referred to as mandrels, are able to be processed according to the following description. Fiducial marks are marks which are not part of a pattern to be transferred to a wafer. Fiducial marks include identification marks, alignment marks, logos, instructions, other text or other suitable information conveying patterns. In some embodiments, the fiducial mark includes a Q-code, a barcode, a trademark, operating instructions or other suitable information. Using fiducial marks helps to identify a photomask (also called a mask or a reticle). Specific information related to the photomask is able to be stored in a non-transitory computer readable medium and retrieved based on the identity of the photomask. For example, in some embodiments, the fiducial marks are able to store locations of defects within the photomask; and a process for manufacturing the photomask is adjusted based on the stored locations of defects. Fiducial marks are also usable as alignment marks for e-beam writing tools used to pattern the photomask. E-beam writing tools use electron beams (e-beams) to selectively remove portions of the photomask in order to define the pattern to be transferred to the wafer. Using alignment marks with the e-beam writing tools helps to increase precision in formation of the pattern on the photomask. As scaling down of semiconductor devices continues, increased precision helps to increase production yield for devices having smaller critical dimensions (CDs). Other approaches determine a location of defects within the photomask and attempt to correct the defects. However, correction of the defects is not always possible. For example, if a defect is a result of an imperfection in a substrate of the photomask, correction of the defect is extremely difficult. 
In some instances, correction of the defect is imperfect, such that the precision of the pattern transferred to the wafer is reduced. However, identifying the locations of the defects in the photomask and then positioning the pattern on the photomask based on the identified locations reduces or avoids overlap between the pattern and identified defects. As a result, precise transfer of the pattern to the wafer increases and production yield increases. A photolithography arrangement, such as photolithography arrangement 100 (FIG. 1), is usable to transfer the pattern on a photomask to a wafer. FIG. 1 is a schematic diagram of a photolithography arrangement 100 in accordance with some embodiments. Photolithography arrangement 100 includes a light source 110. Light source 110 is configured to emit electromagnetic radiation for patterning a wafer 120. A photomask 130 is located along an optical path between light source 110 and wafer 120. Optical components 140 transfer the light from light source 110 to photomask 130 and then to wafer 120. Light source 110 generates the radiation in a wavelength for patterning a photoresist on wafer 120. In some embodiments, light source 110 is an ultraviolet (UV) light source, such as an extreme UV (EUV) light source, a vacuum UV (VUV) light source or another suitable light source. In some embodiments, light source 110 is a laser, a diode or another suitable light generating element. In some embodiments, light source 110 includes a collector configured to direct electromagnetic radiation in a common direction along the optical path. In some embodiments, light source 110 includes a collimator configured to direct all beams of electromagnetic radiation parallel to each other. Wafer 120 includes a substrate, e.g., a semiconductor substrate, having a photoresist layer thereon. A material of the photoresist is matched to a wavelength of the electromagnetic radiation emitted by light source 110. In some embodiments, the photoresist is a positive photoresist.
In some embodiments, the photoresist is a negative photoresist. In some embodiments, wafer 120 includes active components. In some embodiments, wafer 120 includes an interconnect structure. Photomask 130 includes a pattern thereon to be transferred to wafer 120. Photomask 130 is a reflective mask configured to reflect incident light. In some embodiments, photomask 130 is a transmissive mask configured to transmit incident light. In some embodiments, an orientation of a first feature of the pattern is rotated with respect to an orientation of a second feature of the pattern. An orientation of a feature is determined by a longitudinal direction of the feature in a direction parallel to a top surface of photomask 130. In some embodiments, the pattern includes repeated sub-patterns. In some embodiments, a spacing between a first sub-pattern and a second sub-pattern is different from a spacing between a third sub-pattern and the second sub-pattern. Optical components 140 are configured to transfer light from light source 110 to photomask 130 and from photomask 130 to wafer 120. Optical components 140 reduce a size of the pattern on photomask 130 so that a size of the pattern formed on wafer 120 is smaller than a size of the pattern on photomask 130. In some embodiments, a ratio of the size of the pattern on photomask 130 to the size of the pattern on wafer 120 is 2:1; 4:1; 5:1; or another suitable reduction ratio. Optical components 140 are reflective elements and photolithography arrangement 100 is a catoptric arrangement. In some embodiments, at least one of optical components 140 is a transmissive element, and photolithography arrangement 100 is a catadioptric arrangement. By adjusting a location of at least a portion of the pattern on the photomask, the effect of defects on the transfer of the pattern to wafer 120 is reduced.
In some embodiments, the adjusted location of the portion of the pattern causes the defect to be located outside a functional area of the pattern, for example, in an area of the pattern designated for a scribe line. In some embodiments, the adjusted location of the portion of the pattern causes the defect to be located underneath an absorption layer of photomask 130. In some embodiments, the location is adjusted by translating at least the portion of the pattern in a plane parallel to the top surface of photomask 130. In some embodiments, the location is adjusted by rotating at least the portion of the pattern about an axis perpendicular to the top surface of photomask 130. FIG. 2A is a plan view of a photomask 200 in accordance with some embodiments. In some embodiments, photomask 200 is usable as photomask 130 in photolithography arrangement 100 (FIG. 1). Photomask 200 includes first fiducial marks 210a, 210b, 210c and 210d (collectively called first fiducial marks 210). Photomask 200 further includes second fiducial marks 220a, 220b, 220c and 220d (collectively called second fiducial marks 220). A plurality of defects 230 are present in photomask 200. First fiducial marks 210 and second fiducial marks 220 are located outside of a region 240 where a pattern for forming functional elements on a wafer are located. Photomask 200 includes a pattern in region 240 to be transferred to the wafer using a photolithography process. Region 240 is determined based on a boundary of a pattern to be transferred to the wafer using the photolithography process. In some embodiments, the photolithography process is an EUV photolithography process. In some embodiments, photomask 200 is a reflective photomask. In some embodiments, photomask 200 is a transmissive photomask. The pattern is defined by selectively removing portions of an absorption layer of photomask 200. Areas where the absorption layer is removed correspond to locations on the wafer which are exposed to radiation from photomask 200.
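The translation and rotation of a pattern portion to avoid known defects can be sketched as follows. This is a hypothetical illustration, not the patent's method: the pattern portion is reduced to a few sample points, defects to point locations with a clearance distance, and rotation is taken about the coordinate origin.

```python
import math

def transform(point, dx=0.0, dy=0.0, theta=0.0):
    """Rotate a pattern point by theta (radians) about the origin of the mask's
    top-surface plane, then translate it by (dx, dy)."""
    x, y = point
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    return (xr + dx, yr + dy)

def overlaps_defect(pattern_pts, defects, clearance):
    """True if any pattern point lies within `clearance` of a known defect."""
    return any(math.hypot(px - qx, py - qy) < clearance
               for (px, py) in pattern_pts
               for (qx, qy) in defects)

# Hypothetical data: two pattern points near one identified defect.
pts = [(0.0, 0.0), (1.0, 0.0)]
defects = [(1.2, 0.0)]
shifted = [transform(p, dx=5.0) for p in pts]  # translate the portion away
```

In this sketch the original placement conflicts with the defect, while the translated placement clears it; a real flow would test candidate translations/rotations against all stored defect locations for the identified photomask.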
Areas where the absorption layer remains correspond to locations on the wafer which are not exposed to radiation from photomask 200. In some embodiments, the pattern includes a plurality of repeated sub-patterns. In some embodiments, the pattern includes an array, e.g., a two-dimensional array, of sub-patterns and each sub-pattern includes substantially identical features. In some embodiments, a spacing between a first sub-pattern and a second sub-pattern in a first direction is different from a spacing between a third sub-pattern and the second sub-pattern in the first direction. The first direction is parallel to a top surface of photomask 200. In some embodiments, at least one sub-pattern is rotated about an axis perpendicular to the top surface of photomask 200 with respect to another sub-pattern. The variable spacing between sub-patterns and/or rotation of at least one sub-pattern is the result of defining the sub-patterns on photomask 200 in locations to avoid defects 230. Defects 230 result from manufacturing variation during production of photomask 200 or latent defects in a substrate of photomask 200. Defects 230 affect radiation transmitted/reflected by photomask 200. For example, if a defect 230 is a hillock or bump, a direction of radiation reflected by the photomask 200 at the defect is different from a direction of radiation reflected by the photomask 200 in a defect-free area. The change in the direction of reflection causes an error in the pattern intended to be transferred to the wafer. Defects 230 occur in different levels of photomask 200. For example, in some instances, a defect 230 is located on the top surface of photomask 200. In some instances, a defect 230 is located below the top surface of photomask 200. Defects 230 located below the top surface of the photomask are difficult to fix and, in some instances, impossible to fix.
Avoiding the effect of defects 230 by adjusting locations or orientations of sub-patterns prior to defining the sub-patterns on photomask 200 reduces or avoids errors in the pattern transferred to the wafer. First fiducial marks 210 are used to help identify photomask 200. First fiducial marks 210 provide photomask 200 with a unique identification different from all other photomasks in the semiconductor manufacturing process. Using first fiducial marks 210, a processor is able to identify photomask 200 and retrieve data related to photomask 200. Photomask 200 includes first fiducial marks 210 in each corner of photomask 200. In some embodiments, first fiducial marks 210 are omitted from at least one corner of photomask 200. For example, in some embodiments, first fiducial marks 210a and 210d are omitted. In some embodiments, first fiducial marks 210b, 210c and 210d are omitted. In some embodiments, at least one first fiducial mark 210 is positioned at a location other than a corner, such as along a side of photomask 200. In some embodiments, photomask 200 includes more than four first fiducial marks 210. Photomask 200 includes first fiducial marks 210 all having a same shape and size. In some embodiments, at least one first fiducial mark 210 has a different shape or size from at least another first fiducial mark 210. For example, in some embodiments, first fiducial mark 210a has a first size and a first shape; first fiducial mark 210b has a second size different from the first size and the first shape; first fiducial mark 210c has the first size and a second shape different from the first shape; and first fiducial mark 210d has a third size different from the first and second sizes and a third shape different from the first and second shapes. First fiducial marks 210 have a cross shape. In some embodiments, at least one first fiducial mark 210 has a rectangular shape, a triangular shape, a circular shape, a free-form shape, a bar code, a Q code, a logo, text, or other suitable shapes.
One of ordinary skill would recognize that additional modifications of first fiducial marks 210 are possible within the scope of this description. In some embodiments, photomask 200 is identifiable based on a combination of a number, location, size and shape of first fiducial marks 210. In some embodiments, photomask 200 is identifiable based on information available at any single first fiducial mark 210. Second fiducial marks 220 are used to help an e-beam writing tool identify photomask 200 and determine locations on photomask 200 to define the pattern in region 240. That is, second fiducial marks 220 are usable as alignment marks for the e-beam writing tool. In some embodiments, second fiducial marks 220 are omitted. In some embodiments, second fiducial marks 220 are recognizable using a different wavelength from that used to recognize first fiducial marks 210. The e-beam writing tool operates at a different wavelength from that used to perform photolithography using photomask 200. Having second fiducial marks 220 recognizable using a wavelength of the e-beam writing tool, while first fiducial marks 210 are recognizable using a wavelength for performing photolithography, helps to avoid mistakes caused by inadvertently confusing first fiducial marks 210 with second fiducial marks 220. Photomask 200 includes second fiducial marks 220 in each corner of photomask 200. In some embodiments, second fiducial marks 220 are omitted from at least one corner of photomask 200. For example, in some embodiments, second fiducial marks 220a and 220d are omitted. In some embodiments, second fiducial marks 220b, 220c and 220d are omitted. In some embodiments, at least one second fiducial mark 220 is positioned at a location other than a corner, such as along a side of photomask 200. In some embodiments, photomask 200 includes more than four second fiducial marks 220. Photomask 200 includes second fiducial marks 220 all having a same shape and size.
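The identification scheme described above — recognizing a photomask from the number, location, size and shape of its first fiducial marks and then retrieving data stored for that mask — can be sketched as a simple lookup. This is an illustrative sketch, not the patent's implementation; the names `FiducialMark`, `mask_signature` and `MASK_REGISTRY` are assumptions introduced here.

```python
# Hypothetical sketch: identify a photomask from an order-independent
# signature built from its first fiducial marks (number, location,
# size and shape), then look up its identification in a registry.
from dataclasses import dataclass


@dataclass(frozen=True)
class FiducialMark:
    corner: str    # e.g. "top-left", or "side" for a non-corner mark
    shape: str     # "cross", "rectangle", "bar-code", ...
    size_um: float


def mask_signature(marks):
    """Order-independent signature of a set of fiducial marks."""
    return frozenset(marks)


# Registry mapping each unique fiducial signature to a mask identifier
# (stand-in for the data retrieved by the processor in the description).
MASK_REGISTRY = {
    mask_signature([FiducialMark("top-left", "cross", 10.0),
                    FiducialMark("bottom-right", "cross", 10.0)]): "photomask-200",
}


def identify(marks):
    """Return the identification for the observed marks, or 'unknown'."""
    return MASK_REGISTRY.get(mask_signature(marks), "unknown")
```

Using a `frozenset` makes the lookup insensitive to the order in which the optical scanner reports the marks, which matches the idea that the combination of marks, rather than any scan order, uniquely identifies the mask.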
In some embodiments, at least one second fiducial mark 220 has a different shape or size from at least one other second fiducial mark 220. For example, in some embodiments, second fiducial mark 220a has a first size and a first shape; second fiducial mark 220b has a second size different from the first size and the first shape; second fiducial mark 220c has the first size and a second shape different from the first shape; and second fiducial mark 220d has a third size different from the first and second sizes and a third shape different from the first and second shapes. Second fiducial marks 220 have a cross shape. In some embodiments, at least one second fiducial mark 220 has a rectangular shape, a triangular shape, a circular shape, a free-form shape, a bar code, a QR code, a logo, text, or other suitable shapes. Photomask 200 includes second fiducial marks 220 having a smaller size than first fiducial marks 210. In some embodiments, at least one second fiducial mark 220 has a same or greater size than at least one first fiducial mark 210. Photomask 200 includes second fiducial marks 220 having a same shape as first fiducial marks 210. In some embodiments, at least one second fiducial mark 220 has a different shape from at least one first fiducial mark 210. One of ordinary skill would recognize that additional modifications of second fiducial marks 220 are possible within the scope of this description. FIG. 2B is a plan view of a photomask 200′ in accordance with some embodiments. Photomask 200′ is similar to photomask 200 (FIG. 2A). First fiducial marks and second fiducial marks are not shown in photomask 200′ for clarity; however, the above description related to first fiducial marks and second fiducial marks is applicable to photomask 200′. Photomask 200′ includes defects 230′a-230′d (collectively called defects 230′) at different locations from defects 230 in photomask 200. Photomask 200′ also includes sub-patterns 250a-250f (collectively called sub-patterns 250). Sub-patterns 250 are arranged in a two-dimensional array.
Each sub-pattern 250 includes a plurality of features 255. Features 255 are portions of photomask 200′ where the absorption layer is removed. Features 255 correspond to portions of the wafer which are to be contacted by radiation from photomask 200′. For simplicity, portions of sub-patterns 250 other than features 255 are considered to include the absorption layer for the purpose of this discussion. Sub-pattern 250a is separated from sub-pattern 250b by a spacing S1. Sub-pattern 250b is separated from sub-pattern 250c by a spacing S2. Spacing S1 is greater than spacing S2. By moving sub-pattern 250a, the location of sub-pattern 250a is adjusted to avoid a defect 230a′. Avoiding defect 230a′ helps to ensure precise transfer of sub-pattern 250a to the wafer. In comparison with approaches that include an equal spacing between all sub-patterns, photomask 200′ is able to increase production yield by adjusting the location of sub-pattern 250a to avoid defect 230a′. Sub-pattern 250d is rotated about an axis perpendicular to the top surface of photomask 200′ with respect to sub-pattern 250e. By rotating sub-pattern 250d, the location of sub-pattern 250d is adjusted to avoid defect 230b′. Avoiding defect 230b′ helps to ensure precise transfer of sub-pattern 250d to the wafer. In comparison with approaches that include all sub-patterns having a same orientation, photomask 200′ is able to increase production yield by rotating sub-pattern 250d to avoid defect 230b′. Defect 230c′ is located within sub-pattern 250c. However, defect 230c′ is located in a portion of sub-pattern 250c which is covered by the absorption layer. In some embodiments, a location of a sub-pattern is not adjusted in order to avoid a defect which would be covered by the absorption layer. Avoiding adjusting the locations of sub-patterns helps to increase a number of sub-patterns definable on photomask 200′, and reduces a complexity of manufacturing photomask 200′. Defect 230d′ is located at a periphery of sub-pattern 250e.
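The placement rule described above — a defect forces an adjustment only when it falls inside a feature opening, while defects under the absorption layer are tolerated — can be sketched with simple rectangle geometry. This is an illustrative sketch under assumed geometry (axis-aligned rectangles, translation in one direction); the function names and the search strategy are not from the patent.

```python
# Hypothetical sketch of defect-aware sub-pattern placement: a defect
# inside a feature opening forces an adjustment; a defect covered by the
# absorption layer (inside the sub-pattern but outside every feature)
# or located between sub-patterns is tolerated.

def inside(rect, point):
    """True when point (px, py) lies in rect (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = rect
    px, py = point
    return x0 <= px <= x1 and y0 <= py <= y1


def needs_adjustment(subpattern_rect, feature_rects, defect):
    """True only when the defect hits an exposed feature opening."""
    if not inside(subpattern_rect, defect):
        return False  # defect lies between sub-patterns
    return any(inside(f, defect) for f in feature_rects)


def translate(rect, dx, dy):
    x0, y0, x1, y1 = rect
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)


def place_avoiding(subpattern_rect, feature_rects, defects,
                   step=1.0, max_shift=10.0):
    """Translate the sub-pattern in x until no defect hits a feature."""
    dx = 0.0
    while dx <= max_shift:
        rect = translate(subpattern_rect, dx, 0.0)
        feats = [translate(f, dx, 0.0) for f in feature_rects]
        if not any(needs_adjustment(rect, feats, d) for d in defects):
            return rect
        dx += step
    raise ValueError("no defect-free placement within allowed shift")
```

A real mask-writing flow would also try rotations (as with sub-pattern 250d) and respect array pitch constraints; the one-dimensional translation here is only the minimal illustration of the decision rule.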
In some instances, a periphery of a sub-pattern does not include features 255 which are used to define functional elements on the wafer. For example, a scribe line on the wafer may be defined at a location corresponding to a periphery of sub-pattern 250e. In some embodiments, a location of a sub-pattern is not adjusted in order to avoid a defect which would be located in a portion of the sub-pattern which does not include features 255 corresponding to functional elements on the wafer. Avoiding adjusting the locations of sub-patterns helps to increase a number of sub-patterns definable on photomask 200′, and reduces a complexity of manufacturing photomask 200′. FIG. 3 is a cross-sectional view of a photomask blank 300 in accordance with some embodiments. In some embodiments, photomask blank 300 is usable to form photomask 130 (FIG. 1), photomask 200 (FIG. 2A) or photomask 200′ (FIG. 2B). Photomask blank 300 includes a substrate 302 on a carrier layer 304. A reflective layer 306 is on a surface of substrate 302 opposite carrier layer 304. A buffer layer 308 over reflective layer 306 helps to protect the reflective layer during later processing of photomask blank 300. An absorption layer 310 is over buffer layer 308. Selectively removing portions of absorption layer 310 defines a pattern on photomask blank 300 to be transferred to a wafer. In some embodiments, substrate 302 includes a low thermal expansion material (LTEM). Exemplary low thermal expansion materials include quartz as well as LTEM glass, silicon, silicon carbide, silicon oxide, titanium oxide, Black Diamond® or other suitable low thermal expansion substances. Carrier layer 304 is attached to substrate 302 to support the substrate. In some embodiments, carrier layer 304 includes materials such as chromium nitride, chromium oxynitride, chromium, TaBN, TaSi or other suitable materials. In some embodiments, reflective layer 306 includes a multilayer mirror (MLM). An MLM includes alternating material layers.
In some embodiments, the number of pairs of alternating material layers ranges from 20 to 80. A material used for each layer of the alternating material layers is selected based on its refractive index at the wavelength of radiation to be received by the photomask. The layers are then deposited to provide the desired reflectivity for a particular wavelength and/or angle of incidence of the received radiation. For example, a thickness or material of layers within the MLM is tailored to exhibit maximum constructive interference of EUV radiation reflected at each interface of the alternating material layers while exhibiting a minimum absorption of EUV radiation. In some embodiments, the MLM includes alternating molybdenum and silicon (Mo—Si) layers. In some embodiments, the MLM includes alternating molybdenum and beryllium (Mo—Be) layers. Buffer layer 308 is over reflective layer 306 to help protect the reflective layer during removal processes performed on absorption layer 310. In some embodiments, buffer layer 308 includes materials such as Ru, silicon dioxide, amorphous carbon or other suitable materials. In some embodiments, absorption layer 310 includes TaN, TaBN, TiN, chromium, combinations thereof, or other suitable absorptive materials. In some embodiments, absorption layer 310 contains multiple layers of absorptive material, for example a layer of chromium and a layer of tantalum nitride. Absorption layer 310 has a thickness sufficient to prevent incident radiation from penetrating to reflective layer 306 and to prevent subsequently reflected light from exiting absorption layer 310. In some embodiments, absorption layer 310 includes an anti-reflective coating (ARC). Suitable ARC materials include TaBO, Cr2O3, SiO2, SiN, TaO5, TaON, or other suitable ARC materials. Selectively removing portions of absorption layer 310 defines features, e.g., features 255 (FIG. 2B), corresponding to functional elements on the wafer.
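The constructive-interference condition for an MLM is often approximated by a Bragg-like relation, with the bilayer period d ≈ λ / (2·cos θ) at near-normal incidence (ignoring refraction corrections). This is a textbook approximation, not a formula from the patent; for EUV at λ = 13.5 nm and a few degrees off normal it gives a period near the roughly 7 nm Mo—Si bilayers used in practice.

```python
# Sketch of the Bragg-like period estimate for a multilayer mirror.
# This ignores refraction inside the layers (a standard simplification),
# so the result slightly underestimates real Mo-Si periods.
import math


def bilayer_period_nm(wavelength_nm, incidence_deg=6.0):
    """Approximate bilayer period d = lambda / (2 * cos(theta))."""
    theta = math.radians(incidence_deg)
    return wavelength_nm / (2.0 * math.cos(theta))


d = bilayer_period_nm(13.5)  # EUV at 6 degrees off normal: about 6.8 nm
```

The estimate also shows why the period must be "tailored" per wavelength and angle of incidence, as the description states: changing either input changes the period that maximizes constructive interference.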
Reflective layer 306, buffer layer 308 and absorption layer 310 are formed by various methods, including physical vapor deposition (PVD) processes such as evaporation and DC magnetron sputtering, a plating process such as electrode-less plating or electroplating, a chemical vapor deposition (CVD) process such as atmospheric pressure CVD (APCVD), low pressure CVD (LPCVD), plasma enhanced CVD (PECVD), or high density plasma CVD (HDP CVD), ion beam deposition, spin-on coating, or other suitable methods. In some embodiments, fiducial marks, e.g., first fiducial marks 210 or second fiducial marks 220 (FIG. 2A), are formed by selectively removing a portion of absorption layer 310. In some embodiments, fiducial marks are formed by selectively removing a portion of absorption layer 310 and buffer layer 308. In some embodiments, fiducial marks are formed by selectively removing a portion of absorption layer 310, buffer layer 308 and reflective layer 306. In some embodiments, different types of fiducial marks are formed by removing portions of photomask blank 300 to different depths. For example, in some embodiments, first fiducial marks, e.g., first fiducial marks 210, are formed by removing a portion of photomask blank 300 to expose buffer layer 308; and second fiducial marks, e.g., second fiducial marks 220, are formed by removing a portion of photomask blank 300 to expose substrate 302. FIG. 4 is a flow chart of a method 400 of making a semiconductor device using a photomask in accordance with some embodiments. The description below relates method 400 to elements of FIGS. 2A, 2B, 3 and 5 for the sake of clarity; however, the current application should not be limited to the embodiments of FIGS. 2A, 2B, 3 and 5 because one of ordinary skill in the art would understand how to modify method 400 for use with other photomasks. In operation 410, at least one fiducial mark 210 and/or 220 (FIG. 2A) is formed on a photomask. The at least one fiducial mark 210 and/or 220 includes a fiducial mark 210 used for identifying the photomask.
In some embodiments, the at least one fiducial mark 210 and/or 220 includes a plurality of fiducial marks 210 for identifying the photomask. In some embodiments, the at least one fiducial mark 210 and/or 220 includes a fiducial mark 210 for identifying the photomask and a fiducial mark 220 for aligning an e-beam writing tool to define a pattern on the photomask. In some embodiments, the at least one fiducial mark 210 and/or 220 includes a plurality of fiducial marks 210 for identifying the photomask and a plurality of fiducial marks 220 for aligning an e-beam writing tool to define a pattern on the photomask. In some embodiments, the at least one fiducial mark is formed by selectively removing a portion of an absorption layer 310 (FIG. 3). In some embodiments, a depth of a first fiducial mark of the at least one fiducial mark 210 and/or 220 is different from a depth of a second fiducial mark of the at least one fiducial mark 210 and/or 220. In some embodiments, the removal process includes etching, laser drilling, e-beam writing, ion beam writing, or another suitable material removal process. In operation 420, defects in the photomask are detected. The defects 230a′-230d′ in the photomask are detected using an inspection system. The inspection system illuminates the photomask with radiation in order to identify areas of variation in reflection of the radiation by the photomask. The variation in reflection indicates a variation in topography, density, crystal structure or other types of defects. In some embodiments, the inspection system illuminates the photomask with a plurality of wavelengths in order to detect defects in the photomask. In some embodiments, the wavelength of the inspection system matches a wavelength of a photolithography process to be performed using the photomask. In some embodiments, the wavelength of the inspection system is EUV, deep ultraviolet (DUV), vacuum ultraviolet (VUV) or another suitable wavelength.
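The inspection idea in operation 420 — flag positions where reflection of the illumination varies from the nominal response — can be sketched as a threshold scan over a reflectance map. The map format, nominal value and tolerance are assumptions introduced here for illustration; real inspection tools are far more sophisticated.

```python
# Hypothetical sketch of operation 420: scan a 2-D reflectance map and
# flag positions whose measured reflectance deviates from the nominal
# value by more than a tolerance, reporting them as defect locations.

def detect_defects(reflectance_map, nominal=1.0, tolerance=0.05):
    """Return (row, col) positions where reflection varies abnormally."""
    defects = []
    for r, row in enumerate(reflectance_map):
        for c, value in enumerate(row):
            if abs(value - nominal) > tolerance:
                defects.append((r, c))
    return defects
```

The returned positions correspond to the defect locations that operation 430 stores for later retrieval, keyed to the mask's identifying information.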
In operation 430, the locations of the detected defects are stored, e.g., in non-transitory computer readable medium 504 (FIG. 5). The stored location information is correlated to identifying information of the photomask. The identifying information is based on the at least one fiducial mark 210 and/or 220. The location information is stored on a non-transitory computer readable medium 504 for retrieval during a patterning process of the photomask. In some embodiments, the location information is stored in a table. In some embodiments, the location information is stored in a non-transitory computer readable medium in the inspection system. In some embodiments, the location information is transmitted, over a wired or wireless connection, to a non-transitory computer readable medium separate from the inspection system. In operation 440, reflective or transmissive patterns 250a-250f (FIG. 2B) are defined on the photomask based on the stored defect locations. The patterns 250a-250f are defined on the photomask using an e-beam writing tool. In some embodiments, the e-beam writing tool uses at least one fiducial mark 220 on the photomask as an alignment mark for defining the patterns on the photomask. Initial locations for the patterns 250a-250f are based on a hypothetical defect-free photomask. A processor, e.g., processor 502 (FIG. 5), connected to the e-beam writing tool is configured to provide instructions to the e-beam writing tool to adjust a location of at least one of the patterns 250a-250f based on the stored defect locations. The location is adjusted by rotating the pattern, e.g., pattern 250d, or by translating the pattern, e.g., pattern 250a, in a plane parallel to the top surface of the photomask. The processor 502 is configured to retrieve the defect locations from a non-transitory computer readable medium 504 based on identifying information of the photomask. The identifying information is obtained based on the at least one fiducial mark 210 on the photomask.
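Operations 430 and 440 amount to persisting defect locations keyed by the mask's identifying information and retrieving them when the e-beam writer defines the pattern. A minimal sketch, with a JSON file standing in for the non-transitory computer readable medium; the schema and function names are assumptions, not the patent's format.

```python
# Hypothetical sketch of operations 430/440: store defect locations
# keyed by mask identification, retrieve them later during patterning.
# A JSON file stands in for non-transitory computer readable medium 504.
import json
import os
import tempfile


def store_defects(path, mask_id, defect_locations):
    """Persist defect locations for a mask, merging with existing data."""
    db = {}
    if os.path.exists(path):
        with open(path) as f:
            db = json.load(f)
    db[mask_id] = defect_locations
    with open(path, "w") as f:
        json.dump(db, f)


def retrieve_defects(path, mask_id):
    """Return the stored defect locations for a mask, or an empty list."""
    with open(path) as f:
        return json.load(f).get(mask_id, [])
```

Keying the records by mask identification is what lets the processor, after reading the first fiducial marks, pull back exactly the defect map that was measured for that physical mask.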
In some embodiments, an optical scanner reads the at least one fiducial mark 210. The optical scanner is connected to the processor 502; and the processor 502 is configured to compare the at least one fiducial mark 210 with other fiducial marks in order to identify the photomask. In some embodiments, the processor 502 automatically provides instructions to the e-beam writing tool for adjusting the location of at least one of the patterns 250a-250f. In some embodiments, the processor 502 receives instructions from a user, e.g., through I/O interface 510 (FIG. 5), for adjusting the location of at least one of the patterns 250a-250f. In some embodiments, the processor 502 provides suggested location adjustments to the user. In operation 450, the pattern from the photomask is transferred to a wafer using the reflective or transmissive patterns. The pattern is transferred using a photolithography process, e.g., an EUV photolithography process. In some embodiments, the pattern is transferred to the wafer by sequentially scanning sub-patterns 250a-250f on the photomask. In some embodiments, a processor is connected to the photolithography tool in order to provide instructions for locations of each of the patterns on the photomask. The instructions provided by the processor 502 help the photolithography tool, e.g., photolithography arrangement 100 (FIG. 1), account for adjustments in locations of patterns from operation 440. In some embodiments, the processor 502 is configured to provide instructions to the photolithography tool based on identifying information from the at least one fiducial mark 210 and/or 220 on the photomask. In some embodiments, an order of operations of method 400 is altered. For example, in some embodiments, operation 420 occurs prior to operation 410. In some embodiments, at least one operation is omitted from method 400. For example, in some embodiments, a manufacturer receives a photomask along with a defect map and operation 420 is omitted.
In some embodiments, additional operations are added to method 400. For example, in some embodiments, fiducial marks of different types are formed using different processes or at different times. FIG. 5 is a schematic diagram of a system 500 for making a semiconductor device in accordance with some embodiments. System 500 includes a hardware processor 502 and a non-transitory, computer readable storage medium 504 encoded with, i.e., storing, the computer program code 506, i.e., a set of executable instructions. Computer readable storage medium 504 is also encoded with instructions 507 for interfacing with manufacturing machines for producing the semiconductor device. The processor 502 is electrically coupled to the computer readable storage medium 504 via a bus 508. The processor 502 is also electrically coupled to an I/O interface 510 by bus 508. A network interface 512 is also electrically connected to the processor 502 via bus 508. Network interface 512 is connected to a network 514, so that processor 502 and computer readable storage medium 504 are capable of connecting to external elements via network 514. The processor 502 is configured to execute the computer program code 506 encoded in the computer readable storage medium 504 in order to cause system 500 to be usable for performing a portion or all of the operations as described in method 400. In some embodiments, the processor 502 is a central processing unit (CPU), a multi-processor, a distributed processing system, an application specific integrated circuit (ASIC), and/or a suitable processing unit. In some embodiments, the computer readable storage medium 504 is an electronic, magnetic, optical, electromagnetic, infrared, and/or a semiconductor system (or apparatus or device). For example, the computer readable storage medium 504 includes a semiconductor or solid-state memory, a magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and/or an optical disk.
In some embodiments using optical disks, the computer readable storage medium 504 includes a compact disk-read only memory (CD-ROM), a compact disk-read/write (CD-R/W), and/or a digital video disc (DVD). In some embodiments, the storage medium 504 stores the computer program code 506 configured to cause system 500 to perform method 400. In some embodiments, the storage medium 504 also stores information needed for performing method 400 as well as information generated during performing method 400, such as a defect locations parameter 516, a pattern locations parameter 518, a mask identifying information parameter 520, an e-beam writer information parameter 522 and/or a set of executable instructions to perform the operations of method 400. In some embodiments, the storage medium 504 stores instructions 507 for interfacing with manufacturing machines. The instructions 507 enable processor 502 to generate manufacturing instructions readable by the manufacturing machines to effectively implement method 400 during a manufacturing process. System 500 includes I/O interface 510. I/O interface 510 is coupled to external circuitry. In some embodiments, I/O interface 510 includes a keyboard, keypad, mouse, trackball, trackpad, and/or cursor direction keys for communicating information and commands to processor 502. System 500 also includes network interface 512 coupled to the processor 502. Network interface 512 allows system 500 to communicate with network 514, to which one or more other computer systems are connected. Network interface 512 includes wireless network interfaces such as BLUETOOTH, WIFI, WIMAX, GPRS, or WCDMA; or wired network interfaces such as ETHERNET, USB, or IEEE-1394. In some embodiments, method 400 is implemented in two or more systems 500, and information such as memory type, memory array layout, I/O voltage, I/O pin location and charge pump is exchanged between different systems 500 via network 514.
System 500 is configured to receive information related to defect locations through I/O interface 510 or network interface 512. The information is transferred to processor 502 via bus 508 to determine locations of the defects. The locations of the defects are then stored in computer readable medium 504 as defect locations parameter 516. System 500 is configured to generate information related to pattern locations. The information is stored in computer readable medium 504 as pattern locations parameter 518. System 500 is configured to receive information related to mask identifying information through I/O interface 510 or network interface 512. The information is stored in computer readable medium 504 as mask identifying information parameter 520. System 500 is configured to receive information related to e-beam writing information through I/O interface 510 or network interface 512. The information is stored in computer readable medium 504 as e-beam writer information parameter 522. During operation, processor 502 executes a set of instructions 507 to identify a photomask; retrieve locations of defects of the identified photomask; and provide instructions to an e-beam writing tool for determining a location of patterns to be defined on the photomask. By executing instructions 507, and storing and retrieving information from computer readable medium 504, processor 502 is able to execute method 400. FIG. 6 is a block diagram of an integrated circuit (IC) manufacturing system 600, and an IC manufacturing flow associated therewith, in accordance with some embodiments. In general, system 600 generates a layout. Based on the layout, system 600 fabricates at least one of (a) one or more semiconductor masks or (b) at least one component in a layer of an inchoate semiconductor integrated circuit.
In FIG. 6, IC manufacturing system 600 includes entities, such as a design house 620, a mask house 630, and an IC manufacturer/fabricator (“fab”) 650, that interact with one another in the design, development, and manufacturing cycles and/or services related to manufacturing an IC device 660. In some embodiments, the manufacturing system 600 is usable to create a photomask, e.g., photomask 200 (FIG. 2A) or photomask 200′ (FIG. 2B), based on a layout design and then transfer a pattern on the photomask to a wafer, e.g., using photolithography arrangement 100 (FIG. 1). The entities in system 600 are connected by a communications network. In some embodiments, the communications network is a single network. In some embodiments, the communications network is a variety of different networks, such as an intranet and the Internet. The communications network includes wired and/or wireless communication channels. Each entity interacts with one or more of the other entities and provides services to and/or receives services from one or more of the other entities. In some embodiments, two or more of design house 620, mask house 630, and IC fab 650 are owned by a single larger company. In some embodiments, two or more of design house 620, mask house 630, and IC fab 650 coexist in a common facility and use common resources. Design house (or design team) 620 generates an IC design 622 in the form of a layout. IC design 622 is usable to determine a pattern to be formed on a photomask, e.g., photomask 200 (FIG. 2A) or photomask 200′ (FIG. 2B). IC design 622 includes various geometrical patterns designed for an IC device 660. The geometrical patterns correspond to patterns of metal, oxide, or semiconductor layers that make up the various components of IC device 660 to be fabricated. The various layers combine to form various IC features.
For example, a portion of IC design 622 includes various IC features, such as an active region, gate electrode, source and drain, metal lines or vias of an interlayer interconnection, and openings for bonding pads, to be formed in a semiconductor substrate (such as a silicon wafer) and various material layers disposed on the semiconductor substrate. Design house 620 implements a proper design procedure to form IC design 622. The design procedure includes one or more of logic design, physical design or place and route. IC design 622 is presented in one or more data files having information of the geometrical patterns. For example, IC design 622 can be expressed in a GDSII file format or DFII file format. Mask house 630 includes data preparation 632 and mask fabrication 644. Mask house 630 uses IC design 622 to manufacture one or more masks, e.g., photomask 200 (FIG. 2A) or photomask 200′ (FIG. 2B), to be used for fabricating the various layers of IC device 660 according to IC design 622. Mask house 630 performs mask data preparation 632, where IC design 622 is converted into a representative data file (“RDF”). Mask data preparation 632 provides the RDF to mask fabrication 644. In some embodiments, mask fabrication 644 modifies a photomask blank, e.g., photomask blank 300 (FIG. 3), to form a photomask, e.g., photomask 200 (FIG. 2A) or photomask 200′ (FIG. 2B), which includes at least one pattern, e.g., sub-patterns 250a-250f (FIG. 2B). Mask fabrication 644 includes a mask writer. A mask writer converts the RDF to an image on a substrate, such as a mask (reticle) or a semiconductor wafer. The design layout is manipulated by mask data preparation 632 to comply with particular characteristics of the mask writer and/or requirements of IC fab 650. In FIG. 6, mask data preparation 632 and mask fabrication 644 are illustrated as separate elements. In some embodiments, mask data preparation 632 and mask fabrication 644 can be collectively referred to as mask data preparation.
In some embodiments, method 400 (FIG. 4) is implemented by mask house 630. In some embodiments, mask house 630 outputs a mask, e.g., photomask 200 (FIG. 2A) or photomask 200′ (FIG. 2B). In some embodiments, mask data preparation 632 includes optical proximity correction (OPC), which uses lithography enhancement techniques to compensate for image errors, such as those that can arise from diffraction, interference, other process effects or the like. OPC adjusts IC design 622. In some embodiments, mask data preparation 632 includes further resolution enhancement techniques (RET), such as off-axis illumination, sub-resolution assist features, phase-shifting masks, other suitable techniques, or the like or combinations thereof. In some embodiments, inverse lithography technology (ILT) is also used, which treats OPC as an inverse imaging problem. In some embodiments, mask data preparation 632 includes a mask rule checker (MRC) that checks the IC design layout that has undergone processes in OPC with a set of mask creation rules which contain certain geometric and/or connectivity restrictions to ensure sufficient margins, to account for variability in semiconductor manufacturing processes, or the like. In some embodiments, the MRC modifies the IC design layout to compensate for limitations during mask fabrication 644, which may undo part of the modifications performed by OPC in order to meet mask creation rules. In some embodiments, mask data preparation 632 includes lithography process checking (LPC) that simulates processing that will be implemented by IC fab 650 to fabricate IC device 660. LPC simulates this processing based on IC design 622 to create a simulated manufactured device, such as IC device 660. The processing parameters in LPC simulation can include parameters associated with various processes of the IC manufacturing cycle, parameters associated with tools used for manufacturing the IC, and/or other aspects of the manufacturing process.
LPC takes into account various factors, such as aerial image contrast, depth of focus (“DOF”), mask error enhancement factor (“MEEF”), other suitable factors, or the like or combinations thereof. In some embodiments, after a simulated manufactured device has been created by LPC, if the simulated device is not close enough in shape to satisfy design rules, OPC and/or MRC are repeated to further refine IC design 622. It should be understood that the above description of mask data preparation 632 has been simplified for the purposes of clarity. In some embodiments, data preparation 632 includes additional features such as a logic operation (LOP) to modify the IC design layout according to manufacturing rules. Additionally, the processes applied to IC design 622 during data preparation 632 may be executed in a variety of different orders. After mask data preparation 632 and during mask fabrication 644, a mask, e.g., photomask 200 (FIG. 2A) or photomask 200′ (FIG. 2B), or a group of masks are fabricated, e.g., using photomask blank 300 (FIG. 3), based on the modified IC design 622. In some embodiments, an electron-beam (e-beam) or a mechanism of multiple e-beams is used to form a pattern on a mask (photomask or reticle) based on the modified IC design 622. In some embodiments, the e-beam writer uses at least one fiducial mark, e.g., fiducial marks 220, to determine a location to form a pattern on the mask. The mask can be formed in various technologies. In some embodiments, the mask is formed using binary technology. In some embodiments, a mask pattern includes opaque regions and transparent regions. A radiation beam, such as an ultraviolet (UV) beam, used to expose the image sensitive material layer (e.g., photoresist) which has been coated on a wafer, is blocked by the opaque regions and transmitted through the transparent regions. In one example, a binary mask includes a transparent substrate (e.g., fused quartz) and an opaque material (e.g., chromium) coated in the opaque regions of the mask.
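The binary-mask behavior described above — radiation blocked by opaque regions and transmitted through transparent ones — reduces to a simple per-position gating of the exposure dose. This toy sketch is introduced here for illustration only; real image formation involves diffraction, not this ideal shadowing.

```python
# Toy illustration of binary-mask exposure: radiation passes where the
# mask is transparent (1) and is blocked where the opaque material
# sits (0), so the resist receives dose only under transparent regions.

def expose(mask_row, dose=1.0):
    """Per-position dose reaching the resist under one binary mask row."""
    return [dose if transparent else 0.0 for transparent in mask_row]
```

A phase shift mask, by contrast, cannot be modeled by such simple on/off transmission, since it manipulates the phase of the transmitted light to sharpen the printed image.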
In another example, the mask is formed using a phase shift technology. In a phase shift mask (PSM), various features in the pattern formed on the mask are configured to have a proper phase difference to enhance the resolution and imaging quality. In various examples, the phase shift mask can be an attenuated PSM or an alternating PSM. The mask(s) generated by mask fabrication 644 is used in a variety of processes. For example, such a mask(s) is used in an ion implantation process to form various doped regions in the semiconductor wafer, in an etching process to form various etching regions in the semiconductor wafer, and/or in other suitable processes. IC fab 650 is an IC fabrication business that includes one or more manufacturing facilities for the fabrication of a variety of different IC products. In some embodiments, IC fab 650 is a semiconductor foundry. For example, there may be a manufacturing facility for the front end fabrication of a plurality of IC products (front-end-of-line (FEOL) fabrication), while a second manufacturing facility may provide the back end fabrication for the interconnection and packaging of the IC products (back-end-of-line (BEOL) fabrication), and a third manufacturing facility may provide other services for the foundry business. IC fab 650 uses, e.g., in photolithography arrangement 100 (FIG. 1), the mask (or masks) fabricated by mask house 630, e.g., photomask 200 (FIG. 2A) or photomask 200′ (FIG. 2B), to fabricate IC device 660. Thus, IC fab 650 at least indirectly uses IC design 622 to fabricate IC device 660. In some embodiments, a semiconductor wafer 652 is fabricated by IC fab 650 using the mask (or masks) to form IC device 660. Semiconductor wafer 652 includes a silicon substrate or other proper substrate having material layers formed thereon. Semiconductor wafer 652 further includes one or more of various doped regions, dielectric features, multilevel interconnects, or the like (formed at subsequent manufacturing steps).
Details regarding an integrated circuit (IC) manufacturing system (e.g., system600ofFIG.6), and an IC manufacturing flow associated therewith are found, e.g., in U.S. Pat. No. 9,256,709, granted Feb. 9, 2016, U.S. Pre-Grant Publication No. 20150278429, published Oct. 1, 2015, U.S. Pre-Grant Publication No. 20140040838, published Feb. 6, 2014, and U.S. Pat. No. 7,260,442, granted Aug. 21, 2007, the entireties of each of which are hereby incorporated by reference. An aspect of this description relates to a method of making a semiconductor device. The method includes defining a pattern including a plurality of sub-patterns on the photomask in the pattern region based on the identifying information. The defining of the pattern includes defining a first sub-pattern of the plurality of sub-patterns having a first spacing from a second sub-pattern of the plurality of sub-patterns, wherein the first spacing is different from a second spacing between the second sub-pattern and a third sub-pattern of the plurality of sub-patterns, or rotating the first sub-pattern about an axis perpendicular to a top surface of the photomask relative to the second sub-pattern. In some embodiments, defining the pattern includes rotating the first sub-pattern about the axis perpendicular to the top surface of the photomask. In some embodiments, defining the pattern includes defining the first spacing different from the second spacing. In some embodiments, defining the pattern includes positioning the first sub-pattern to surround a defect. In some embodiments, the positioning the first sub-pattern includes positioning the first sub-pattern to have the defect between features of the first sub-pattern. In some embodiments, defining the pattern includes positioning the first sub-pattern completely separated from a defect. In some embodiments, defining the pattern includes defining the plurality of sub-patterns along multiple rows of the photomask. 
An aspect of this description relates to a photomask. The photomask includes a plurality of defects in a pattern region. The photomask further includes a first sub-pattern in the pattern region, wherein the first sub-pattern has a first set of features. The photomask further includes a second sub-pattern in the pattern region, wherein the second sub-pattern has the first set of features, and the second sub-pattern is rotated with respect to the first sub-pattern. In some embodiments, none of the plurality of defects are inside of the second sub-pattern. In some embodiments, a first defect of the plurality of defects is inside of the first sub-pattern. In some embodiments, the photomask further includes a third sub-pattern having the first set of features, wherein the third sub-pattern is a first distance from the first sub-pattern; and a fourth sub-pattern having the first set of features, wherein the fourth sub-pattern is a second distance from the third sub-pattern. In some embodiments, the first distance is different from the second distance. In some embodiments, a first defect of the plurality of defects is between the first sub-pattern and the third sub-pattern. In some embodiments, the third sub-pattern is spaced from the first sub-pattern in a first direction, and the second sub-pattern is spaced from the first sub-pattern in a second direction perpendicular to the first direction. In some embodiments, a first defect of the plurality of defects is between features of the first set of features within the first sub-pattern. An aspect of this description relates to a photomask. The photomask includes a plurality of defects in a pattern region. The photomask further includes a first sub-pattern in the pattern region, wherein the first sub-pattern has a first set of features. 
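The placement constraints above — a sub-pattern positioned completely separated from a defect, or positioned so that a defect falls inside it — reduce to simple point-versus-region geometry. A minimal sketch, with each sub-pattern modeled as an axis-aligned bounding box and each defect as a point (a simplification assumed here, not taken from the patent):

```python
# Model a sub-pattern as an axis-aligned bounding box (x0, y0, x1, y1)
# and a defect as a point (x, y); check the separation constraints.

def defect_inside(box, defect):
    """True if the defect point lies within the sub-pattern's bounding box."""
    x0, y0, x1, y1 = box
    x, y = defect
    return x0 <= x <= x1 and y0 <= y <= y1

def completely_separated(box, defects):
    """True if the sub-pattern contains none of the given defects."""
    return not any(defect_inside(box, d) for d in defects)
```

A placement tool built on such checks could slide or rotate a candidate sub-pattern until `completely_separated` holds, or accept a position where the defect lands between (not on) the sub-pattern's features.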
The photomask further includes a second sub-pattern in the pattern region, wherein the second sub-pattern has the first set of features, and a first distance between a bottom of the first sub-pattern and a bottom of the second sub-pattern is different from a second distance between a top of the first sub-pattern and a top of the second sub-pattern. In some embodiments, the photomask further includes a third sub-pattern having the first set of features, wherein a third distance between a bottom of the first sub-pattern and a bottom of the third sub-pattern is equal to a fourth distance between the top of the first sub-pattern and a top of the third sub-pattern. In some embodiments, the third distance is different from the first distance. In some embodiments, the third distance is different from the second distance. In some embodiments, the photomask further includes a third sub-pattern having the first set of features, wherein the first sub-pattern is spaced from the second sub-pattern in a first direction, and the first sub-pattern is spaced from the third sub-pattern in a second direction perpendicular to the first direction. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
DETAILED DESCRIPTION

Before describing several exemplary embodiments of the disclosure, it is to be understood that the disclosure is not limited to the details of construction or process steps set forth in the following description. The disclosure is capable of other embodiments and of being practiced or being carried out in various ways. The term “horizontal” as used herein is defined as a plane parallel to the plane or surface of a mask blank, regardless of its orientation. The term “vertical” refers to a direction perpendicular to the horizontal as just defined. Terms, such as “above”, “below”, “bottom”, “top”, “side” (as in “sidewall”), “higher”, “lower”, “upper”, “over”, and “under”, are defined with respect to the horizontal plane, as shown in the figures. The term “on” indicates that there is direct contact between elements. The term “directly on” indicates that there is direct contact between elements with no intervening elements. Those skilled in the art will understand that the use of ordinals such as “first” and “second” to describe process regions does not imply a specific location within the processing chamber, or order of exposure within the processing chamber. As used in this specification and the appended claims, the term “substrate” refers to a surface, or portion of a surface, upon which a process acts. It will also be understood by those skilled in the art that reference to a substrate can refer to only a portion of the substrate, unless the context clearly indicates otherwise. Additionally, reference to depositing on a substrate can mean both a bare substrate and a substrate with one or more films or features deposited or formed thereon. Referring now toFIG.2, an exemplary embodiment of an extreme ultraviolet lithography system100is shown. The extreme ultraviolet lithography system100includes an extreme ultraviolet light source102for producing extreme ultraviolet light112, a set of reflective elements, and a target wafer110.
The reflective elements include a condenser104, an EUV reflective mask106, an optical reduction assembly108, a mask blank, a mirror, or a combination thereof. The extreme ultraviolet light source102generates the extreme ultraviolet light112. The extreme ultraviolet light112is electromagnetic radiation having a wavelength in a range of 5 to 50 nanometers (nm). For example, the extreme ultraviolet light source102includes a laser, a laser produced plasma, a discharge produced plasma, a free-electron laser, synchrotron radiation, or a combination thereof. The extreme ultraviolet light source102generates the extreme ultraviolet light112having a variety of characteristics. The extreme ultraviolet light source102produces broadband extreme ultraviolet radiation over a range of wavelengths. For example, the extreme ultraviolet light source102generates the extreme ultraviolet light112having wavelengths ranging from 5 to 50 nm. In one or more embodiments, the extreme ultraviolet light source102produces the extreme ultraviolet light112having a narrow bandwidth. For example, the extreme ultraviolet light source102generates the extreme ultraviolet light112at 13.5 nm. The center of the wavelength peak is 13.5 nm. The condenser104is an optical unit for reflecting and focusing the extreme ultraviolet light112. The condenser104reflects and concentrates the extreme ultraviolet light112from the extreme ultraviolet light source102to illuminate the EUV reflective mask106. Although the condenser104is shown as a single element, it is understood that the condenser104can include one or more reflective elements such as concave mirrors, convex mirrors, flat mirrors, or a combination thereof, for reflecting and concentrating the extreme ultraviolet light112. For example, the condenser104can be a single concave mirror or an optical assembly having convex, concave, and flat optical elements. The EUV reflective mask106is an extreme ultraviolet reflective element having a mask pattern114. 
The EUV reflective mask106creates a lithographic pattern to form a circuitry layout to be formed on the target wafer110. The EUV reflective mask106reflects the extreme ultraviolet light112. The mask pattern114defines a portion of a circuitry layout. The optical reduction assembly108is an optical unit for reducing the image of the mask pattern114. The reflection of the extreme ultraviolet light112from the EUV reflective mask106is reduced by the optical reduction assembly108and reflected onto the target wafer110. The optical reduction assembly108can include mirrors and other optical elements to reduce the size of the image of the mask pattern114. For example, the optical reduction assembly108can include concave mirrors for reflecting and focusing the extreme ultraviolet light112. The optical reduction assembly108reduces the size of the image of the mask pattern114on the target wafer110. For example, the mask pattern114can be imaged at a 4:1 ratio by the optical reduction assembly108on the target wafer110to form the circuitry represented by the mask pattern114on the target wafer110. The extreme ultraviolet light112can scan the EUV reflective mask106synchronously with the target wafer110to form the mask pattern114on the target wafer110. Referring now toFIG.3, an embodiment of an extreme ultraviolet reflective element production system200is shown. The extreme ultraviolet reflective element includes an EUV mask blank204, an extreme ultraviolet mirror205, or other reflective element such as an EUV reflective mask106. The extreme ultraviolet reflective element production system200can produce mask blanks, mirrors, or other elements that reflect the extreme ultraviolet light112ofFIG.2. The extreme ultraviolet reflective element production system200fabricates the reflective elements by applying thin coatings to source substrates203. The EUV mask blank204is a multilayered structure for forming the EUV reflective mask106ofFIG.2.
The EUV mask blank204can be formed using semiconductor fabrication techniques. The EUV reflective mask106can have the mask pattern114ofFIG.2formed on the EUV mask blank204by etching and other processes. The extreme ultraviolet mirror205is a multilayered structure reflective in a range of extreme ultraviolet light. The extreme ultraviolet mirror205can be formed using semiconductor fabrication techniques. The EUV mask blank204and the extreme ultraviolet mirror205can be similar structures with respect to the layers formed on each element; however, the extreme ultraviolet mirror205does not have the mask pattern114. The reflective elements are efficient reflectors of the extreme ultraviolet light112. In an embodiment, the EUV mask blank204and the extreme ultraviolet mirror205have an extreme ultraviolet reflectivity of greater than 60%. The reflective elements are efficient if they reflect more than 60% of the extreme ultraviolet light112. The extreme ultraviolet reflective element production system200includes a wafer loading and carrier handling system202into which the source substrates203are loaded and from which the reflective elements are unloaded. An atmospheric handling system206provides access to a wafer handling vacuum chamber208. The wafer loading and carrier handling system202can include substrate transport boxes, loadlocks, and other components to transfer a substrate from atmosphere to vacuum inside the system. Because the EUV mask blank204is used to form devices at a very small scale, the source substrates203and the EUV mask blank204are processed in a vacuum system to prevent contamination and other defects. The wafer handling vacuum chamber208can contain two vacuum chambers, a first vacuum chamber210and a second vacuum chamber212. The first vacuum chamber210includes a first wafer handling system214and the second vacuum chamber212includes a second wafer handling system216.
Although the wafer handling vacuum chamber208is described with two vacuum chambers, it is understood that the system can have any number of vacuum chambers. The wafer handling vacuum chamber208can have a plurality of ports around its periphery for attachment of various other systems. The first vacuum chamber210has a degas system218, a first physical vapor deposition system220, a second physical vapor deposition system222, and a pre-clean system224. The degas system218is for thermally desorbing moisture from the substrates. The pre-clean system224is for cleaning the surfaces of the wafers, mask blanks, mirrors, or other optical components. The physical vapor deposition systems, such as the first physical vapor deposition system220and the second physical vapor deposition system222, can be used to form thin films of conductive materials on the source substrates203. For example, the physical vapor deposition systems can include vacuum deposition systems such as magnetron sputtering systems, ion sputtering systems, pulsed laser deposition, cathode arc deposition, or a combination thereof. The physical vapor deposition systems, such as the magnetron sputtering system, form thin layers on the source substrates203including the layers of silicon, metals, alloys, compounds, or a combination thereof. The physical vapor deposition system forms reflective layers, capping layers, and absorber layers. For example, the physical vapor deposition systems can form layers of silicon, molybdenum, titanium oxide, titanium dioxide, ruthenium oxide, niobium oxide, ruthenium tungsten, ruthenium molybdenum, ruthenium niobium, chromium, antimony, nitrides, compounds, or a combination thereof. Although some compounds are described as an oxide, it is understood that the compounds can include oxides, dioxides, atomic mixtures having oxygen atoms, or a combination thereof.
The second vacuum chamber212has a first multi-cathode source226, a chemical vapor deposition system228, a cure chamber230, and an ultra-smooth deposition chamber232connected to it. For example, the chemical vapor deposition system228can include a flowable chemical vapor deposition system (FCVD), a plasma assisted chemical vapor deposition system (CVD), an aerosol assisted CVD, a hot filament CVD system, or a similar system. In another example, the chemical vapor deposition system228, the cure chamber230, and the ultra-smooth deposition chamber232can be in a separate system from the extreme ultraviolet reflective element production system200. The chemical vapor deposition system228can form thin films of material on the source substrates203. For example, the chemical vapor deposition system228can be used to form layers of materials on the source substrates203including mono-crystalline layers, polycrystalline layers, amorphous layers, epitaxial layers, or a combination thereof. The chemical vapor deposition system228can form layers of silicon, silicon oxides, silicon oxycarbide, tantalum, tungsten, silicon carbide, silicon nitride, titanium nitride, metals, alloys, and other materials suitable for chemical vapor deposition. For example, the chemical vapor deposition system can form planarization layers. The first wafer handling system214is capable of moving the source substrates203between the atmospheric handling system206and the various systems around the periphery of the first vacuum chamber210in a continuous vacuum. The second wafer handling system216is capable of moving the source substrates203around the second vacuum chamber212while maintaining the source substrates203in a continuous vacuum. The extreme ultraviolet reflective element production system200can transfer the source substrates203and the EUV mask blank204between the first wafer handling system214and the second wafer handling system216in a continuous vacuum.
Referring now toFIG.4, an embodiment of an extreme ultraviolet reflective element302is shown. In one or more embodiments, the extreme ultraviolet reflective element302is the EUV mask blank204ofFIG.3or the extreme ultraviolet mirror205ofFIG.3. The EUV mask blank204and the extreme ultraviolet mirror205are structures for reflecting the extreme ultraviolet light112ofFIG.2. The EUV mask blank204can be used to form the EUV reflective mask106shown inFIG.2. The extreme ultraviolet reflective element302includes a substrate304, a multilayer stack306of reflective layers, and a capping layer308. In one or more embodiments, the extreme ultraviolet mirror205is used to form reflecting structures for use in the condenser104ofFIG.2or the optical reduction assembly108ofFIG.2. The extreme ultraviolet reflective element302, which can be an EUV mask blank204, includes the substrate304, the multilayer stack306of reflective layers, the capping layer308, and an absorber310. The extreme ultraviolet reflective element302can be an EUV mask blank204, which is used to form the EUV reflective mask106ofFIG.2by patterning the absorber310with the layout of the circuitry required. In the following sections, the term EUV mask blank204is used interchangeably with the term extreme ultraviolet mirror205for simplicity. In one or more embodiments, the EUV mask blank204includes the components of the extreme ultraviolet mirror205with the absorber310added to form the mask pattern114ofFIG.2. The EUV mask blank204is an optically flat structure used for forming the EUV reflective mask106having the mask pattern114. In one or more embodiments, the reflective surface of the EUV mask blank204forms a flat focal plane for reflecting the incident light, such as the extreme ultraviolet light112ofFIG.2. The substrate304is an element for providing structural support to the extreme ultraviolet reflective element302.
In one or more embodiments, the substrate304is made from a material having a low coefficient of thermal expansion (CTE) to provide stability during temperature changes. In one or more embodiments, the substrate304has properties such as stability against mechanical cycling, thermal cycling, crystal formation, or a combination thereof. The substrate304according to one or more embodiments is formed from a material such as silicon, glass, oxides, ceramics, glass ceramics, or a combination thereof. The multilayer stack306is a structure that is reflective to the extreme ultraviolet light112. The multilayer stack306includes alternating reflective layers of a first reflective layer312and a second reflective layer314. The first reflective layer312and the second reflective layer314form a reflective pair316ofFIG.4. In a non-limiting embodiment, the multilayer stack306includes a range of 20-60 of the reflective pairs316for a total of up to 120 reflective layers. The first reflective layer312and the second reflective layer314can be formed from a variety of materials. In an embodiment, the first reflective layer312and the second reflective layer314are formed from silicon and molybdenum, respectively. Although the layers are shown as silicon and molybdenum, it is understood that the alternating layers can be formed from other materials or have other internal structures. The first reflective layer312and the second reflective layer314can have a variety of structures. In an embodiment, both the first reflective layer312and the second reflective layer314are formed with a single layer, multiple layers, a divided layer structure, non-uniform structures, or a combination thereof. Because most materials absorb light at extreme ultraviolet wavelengths, the optical elements used are reflective instead of transmissive as used in other lithography systems.
The multilayer stack306forms a reflective structure by having alternating thin layers of materials with different optical properties to create a Bragg reflector or mirror. In an embodiment, each of the alternating layers has dissimilar optical constants for the extreme ultraviolet light112. The alternating layers provide a resonant reflectivity when the period of the thickness of the alternating layers is one half the wavelength of the extreme ultraviolet light112. In an embodiment, for the extreme ultraviolet light112at a wavelength of 13 nm, the alternating layers are about 6.5 nm thick. It is understood that the sizes and dimensions provided are within normal engineering tolerances for typical elements. The multilayer stack306can be formed in a variety of ways. In an embodiment, the first reflective layer312and the second reflective layer314are formed with magnetron sputtering, ion sputtering systems, pulsed laser deposition, cathode arc deposition, or a combination thereof. In an illustrative embodiment, the multilayer stack306is formed using a physical vapor deposition technique, such as magnetron sputtering. In an embodiment, the first reflective layer312and the second reflective layer314of the multilayer stack306have the characteristics of being formed by the magnetron sputtering technique including precise thickness, low roughness, and clean interfaces between the layers. In an embodiment, the first reflective layer312and the second reflective layer314of the multilayer stack306have the characteristics of being formed by the physical vapor deposition including precise thickness, low roughness, and clean interfaces between the layers. The physical dimensions of the layers of the multilayer stack306formed using the physical vapor deposition technique can be precisely controlled to increase reflectivity. In an embodiment, the first reflective layer312, such as a layer of silicon, has a thickness of 4.1 nm. 
The second reflective layer314, such as a layer of molybdenum, has a thickness of 2.8 nm. The thickness of the layers dictates the peak reflectivity wavelength of the extreme ultraviolet reflective element. If the thickness of the layers is incorrect, the reflectivity at the desired wavelength of 13.5 nm can be reduced. In an embodiment, the multilayer stack306has a reflectivity of greater than 60%. In an embodiment, the multilayer stack306formed using physical vapor deposition has a reflectivity in a range of 66%-67%. In one or more embodiments, forming the capping layer308over the multilayer stack306formed with harder materials improves reflectivity. In some embodiments, reflectivity greater than 70% is achieved using low roughness layers, clean interfaces between layers, improved layer materials, or a combination thereof. In one or more embodiments, the capping layer308is a protective layer allowing the transmission of the extreme ultraviolet light112. In an embodiment, the capping layer308is formed directly on the multilayer stack306. In one or more embodiments, the capping layer308protects the multilayer stack306from contaminants and mechanical damage. In one embodiment, the multilayer stack306is sensitive to contamination by oxygen, carbon, hydrocarbons, or a combination thereof. The capping layer308according to an embodiment interacts with the contaminants to neutralize them. In one or more embodiments, the capping layer308is an optically uniform structure that is transparent to the extreme ultraviolet light112. The extreme ultraviolet light112passes through the capping layer308to reflect off of the multilayer stack306. In one or more embodiments, the capping layer308has a total reflectivity loss of 1% to 2%. In one or more embodiments, each of the different materials has a different reflectivity loss depending on thickness, but all of them will be in a range of 1% to 2%. In one or more embodiments, the capping layer308has a smooth surface.
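The layer thicknesses quoted above are consistent with the half-wavelength resonance rule stated earlier: the period of one Si/Mo reflective pair should be roughly λ/2. A small arithmetic sketch (the normal-incidence Bragg condition is a simplification; real designs also account for incidence angle and refraction):

```python
# Check the Si/Mo pair period against the half-wavelength resonance rule.

def pair_period_nm(si_nm: float, mo_nm: float) -> float:
    """Period of one reflective pair: Si thickness plus Mo thickness."""
    return si_nm + mo_nm

def resonant_period_nm(wavelength_nm: float) -> float:
    """Ideal pair period at normal incidence: half the EUV wavelength."""
    return wavelength_nm / 2.0
```

With the 4.1 nm silicon and 2.8 nm molybdenum layers from the text, the period is 6.9 nm, near the 6.75 nm half-wavelength value for 13.5 nm light (and the ~6.5 nm quoted earlier for 13 nm light); a thickness error shifts this period and hence the peak reflectivity wavelength.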
For example, the surface of the capping layer308can have a roughness of less than 0.2 nm RMS (root mean square measure). In another example, the surface of the capping layer308has a roughness of 0.08 nm RMS for a length in a range of 100 nm to 1 μm. The RMS roughness will vary depending on the range it is measured over. For the specific range of 100 nm to 1 micron, that roughness is 0.08 nm or less. Over a larger range the roughness will be higher. The capping layer308can be formed by a variety of methods. In an embodiment, the capping layer308is formed on or directly on the multilayer stack306with magnetron sputtering, ion sputtering systems, ion beam deposition, electron beam evaporation, radio frequency (RF) sputtering, atomic layer deposition (ALD), pulsed laser deposition, cathode arc deposition, or a combination thereof. In one or more embodiments, the capping layer308has the physical characteristics of being formed by the magnetron sputtering technique including precise thickness, low roughness, and clean interfaces between the layers. In an embodiment, the capping layer308has the physical characteristics of being formed by the physical vapor deposition including precise thickness, low roughness, and clean interfaces between the layers. In one or more embodiments, the capping layer308is formed from a variety of materials having a hardness sufficient to resist erosion during cleaning. In one embodiment, ruthenium is used as a capping layer material because it is a good etch stop and is relatively inert under the operating conditions. However, it is understood that other materials can be used to form the capping layer308. In specific embodiments, the capping layer308has a thickness in a range of 2.5 nm to 5.0 nm. In one or more embodiments, the absorber310is a layer that absorbs the extreme ultraviolet light112.
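RMS roughness, as used above, is the root mean square of the surface height deviations from the mean height over the measured length, which is why the value depends on the spatial range sampled. A minimal sketch of the computation on a sampled height profile:

```python
import math

def rms_roughness(heights_nm):
    """Root-mean-square roughness of a sampled height profile (in nm).

    Deviations are taken from the profile's mean height; as noted in
    the text, the result depends on the spatial range that is sampled.
    """
    mean = sum(heights_nm) / len(heights_nm)
    var = sum((h - mean) ** 2 for h in heights_nm) / len(heights_nm)
    return math.sqrt(var)
```

A profile alternating ±1 nm about its mean has an RMS roughness of exactly 1 nm, while a perfectly flat profile gives zero regardless of its absolute height.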
In an embodiment, the absorber310is used to form the pattern on the EUV reflective mask106by providing areas that do not reflect the extreme ultraviolet light112. The absorber310, according to one or more embodiments, comprises a material having a high absorption coefficient for a particular frequency of the extreme ultraviolet light112, such as about 13.5 nm. In an embodiment, the absorber310is formed directly on the capping layer308, and the absorber310is etched using a photolithography process to form the pattern of the EUV reflective mask106. According to one or more embodiments, the extreme ultraviolet reflective element302, such as the extreme ultraviolet mirror205, is formed with the substrate304, the multilayer stack306, and the capping layer308. The extreme ultraviolet mirror205has an optically flat surface and can efficiently and uniformly reflect the extreme ultraviolet light112. According to one or more embodiments, the extreme ultraviolet reflective element302, such as the EUV mask blank204, is formed with the substrate304, the multilayer stack306, the capping layer308, and the absorber310. The mask blank204has an optically flat surface and can efficiently and uniformly reflect the extreme ultraviolet light112. In an embodiment, the mask pattern114is formed with the absorber310of the EUV mask blank204. According to one or more embodiments, forming the absorber310over the capping layer308increases reliability of the EUV reflective mask106. The capping layer308acts as an etch stop layer for the absorber310. When the mask pattern114ofFIG.2is etched into the absorber310, the capping layer308beneath the absorber310stops the etching action to protect the multilayer stack306. In one or more embodiments, the absorber310is etch selective to the capping layer308. In some embodiments, the capping layer308comprises ruthenium, and the absorber310is etch selective to ruthenium. 
Referring now toFIG.5, an extreme ultraviolet mask blank400is shown as comprising a substrate414, a multilayer stack of reflective layers412on the substrate414, the multilayer stack of reflective layers412including a plurality of reflective layer pairs. In one or more embodiments, the plurality of reflective layer pairs are made from a material selected from a molybdenum (Mo) containing material and silicon (Si) containing material. In some embodiments, the plurality of reflective layer pairs comprise alternating layers of molybdenum and silicon. The extreme ultraviolet mask blank400further includes a capping layer422on the multilayer stack of reflective layers412, and there is an absorber420on the capping layer422. In one embodiment, the absorber420comprises a first layer420aand a second layer420b, which provides an absorber layer pair. In one or more embodiments, the plurality of reflective layers412are selected from a molybdenum (Mo) containing material and a silicon (Si) containing material and the capping layer422comprises ruthenium. In specific embodiments, there is a plurality of absorber layer pairs, which provides a multilayer stack420of absorber layers including a plurality of absorber layer pairs420a,420b,420c,420d,420e,420f, each pair comprised of (first layer420a/second layer420b, first layer420c/second layer420d, first layer420e/second layer420f). In one or more embodiments, the thickness of the first layer and the second layer is optimized for different materials and applications, and is typically in the range of 10-100 nm. In one or more embodiments, there is an absorber420on the capping layer, the absorber comprising a first layer (e.g.420a) selected from the group consisting of Mo, Nb, V, alloys of Mo, Nb and V, oxides of Mo, oxides of Nb, oxides of V, nitrides of Mo, nitrides of Nb and nitrides of V and a second layer (e.g.,420b) selected from the group consisting of TaSb, CSb, SbN, TaNi, TaCu and TaRu.
In specific embodiments each of the second layer materials TaSb, CSb, SbN, TaNi, TaCu and TaRu is an alloy, and in more specific embodiments, each of the materials is an amorphous alloy. In one or more embodiments, the absorber420comprising the first layer420aand the second layer420bforms a bi-layer structure in which the first layer is a phase-shifter, and the second layer is an additional absorber. This could achieve an etchable phase shift mask stack with “tunable” phase shift and reflectance values to optimize the performance of the absorber for different types of applications. In one or more embodiments, the first layers described herein, when used together with a second layer absorber, such as TaSb, CSb, SbN, TaNi, TaCu or TaRu, achieve close to 215 degrees of phase shift with a reflectance between 6 and 15%. This could lead to significant improvement in the performance of the mask in terms of depth of focus (DOF), normalized image log slope (NILS) and telecentricity error (TCE) compared with the state-of-the-art Ta based absorbers. DOF is associated with the process window size for a lithography process. NILS is a measure of the quality of aerial image in a lithography process. TCE is a measure of image shift with defocus in the lithography process. In one or more embodiments, mask blanks as described herein provide the highest possible DOF and NILS with the lowest possible TCE. In embodiments, the first layer of the bilayer is a metal selected from Mo, Nb, V and alloys thereof and oxides and nitrides thereof. A range of materials when used together with the absorber of the bilayer, such as TaSb, could achieve close to 215 degrees of phase shift with a reflectance between 6 and 15%. This could lead to significant improvement in the performance of the mask in terms of Depth of Focus (DOF), Normalized Image Log-Slope (NILS) and Telecentricity error (TCE) compared with the state-of-the-art Ta based absorbers.
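The ~215 degree phase shift quoted above can be related to phase-shifter thickness with a standard thin-film estimate: on a reflective mask the light traverses the absorber stack twice, so Δφ ≈ (4π·d·δ)/λ, where δ = 1 − n is the real refractive index decrement of the material. A hedged sketch of that arithmetic (the δ value used below is a made-up placeholder, not a measured constant for any of the alloys in the text, and interference effects in the real stack are ignored):

```python
import math

# Estimate reflective-mask phase shift from phase-shifter thickness.
# delta (= 1 - n) is a hypothetical placeholder; real EUV optical
# constants are material- and wavelength-specific.

def phase_shift_deg(thickness_nm: float, delta: float,
                    wavelength_nm: float = 13.5) -> float:
    """Phase shift in degrees for a double pass through the layer."""
    return math.degrees(4.0 * math.pi * thickness_nm * delta / wavelength_nm)

def thickness_for_phase_nm(phase_deg: float, delta: float,
                           wavelength_nm: float = 13.5) -> float:
    """Layer thickness needed to reach a target phase shift."""
    return math.radians(phase_deg) * wavelength_nm / (4.0 * math.pi * delta)
```

With a placeholder δ of 0.08 at 13.5 nm, roughly 50 nm of material would give the 215 degree shift; a full design would instead co-optimize thickness against both the phase target and the 6-15% reflectance window.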
Other materials that can be used instead of TaSb include any material that is etchable with a high k value, such as CSb, SbN, TaNi, TaCu and TaRu. In specific embodiments, the first layer 420a comprises molybdenum (Mo), or oxides and/or nitrides of Mo. In specific embodiments, the first layer 420a comprises niobium (Nb), or oxides and/or nitrides of Nb. In specific embodiments, the first layer 420a comprises vanadium (V), or oxides and/or nitrides of V. In specific embodiments, the second layer comprises TaSb. In more specific embodiments, the TaSb comprises a range of from about 21.9 wt. % to about 78.2 wt. % tantalum and a range of from about 21.8 wt. % to about 78.1 wt. % antimony. In specific embodiments, the second layer comprises CSb. In more specific embodiments, the CSb comprises a range of from about 0.3 wt. % to about 3.6 wt. % carbon and a range of from about 96.4 wt. % to about 99.7 wt. % antimony, or a range of from about 5.0 wt. % to about 10.8 wt. % carbon and a range of from about 89.2 wt. % to about 95.0 wt. % antimony. In specific embodiments, the second layer comprises SbN. In more specific embodiments, the SbN comprises a range of from about 78.8 wt. % to about 99.8 wt. % antimony and a range of from about 0.2 wt. % to about 21.2 wt. % nitrogen. In specific embodiments, the second layer comprises TaNi. In more specific embodiments, the TaNi comprises a range of from about 56.9 wt. % to about 94.6 wt. % tantalum and a range of from about 5.4 wt. % to about 43.1 wt. % nickel. In specific embodiments, the second layer comprises TaCu. In more specific embodiments, the TaCu comprises a range of from about 74.0 wt. % to about 94.2 wt. % tantalum and a range of from about 5.8 wt. % to about 26.0 wt. % copper, or a range of from about 13.0 wt. % to about 65.0 wt. % tantalum and a range of from about 35.0 wt. % to about 87.0 wt. % copper. In specific embodiments, the second layer comprises TaRu.
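The compositional ranges above are stated in weight percent; comparing them with atomic-ratio targets is a routine weight-to-atomic-percent conversion using standard molar masses. A minimal sketch (not part of the disclosure, shown here for the TaSb case):

```python
# Standard molar masses in g/mol
MOLAR_MASS = {"Ta": 180.95, "Sb": 121.76}


def wt_to_at_percent(wt: dict) -> dict:
    """Convert a weight-percent composition to atomic percent."""
    moles = {el: w / MOLAR_MASS[el] for el, w in wt.items()}
    total = sum(moles.values())
    return {el: 100.0 * m / total for el, m in moles.items()}
```

For example, the Ta-rich end of the TaSb range (78.2 wt. % Ta, 21.8 wt. % Sb) corresponds to roughly 70.7 atomic percent tantalum.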
In more specific embodiments, the TaRu comprises a range of from about 30.9 wt. % to about 80.7 wt. % tantalum and a range of from about 19.3 wt. % to about 69.1 wt. % ruthenium. According to one or more embodiments, the absorber layer pairs comprise a first absorber layer (420a, 420c, 420e) and a second absorber layer (420b, 420d, 420f); each of the first absorber layers (420a, 420c, 420e) and second absorber layers (420b, 420d, 420f) has a thickness in a range of from 0.1 nm to 10 nm, for example in a range of from 1 nm to 5 nm, or in a range of from 1 nm to 3 nm. In one or more specific embodiments, the thickness of the first layer 420a is 0.5 nm, 0.6 nm, 0.7 nm, 0.8 nm, 0.9 nm, 1 nm, 1.1 nm, 1.2 nm, 1.3 nm, 1.4 nm, 1.5 nm, 1.6 nm, 1.7 nm, 1.8 nm, 1.9 nm, 2 nm, 2.1 nm, 2.2 nm, 2.3 nm, 2.4 nm, 2.5 nm, 2.6 nm, 2.7 nm, 2.8 nm, 2.9 nm, 3 nm, 3.1 nm, 3.2 nm, 3.3 nm, 3.4 nm, 3.5 nm, 3.6 nm, 3.7 nm, 3.8 nm, 3.9 nm, 4 nm, 4.1 nm, 4.2 nm, 4.3 nm, 4.4 nm, 4.5 nm, 4.6 nm, 4.7 nm, 4.8 nm, 4.9 nm, or 5 nm. In one or more embodiments, the thickness of the first absorber layer and the second absorber layer of each pair is the same or different. For example, the first absorber layer and second absorber layer have a thickness such that there is a ratio of the first absorber layer thickness to second absorber layer thickness of 1:1, 1.5:1, 2:1, 2.5:1, 3:1, 3.5:1, 4:1, 4.5:1, 5:1, 6:1, 7:1, 8:1, 9:1, 10:1, 11:1, 12:1, 13:1, 14:1, 15:1, 16:1, 17:1, 18:1, 19:1, or 20:1, which results in the first absorber layer having a thickness that is equal to or greater than the second absorber layer thickness in each pair.
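Given a pair's total thickness and one of the thickness ratios listed above, the individual layer thicknesses follow by simple proportion. A minimal sketch (the function name and example values are illustrative):

```python
def pair_thicknesses(total_pair_nm: float, ratio_first_to_second: float):
    """Split one absorber pair's total thickness according to a
    first:second thickness ratio; returns (first_nm, second_nm)."""
    second = total_pair_nm / (1.0 + ratio_first_to_second)
    first = total_pair_nm - second
    return first, second
```

For example, a 4 nm pair at a 3:1 ratio yields a 3 nm first layer and a 1 nm second layer, both inside the 0.1-10 nm per-layer range stated above.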
Alternatively, the first absorber layer and second absorber layer have a thickness such that there is a ratio of the second absorber layer thickness to first absorber layer thickness of 1.5:1, 2:1, 2.5:1, 3:1, 3.5:1, 4:1, 4.5:1, 5:1, 6:1, 7:1, 8:1, 9:1, 10:1, 11:1, 12:1, 13:1, 14:1, 15:1, 16:1, 17:1, 18:1, 19:1, or 20:1, which results in the second absorber layer having a thickness that is greater than the first absorber layer thickness in each pair. According to one or more embodiments, the different absorber materials and the thicknesses of the absorber layers are selected so that extreme ultraviolet light is absorbed due to absorbance and due to a phase change that destructively interferes with light from the multilayer stack of reflective layers. While the embodiment shown in FIG. 5 shows three absorber layer pairs, 420a/420b, 420c/420d and 420e/420f, the claims should not be limited to a particular number of absorber layer pairs. According to one or more embodiments, the EUV mask blank 400 can include in a range of from 1 to 60 absorber layer pairs, or in a range of from 10 to 40 absorber layer pairs. According to one or more embodiments, the absorber layers have a thickness which provides less than 2% reflectivity and the desired etch properties. A supply gas can be used to further modify the material properties of the absorber layers; for example, nitrogen (N2) gas can be used to form nitrides of the materials provided above. The multilayer stack of absorber layers according to one or more embodiments is a repetitive pattern of individual thicknesses of different materials, so that the EUV light is not only absorbed due to absorbance but is also phase-shifted by the multilayer absorber stack, which destructively interferes with light from the multilayer stack of reflective materials beneath to provide better contrast.
Another aspect of the disclosure pertains to a method of manufacturing an extreme ultraviolet (EUV) mask blank comprising forming a multilayer stack of reflective layers on the substrate, the multilayer stack including a plurality of reflective layer pairs, forming a capping layer on the multilayer stack of reflective layers, and forming an absorber on the capping layer, the absorber comprising a first layer selected from the group consisting of Mo, Nb, V, alloys of Mo, Nb and V, oxides of Mo, oxides of Nb, oxides of V, nitrides of Mo, nitrides of Nb and nitrides of V, and a second layer selected from the group consisting of TaSb, CSb, SbN, TaNi, TaCu and TaRu. In some embodiments, the first layer is formed by deposition of metallic Mo, Nb, V and their alloys by magnetron sputtering with Ar or Kr. Nitrides or oxides of Mo, Nb and V can be formed by reactive sputtering using metallic targets of Mo, Nb, and V, which can be sputtered with Ar+N2 or Ar+O2+N2 gases for nitride deposition with or without oxygen doping. Alternatively, gas phase nitridation or oxidation can be utilized. Such nitridation or oxidation can be performed by depositing a thin layer of metallic Mo, Nb or V (e.g., 1 nm-2 nm), then stopping power and flowing N2 gas or N2+O2 gas at ˜2 mT for 5 s in a PVD chamber; this forms one cycle. Nitridation conditions can involve no power to the PVD chamber using 2 mT of N2 pressure. A cycle can be repeated until the expected metal nitride thickness is achieved. The EUV mask blank can have any of the characteristics of the embodiments described above with respect to FIG. 4 and FIG. 5, and the method can be performed in the system described with respect to FIG. 3. In another specific method embodiment, the different absorber layers are formed in a physical deposition chamber having a first cathode comprising a first absorber material and a second cathode comprising a second absorber material.
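The cyclic nitridation described above (deposit roughly 1-2 nm of metal, then nitride it, repeating until the expected thickness is reached) amounts to a simple ceiling division over the per-cycle thickness. A minimal sketch; the per-cycle default below is an assumed illustrative value within the 1-2 nm range stated above:

```python
import math


def nitridation_cycles(target_nm: float, per_cycle_nm: float = 1.5) -> int:
    """Number of deposit-then-nitride cycles needed to reach a target
    metal nitride thickness, assuming a fixed thickness per cycle."""
    if target_nm <= 0 or per_cycle_nm <= 0:
        raise ValueError("thicknesses must be positive")
    return math.ceil(target_nm / per_cycle_nm)
```

For example, a 6 nm target at 1.5 nm per cycle needs 4 cycles, while anything over 6 nm needs a fifth cycle.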
Referring now to FIG. 6, an upper portion of a multi-cathode chamber 500 is shown in accordance with an embodiment. The multi-cathode chamber 500 includes a base structure 501 with a cylindrical body portion 502 capped by a top adapter 504. The top adapter 504 has provisions for a number of cathode sources, such as cathode sources 506, 508, 510, 512, and 514, positioned around the top adapter 504. In one or more embodiments, the method forms an absorber layer that has a thickness in a range of from 5 nm to 60 nm. In one or more embodiments, the absorber layer has a thickness in a range of from 51 nm to 57 nm. In one or more embodiments, the materials used to form the absorber layer are selected to provide the desired etch properties of the absorber layer. In one or more embodiments, the alloy of the absorber layer is formed by co-sputtering an alloy absorber material in a physical deposition chamber, which can provide a much thinner absorber layer (less than 30 nm) while achieving less than 2% reflectivity and the desired etch properties. In an embodiment, the etch properties and other desired properties of the absorber layer can be tailored to specification by controlling the alloy percentage of each absorber material. In an embodiment, the alloy percentage can be precisely controlled by operating parameters such as voltage, pressure, flow, etc., of the physical vapor deposition chamber. In an embodiment, a process gas is used to further modify the material properties; for example, N2 and/or O2 gas is used to form nitrides and/or oxides of Mo, Nb, or V, or nitrides or oxides of TaSb, CSb, SbN, TaNi, TaCu or TaRu. In one or more embodiments, the alloys described herein comprise a dopant. The dopant may be selected from one or more of nitrogen or oxygen. In an embodiment, the dopant comprises oxygen. In an alternative embodiment, the dopant comprises nitrogen. In an embodiment, the dopant is present in the alloy in an amount in the range of about 0.1 wt. % to about 5 wt.
%, based on the weight of the alloy. In other embodiments, the dopant is present in the alloy in an amount of about 0.1 wt. %, 0.2 wt. %, 0.3 wt. %, 0.4 wt. %, 0.5 wt. %, 0.6 wt. %, 0.7 wt. %, 0.8 wt. %, 0.9 wt. %, 1.0 wt. %, 1.1 wt. %, 1.2 wt. %, 1.3 wt. %, 1.4 wt. %, 1.5 wt. %, 1.6 wt. %, 1.7 wt. %, 1.8 wt. %, 1.9 wt. %, 2.0 wt. %, 2.1 wt. %, 2.2 wt. %, 2.3 wt. %, 2.4 wt. %, 2.5 wt. %, 2.6 wt. %, 2.7 wt. %, 2.8 wt. %, 2.9 wt. %, 3.0 wt. %, 3.1 wt. %, 3.2 wt. %, 3.3 wt. %, 3.4 wt. %, 3.5 wt. %, 3.6 wt. %, 3.7 wt. %, 3.8 wt. %, 3.9 wt. %, 4.0 wt. %, 4.1 wt. %, 4.2 wt. %, 4.3 wt. %, 4.4 wt. %, 4.5 wt. %, 4.6 wt. %, 4.7 wt. %, 4.8 wt. %, 4.9 wt. %, or 5.0 wt. %. In one or more embodiments, the alloy of the absorber layer is a co-sputtered alloy absorber material formed in a physical deposition chamber, which can provide a much thinner absorber layer thickness (less than 30 nm) while achieving less than 2% reflectivity and suitable etch properties. In one or more embodiments, the alloy of the absorber layer can be co-sputtered by gases selected from one or more of argon (Ar), oxygen (O2), or nitrogen (N2). In an embodiment, the alloy of the absorber layer can be co-sputtered by a mixture of argon and oxygen gases (Ar+O2). In some embodiments, co-sputtering by a mixture of argon and oxygen forms an oxide of each metal of an alloy. In other embodiments, co-sputtering by a mixture of argon and oxygen does not form an oxide of each metal of an alloy. In an embodiment, the alloy of the absorber layer can be co-sputtered by a mixture of argon and nitrogen gases (Ar+N2). In some embodiments, co-sputtering by a mixture of argon and nitrogen forms a nitride of each metal of an alloy. In other embodiments, co-sputtering by a mixture of argon and nitrogen does not form a nitride of a metal alloy. In an embodiment, the alloy of the absorber layer can be co-sputtered by a mixture of argon, oxygen and nitrogen gases (Ar+O2+N2).
In some embodiments, co-sputtering by a mixture of argon, oxygen and nitrogen forms an oxide and/or nitride of each metal. In other embodiments, co-sputtering by a mixture of argon, oxygen and nitrogen does not form an oxide or a nitride of a metal. In an embodiment, the etch properties and/or other properties of the absorber layer can be tailored to specification by controlling the alloy percentage(s), as discussed above. In an embodiment, the alloy percentage(s) can be precisely controlled by operating parameters such as voltage, pressure, flow, etc., of the physical vapor deposition chamber. In an embodiment, a process gas is used to further modify the material properties; for example, N2 gas is used to form nitrides of the materials described herein. In one or more embodiments, as used herein, "co-sputtering" means that two targets, one target comprising a first metal and the second target comprising a second metal, are sputtered at the same time using one or more gases selected from argon (Ar), oxygen (O2), or nitrogen (N2) to deposit/form an absorber layer comprising an alloy of the materials described herein. The multi-cathode chamber 500 can be part of the system shown in FIG. 3. In an embodiment, an extreme ultraviolet (EUV) mask blank production system comprises a substrate handling vacuum chamber for creating a vacuum, a substrate handling platform, in the vacuum, for transporting a substrate loaded in the substrate handling vacuum chamber, and multiple sub-chambers, accessed by the substrate handling platform, for forming an EUV mask blank, including a multilayer stack of reflective layers on the substrate, the multilayer stack including a plurality of reflective layer pairs, a capping layer on the multilayer stack of reflective layers, and an absorber on the capping layer, the absorber layer made from the materials described herein.
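As a rough model of how co-sputtering two targets sets the alloy percentage, the film's weight fraction can be estimated from each target's deposition rate and the material's bulk density. This is a simplified sketch that ignores intermixing, resputtering and film-density changes; the densities used in the example are nominal bulk values for Ta and Sb, not values from the disclosure:

```python
def cosputter_wt_fraction(rate_a_nm_s: float, density_a: float,
                          rate_b_nm_s: float, density_b: float) -> float:
    """Estimate the weight fraction of material A in a co-sputtered film
    from each target's deposition rate (nm/s) and bulk density (g/cm^3).
    Mass flux per unit area is proportional to rate * density."""
    mass_a = rate_a_nm_s * density_a
    mass_b = rate_b_nm_s * density_b
    return mass_a / (mass_a + mass_b)
```

For equal deposition rates with bulk densities of about 16.6 g/cm^3 (Ta) and 6.7 g/cm^3 (Sb), this model predicts a film of roughly 71 wt. % tantalum; tuning either rate (via cathode power, pressure, or gas flow) moves the composition across the ranges listed earlier.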
The system can be used to make the EUV mask blanks shown with respect to FIG. 4 or FIG. 5 and have any of the properties described with respect to the EUV mask blanks described with respect to FIG. 4 or FIG. 5 above. Processes may generally be stored in the memory as a software routine that, when executed by the processor, causes the process chamber to perform processes of the present disclosure. The software routine may also be stored and/or executed by a second processor (not shown) that is remotely located from the hardware being controlled by the processor. Some or all of the method of the present disclosure may also be performed in hardware. As such, the process may be implemented in software and executed using a computer system, in hardware as, e.g., an application specific integrated circuit or other type of hardware implementation, or as a combination of software and hardware. The software routine, when executed by the processor, transforms the general purpose computer into a specific purpose computer (controller) that controls the chamber operation such that the processes are performed. Reference throughout this specification to "one embodiment," "certain embodiments," "one or more embodiments" or "an embodiment" means that a particular feature, structure, material, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Thus, the appearances of the phrases such as "in one or more embodiments," "in certain embodiments," "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily referring to the same embodiment of the disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments.
Although the disclosure herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present disclosure. It will be apparent to those skilled in the art that various modifications and variations can be made to the method and apparatus of the present disclosure without departing from the spirit and scope of the disclosure. Thus, it is intended that the present disclosure include modifications and variations that are within the scope of the appended claims and their equivalents.
11860534

DETAILED DESCRIPTION

It is to be understood that the following disclosure provides many different embodiments, or examples, for implementing different features of the invention. Specific embodiments or examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, dimensions of elements are not limited to the disclosed range or values, but may depend upon process conditions and/or desired properties of the device. Moreover, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed interposing the first and second features, such that the first and second features may not be in direct contact. Various features may be arbitrarily drawn in different scales for simplicity and clarity. In the accompanying drawings, some layers/features may be omitted for simplification. Further, spatially relative terms, such as "beneath," "below," "lower," "above," "upper" and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. In addition, the term "made of" may mean either "comprising" or "consisting of." Further, in the following fabrication process, there may be one or more additional operations in between the described operations, and the order of operations may be changed.
In the present disclosure, the phrase "at least one of A, B and C" means any of A, B, C, A+B, A+C, B+C or A+B+C, and does not mean one from A, one from B and one from C, unless otherwise explained. Materials, configurations, structures, operations and/or dimensions explained with one embodiment can be applied to other embodiments, and detailed description thereof may be omitted. EUV lithography is one of the crucial techniques for extending Moore's law. However, due to wavelength scaling from 193 nm (ArF) to 13.5 nm, the EUV light suffers from strong power decay due to environmental absorption. Even though a stepper/scanner chamber is operated under vacuum to prevent strong EUV absorption by gas, maintaining a high EUV transmittance from the EUV light source to a wafer is still an important factor in EUV lithography. A pellicle generally requires a high transparency and a low reflectivity. In UV or DUV lithography, the pellicle film is made of a transparent resin film. In EUV lithography, however, a resin based film would not be acceptable, and a non-organic material, such as a polysilicon, silicide or metal film, is used. Carbon nanotubes (CNTs) are one of the materials suitable for a pellicle for an EUV reflective photo mask, because CNTs have a high EUV transmittance of more than 96.5%. Generally, a pellicle for an EUV reflective mask requires the following properties: (1) a long life time in the hydrogen-radical-rich operating environment of an EUV stepper/scanner; (2) strong mechanical strength to minimize the sagging effect during vacuum pumping and venting operations; (3) a high or perfect blocking property for particles larger than about 20 nm (killer particles); and (4) good heat dissipation to prevent the pellicle from being burnt out by EUV radiation. In the present disclosure, a pellicle for an EUV photo mask includes a network membrane having a plurality of nanotubes and a two-dimensional material layer covering the network membrane.
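The CNT transmittance figure quoted above (more than 96.5% at 13.5 nm) can be sanity-checked with a Beer-Lambert estimate. The β value and fill factor below are illustrative assumptions, a porous nanotube network having a much lower effective density than a solid carbon film:

```python
import math

LAMBDA_NM = 13.5  # EUV wavelength in nanometers


def membrane_transmittance(beta: float, thickness_nm: float,
                           fill: float = 1.0) -> float:
    """Single-pass EUV intensity transmittance of a thin membrane
    (Beer-Lambert), with a fill factor < 1 modeling the porosity of a
    nanotube network relative to a dense film."""
    return math.exp(-4 * math.pi * beta * thickness_nm * fill / LAMBDA_NM)
```

With an assumed β on the order of 0.007 for carbon and a fill factor of 0.25, a 20 nm network membrane comes out just above the 96.5% figure, illustrating why a sparse network can transmit far more EUV than a solid film of the same thickness.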
Such a pellicle has a high EUV transmittance, improved mechanical strength, blocks killer particles from falling on an EUV mask, and/or has improved durability. FIGS. 1A, 1B and 1C show an EUV pellicle 10 mounted on an EUV reflective mask 5 in accordance with an embodiment of the present disclosure. FIG. 1A is a cross sectional view in the X direction, FIG. 1B is a cross sectional view in the Y direction, and FIG. 1C is a top (plan) view. In some embodiments, a pellicle 10 for an EUV reflective mask includes a first cover layer 20, a second cover layer 30 and a main network membrane 100 disposed between the first cover layer 20 and the second cover layer 30. In some embodiments, the main network membrane includes a plurality of nanomaterials, such as nanotubes and/or nano-flakes. In some embodiments, a support frame 15 is attached to the first cover layer 20 to maintain a space between the pellicle membrane and the EUV mask 5 (pattern area) when mounted on the EUV mask 5. One or both of the first cover layer 20 and the second cover layer 30 include a two-dimensional material in which one or more two-dimensional layers are stacked. Here, a "two-dimensional" layer generally refers to one or a few crystalline layers of an atomic matrix or a network having a thickness within the range of about 0.1-5 nm, in some embodiments. The support frame 15 of the pellicle is attached to the surface of the EUV photo mask 5 with an appropriate bonding material. In some embodiments, the bonding material is an adhesive, such as acrylic or silicon based glue or an A-B cross link type glue. The size of the frame structure is larger than the area of the black borders of the EUV photo mask so that the pellicle covers not only the circuit pattern area of the photo mask but also the black borders. In some embodiments, the two-dimensional materials of the first cover layer 20 and the second cover layer 30 are the same or different from each other.
In some embodiments, the first cover layer includes a first two-dimensional material and the second cover layer includes a second two-dimensional material. In some embodiments, the two-dimensional material for the first cover layer 20 and/or the second cover layer 30 includes at least one of boron nitride (BN), graphene, and/or transition metal dichalcogenides (TMDs), represented by MX2, where M=Mo, W, Pd, Pt, and/or Hf, and X=S, Se and/or Te. In some embodiments, a TMD is one of MoS2, MoSe2, WS2 or WSe2. In some embodiments, a total thickness of each of the first cover layer 20 and the second cover layer 30 is in a range from 0.3 nm to 3 nm, and is in a range from about 0.5 nm to about 1.5 nm in other embodiments. In some embodiments, a number of the two-dimensional layers of each of the two-dimensional materials of the first and/or second cover layers is 1 to about 20, and is 2 to about 10 in other embodiments. When the thickness and/or the number of layers is greater than these ranges, EUV transmittance of the pellicle 10 may be decreased, and when the thickness and/or the number of layers is smaller than these ranges, mechanical strength of the pellicle may be insufficient. In some embodiments, as shown in FIGS. 1A and 1B, the first cover layer 20 and the second cover layer 30 are sealed at the periphery thereof to fully encapsulate the main network membrane 100. In some embodiments, the first cover layer 20 and the second cover layer 30 form a vacuum sealed structure. In some embodiments, a pressure inside the vacuum sealed structure is about 0.01 Pa to about 100 Pa. If the inside pressure is too high, for example, higher than an inside pressure of an EUV lithography apparatus in operation, the pellicle may rupture due to the pressure difference. In some embodiments, one or more vent holes are formed at the first cover layer 20 and/or the second cover layer 30.
In some embodiments, a protection layer 40 is further disposed over the first cover layer 20, the second cover layer 30 and the support frame 15, as shown in FIGS. 1A and 1B. In some embodiments, the protection layer 40 includes at least one layer of an oxide, such as HfO2, Al2O3, ZrO2, Y2O3, or La2O3. In some embodiments, the protection layer 40 includes at least one layer of non-oxide compounds, such as B4C, YN, Si3N4, BN, NbN, RuNb, YF3, TiN, or ZrN. In some embodiments, the protection layer 40 includes at least one metal layer made of, for example, Ru, Nb, Y, Sc, Ni, Mo, W, Pt, or Bi. In some embodiments, the protection layer 40 is a single layer, and in other embodiments, two or more layers of these materials are used as the protection layer 40. In some embodiments, a thickness of the protection layer is in a range from 0.1 nm to 5 nm, and is in a range from about 0.2 nm to about 2.0 nm in other embodiments. When the thickness of the protection layer 40 is greater than these ranges, EUV transmittance of the pellicle 10 may be decreased, and when the thickness of the protection layer 40 is smaller than these ranges, the mechanical strength of the pellicle may be insufficient. By using the first and/or second cover layer and/or the protection layer, which do not have holes, such as openings and/or spaces greater than about 10-20 nm, it is possible to fully block killer particles larger than about 20 nm from passing through the main network membrane 100 and falling on the surface of the EUV mask 5. FIGS. 2A, 2B, 2C, 2D, 2E, 2F and 2G show various network membranes of a pellicle for an EUV photo mask in accordance with embodiments of the present disclosure. In some embodiments, the network membrane 100 includes a plurality of nanotubes. In some embodiments, the plurality of nanotubes are randomly arranged to form a network structure.
In some embodiments, a diameter of each of the plurality of nanotubes is in a range from 0.5 nm to 20 nm, and is in a range from about 1 nm to about 10 nm in other embodiments. In some embodiments, a length of each of the plurality of nanotubes is in a range from about 0.5 μm to about 50 μm, and is in a range from about 1.0 μm to about 20 μm in other embodiments. In some embodiments, the plurality of nanotubes are carbon nanotubes, boron nitride nanotubes, and/or TMD nanotubes, where TMD is represented by MX2, where M=Mo, W, Pd, Pt, and/or Hf, and X=S, Se and/or Te. In some embodiments, the plurality of nanotubes are MoS2 nanotubes, MoSe2 nanotubes, WS2 nanotubes or WSe2 nanotubes. In some embodiments, the plurality of nanotubes include only one type of nanotubes in terms of material and structure. In some embodiments, the plurality of nanotubes include nanotubes of the same material. In some embodiments, the main network membrane 100 only includes single wall nanotubes 111, as shown in FIG. 2A. In other embodiments, the main network membrane 100 only includes multiwall (e.g., double wall) nanotubes 113, as shown in FIG. 2B. A multiwall nanotube includes an inner tube and one or more outer tubes coaxially disposed around the inner tube. In some embodiments, the outer tube is movable along the axial direction with respect to the inner tube, and in other embodiments, the outer tube is fixed on the outer surface of the inner tube. In some embodiments, a diameter of each of the single wall nanotubes is in a range from about 0.5 nm to about 5 nm, and is in a range from about 1 nm to about 2 nm in other embodiments. In some embodiments, a diameter of each of the multiwall nanotubes is in a range from about 3 nm to about 20 nm, and is in a range from about 5 nm to about 10 nm in other embodiments. In some embodiments, the plurality of nanotubes include two or more types of nanotubes in terms of material and structure.
In some embodiments, the plurality of nanotubes include single wall nanotubes made of two or more materials (a mixture of different material nanotubes). For example, in some embodiments, the plurality of nanotubes include a plurality of first nanotubes and a plurality of second nanotubes made of a different material from the plurality of first nanotubes, and both of them are single wall nanotubes. In some embodiments, the main network membrane 100 includes a plurality of nanotubes 111 which are single wall nanotubes, and a plurality of nanotubes 113 which are multiwall (e.g., double wall) nanotubes, as shown in FIG. 2C. In some embodiments, an amount (weight) of the single wall nanotubes 111 is greater than an amount of the multiwall nanotubes 113. In some embodiments, the plurality of single wall nanotubes 111 are made of the same material as the plurality of multiwall nanotubes 113. For example, the plurality of single wall nanotubes 111 are single wall carbon nanotubes, and the plurality of multiwall nanotubes are multiwall carbon nanotubes. In other embodiments, the plurality of single wall nanotubes 111 are made of a different material from the plurality of multiwall nanotubes 113. For example, the plurality of single wall nanotubes 111 are single wall TMD nanotubes, and the plurality of multiwall nanotubes are multiwall carbon nanotubes. In some embodiments, the plurality of nanotubes are multiwall nanotubes made of two or more different materials (a mixture of two types of multiwall nanotubes). In some embodiments, the main network membrane 100 includes a plurality of nanotubes 111 and a plurality of flakes 121 (nano-flakes) made of a two-dimensional material in which one or more two-dimensional layers are stacked, as shown in FIGS. 2D-2F.
In some embodiments, the two-dimensional material flakes 121 include at least one of boron nitride (BN), graphene, and/or transition metal dichalcogenides (TMDs), represented by MX2, where M=Mo, W, Pd, Pt, and/or Hf, and X=S, Se and/or Te. In some embodiments, a TMD is one of MoS2, MoSe2, WS2 or WSe2. In some embodiments, a thickness of the two-dimensional material flakes 121 is in a range from 0.3 nm to 3 nm, and is in a range from about 0.5 nm to about 1.5 nm in other embodiments. In some embodiments, a number of the two-dimensional layers of the two-dimensional material flakes 121 is 1 to about 20, and is 2 to about 10 in other embodiments. When the thickness and/or the number of layers is greater than these ranges, EUV transmittance of the pellicle 10 may be decreased, and when the thickness and/or the number of layers is smaller than these ranges, mechanical strength of the pellicle may be insufficient. In some embodiments, the shape of the two-dimensional material flakes 121 is random. In other embodiments, the shape of the two-dimensional material flakes 121 is triangular or hexagonal. In certain embodiments, the shape of the two-dimensional material flakes 121 is a triangle formed by three atoms or a hexagon formed by six atoms. In some embodiments, a size (area) of each of the two-dimensional material flakes 121 is in a range from about 10 nm2 to about 10 μm2, and is in a range from about 100 nm2 to about 1 μm2 in other embodiments. In some embodiments, the two-dimensional material flakes 121 are embedded in or mixed with a plurality of single wall nanotubes 111, as shown in FIG. 2D. In some embodiments, the two-dimensional material flakes 121 are embedded in or mixed with a plurality of multiwall nanotubes 113, as shown in FIG. 2E. In some embodiments, the two-dimensional material flakes 121 are embedded in or mixed with a plurality of single wall nanotubes 111 and a plurality of multiwall nanotubes 113, as shown in FIG. 2F.
In some embodiments, an amount (weight) of the two-dimensional material flakes 121 is in a range from about 5% to about 30% with respect to a total weight of the network membrane 100, and is in a range from about 10% to about 20% in other embodiments. When the amount of two-dimensional material flakes is greater than these ranges, the EUV transmittance of the pellicle 10 may be decreased, and when the amount of two-dimensional material flakes is smaller than these ranges, the mechanical strength of the pellicle may be insufficient. In some embodiments, the network membrane 100 includes multiwall nanotubes 117 each having an inner tube and one or more outer tubes, where the inner tubes and the outer tubes are made of different materials, as shown in FIG. 2G. In some embodiments, each of the multiwall nanotubes 117 includes an inner tube formed of carbon nanotubes, boron nitride nanotubes, and/or TMD nanotubes, and a coating layer as the outer tubes. In some embodiments, the coating layer includes one of an oxide, such as HfO2, Al2O3, ZrO2, Y2O3, or La2O3; a non-oxide compound, such as B4C, YN, Si3N4, BN, NbN, RuNb, YF3, TiN, or ZrN; and/or a metal, such as Ru, Nb, Y, Sc, Ni, Mo, W, Pt, or Bi. In some embodiments, the coating layer is made of the same material as the protection layer 40. FIGS. 3A, 3B, 3C, 3D, 3E, 3F, 3G, 3H, 3I and 3J show various views of network membranes of a pellicle for an EUV photo mask in accordance with embodiments of the present disclosure. In some embodiments, the network membrane 100 has a single layer structure or a multilayer structure. In some embodiments, the network membrane 100 has a single layer 110 of a plurality of single or multi wall nanotubes, as shown in FIG. 3A. In some embodiments, the network membrane 100 has two layers of different type nanotubes 110 and 112, as shown in FIG. 3B. The thicknesses of the layer 110 and the layer 112 are the same or different from each other. In some embodiments, the network membrane 100 has three layers of nanotubes 110, 112 and 114, as shown in FIG. 3C.
At least adjacent layers are of different types in some embodiments. The thicknesses of the layers110,112and114are the same as or different from each other. In some embodiments, the network membrane100has a single layer115of a mixture of different type nanotubes, as shown inFIG.3D. In some embodiments, the network membrane100has a nanotube layer110and a two-dimensional flake layer120, as shown inFIGS.3E and3F. The thicknesses of the layer110and the layer120are the same as or different from each other. The layer110can be a mixed layer115as shown inFIG.3D. In some embodiments, the network membrane100has a two-dimensional flake layer120disposed between a first nanotube layer110and a second nanotube layer112, as shown inFIG.3G. In some embodiments, the first and second nanotube layers are of the same type or different types. In some embodiments, the network membrane100has a nanotube layer110disposed between a first two-dimensional flake layer120and a second two-dimensional flake layer122, as shown inFIG.3H. In some embodiments, the first and second two-dimensional flake layers are made of the same material or different materials from each other. In some embodiments, the network membrane100has a nanotube layer110, a first two-dimensional flake layer120over the nanotube layer110, and a second two-dimensional flake layer122disposed over the first two-dimensional flake layer120, as shown inFIG.3I. In some embodiments, the network membrane100has one or more nanotube layers of the same type or different types and one or more two-dimensional flake layers of the same material or different materials. In some embodiments, the network membrane100has a single layer125of a mixture of nanotubes and two-dimensional flakes, as shown inFIG.3J. FIGS.4A,4B,4C,4D,4E and4Fshow various views of pellicles for an EUV photo mask in accordance with an embodiment of the present disclosure. FIG.4Ais the same asFIGS.1A-1C. In some embodiments, a pellicle does not include a protection layer, as shown inFIG.4B.
In some embodiments, the first cover layer is not used and only the second cover layer30is disposed over the main network membrane100, as shown inFIG.4C. In some embodiments, the second cover layer is not used and only the first cover layer20is disposed over the main network membrane100, as shown inFIG.4D. In some embodiments, the first cover layer is not used, only the second cover layer30is disposed over the main network membrane100, and a protection layer40is formed on the second cover layer30and the network membrane100, forming a covered network layer119, as shown inFIG.4E. In some embodiments, the second cover layer is not used, only the first cover layer20is disposed over the main network membrane100, and a protection layer40is formed on the first cover layer20and the network membrane100, forming a covered network layer119, as shown inFIG.4F. FIG.5Ashows a manufacturing process of a network membrane andFIG.5Bshows a flow chart thereof in accordance with an embodiment of the present disclosure. In some embodiments, nanotubes are dispersed in a solution, as shown inFIG.5A. In some embodiments, the nanotubes are carbon nanotubes formed by various methods, such as arc-discharge, laser ablation or chemical vapor deposition (CVD) methods. Similarly, BN nanotubes and TMD nanotubes are also formed by a CVD process. The solution includes a solvent, such as water or an organic solvent, and in some embodiments a surfactant, such as sodium dodecyl sulfate (SDS). The nanotubes are one type or two or more types of nanotubes (in material and/or wall structure). As shown inFIG.5A, a support membrane is placed between a chamber or a cylinder in which the nanotube dispersed solution is disposed and a vacuum chamber. In some embodiments, the support membrane is an organic or inorganic porous or mesh material. In some embodiments, the support membrane is a woven or non-woven fabric. In some embodiments, the support membrane has a circular shape in which a pellicle size or a 150 mm×150 mm square (the size of an EUV mask) can be placed.
As shown inFIG.5A, the pressure in the vacuum chamber is reduced so that a pressure is applied to the solvent in the chamber or cylinder. Since the mesh or pore size of the support membrane is sufficiently smaller than the size of the nanotubes, the nanotubes are captured by the support membrane while the solvent passes through the support membrane. The support membrane on which the nanotubes are deposited is detached from the filtration apparatus ofFIG.5Aand then is dried. In some embodiments, the deposition by filtration is repeated so as to obtain a desired thickness of the nanotube network layer as shown inFIG.5B. In some embodiments, after the deposition of the nanotubes in the solution, other nanotubes are dispersed in the same or new solution and the filter-deposition is repeated. In other embodiments, after the nanotubes are dried, another filter-deposition is performed. In the repetition, the same type of nanotubes is used in some embodiments, and different types of nanotubes are used in other embodiments. FIG.6shows a manufacturing process of a network membrane in accordance with an embodiment of the present disclosure. When the main network membrane100includes nanotubes and two-dimensional material flakes, the deposition by filtration for nanotubes and the deposition by filtration for the flakes are repeated as shown inFIG.6. In some embodiments, a mixture of nanotubes and flakes are dispersed in the solvent, and the deposition by filtration is performed to form a mixed network layer of nanotubes and two-dimensional material flakes. Two-dimensional material layer(s) are formed over a substrate by a CVD method, and then the deposited layer is peeled off from the substrate. After the two-dimensional material layer is peeled off, the layer is crushed into flakes in some embodiments. 
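Because the filter-deposition step is repeated until a desired network thickness is reached, the number of repetitions can be estimated from a per-pass deposition increment. The following is a hypothetical sketch; the constant-increment assumption and the names are illustrative, not taken from the disclosure:

```python
import math


def filtration_passes(target_thickness_nm: float, nm_per_pass: float) -> int:
    """Estimate how many filter-deposition repetitions are needed to reach
    a target network-membrane thickness, assuming each pass deposits a
    roughly constant thickness increment (a simplifying assumption; the
    actual increment is process- and dispersion-dependent)."""
    if target_thickness_nm <= 0 or nm_per_pass <= 0:
        raise ValueError("thickness values must be positive")
    return math.ceil(target_thickness_nm / nm_per_pass)
```

For instance, reaching a 20 nm membrane at roughly 6 nm per pass would take four repetitions under this simplification.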
FIGS.7A and7B to12A and12Bshow cross sectional views (the “A” figures) and plan (top) views (the “B” figures) of the various stages for manufacturing a pellicle for an EUV photo mask in accordance with an embodiment of the present disclosure. It is understood that additional operations can be provided before, during, and after the processes shown byFIGS.7A-12B, and some of the operations described below can be replaced or eliminated, for additional embodiments of the method. The order of the operations/processes may be interchangeable. As shown inFIG.5A,5B or6, a main network membrane100is formed on a support membrane80by deposition by filtering. The main network membrane100is then detached from a deposition apparatus, as shown inFIGS.7A and7B. Then, as shown inFIGS.8A and8B, a first cover layer20is formed over the main network membrane100in some embodiments. The first cover layer20, which is a two-dimensional material, is formed by, for example, a CVD method on a substrate, and then the deposited two-dimensional layer(s) is peeled off from the substrate. The peeled two-dimensional layer(s) is subsequently transferred over the main network layer100formed on the support membrane80, as shown inFIGS.8A and8B. In some embodiments, a TMD layer represented by MX2is formed by CVD. In some embodiments, a MoS2layer is formed by CVD using source gases, such as a Mo(CO)6gas, a MoCl5gas, and/or a MoOCl4gas as a Mo source; and a H2S gas and/or a dimethyl sulfide gas as a S source. In other embodiments, a MoO3gas is sublimed from a solid MoO3source or a MoCl5source, and/or a S gas is sublimed from a solid S source. Solid sources of Mo and S are placed in a reaction chamber, and a carrier gas containing an inert gas, such as Ar, N2and/or He, flows in the reaction chamber. The solid sources are heated to generate gaseous sources by sublimation, and the generated gaseous sources react to form MoS2molecules. The MoS2molecules are then deposited on the substrate.
The substrate is appropriately heated in some embodiments. In other embodiments, the entire reaction chamber is heated by induction heating. Other TMD layers can also be formed by CVD using suitable source gases. For example, metal oxides, such as WO3, PdO2and PtO2, can be used as sublimation sources for W, Pd and Pt, respectively, and metal compounds, such as W(CO)6, WF6, WOCl4, PtCl2and PdCl2, can also be used as metal sources. In some embodiments, the substrate on which the TMD two-dimensional layer is formed includes one of Si (110), γ-Al2O3(110), Ga2O3(010) or MgO (110). In other embodiments, a layer of hexagonal boron nitride (h-BN) or graphene is formed as the first cover layer20over a substrate by CVD. In some embodiments, the substrate includes one of SiC (0001), Si (111), or Ge (111). Then, as shown inFIGS.9A and9B, a support frame15is attached to the first cover layer20. In some embodiments, the support frame15is formed of one or more layers of crystalline silicon, polysilicon, silicon oxide, silicon nitride, ceramic, metal or organic material. In some embodiments, as shown inFIG.9B, the support frame15has a rectangular (including square) frame shape, which is larger than the black border area of an EUV mask and smaller than the substrate of the EUV mask. Next, as shown inFIGS.10A and10B, the first cover layer20, the main network membrane100and the support membrane80are cut into a rectangular shape having the same size as or slightly larger than the support frame15, and then the support membrane80is detached or removed, in some embodiments. When the support membrane80is made of an organic material, the support membrane80is removed by wet etching using an organic solvent. Further, as shown inFIGS.11A and11B, a second cover layer30is formed over the main network membrane100. The operations for forming the second cover layer30, which is a two-dimensional material, are the same as or similar to those for the first cover layer20as set forth above.
In some embodiments, the first cover layer20and the second cover layer30are sealed at the periphery thereof to fully encapsulate the main network membrane100. In some embodiments, the second cover layer30has a flange portion at which the second cover layer30is fixed or bonded to the first cover layer20, as shown inFIG.11C. In other embodiments, the second cover layer30is attached to the sides of the first cover layer20and the support frame15, as shown inFIG.11D. Further, as shown inFIGS.12A and12B, a protection layer40is formed over the first cover layer20, the second cover layer30and the support frame15. In some embodiments, the protection layer40is formed by CVD, physical vapor deposition (PVD) or atomic layer deposition (ALD). FIG.13Ashows a flowchart of a method of making a semiconductor device, andFIGS.13B,13C,13D and13Eshow a sequential manufacturing operation of the method of making a semiconductor device in accordance with embodiments of the present disclosure. A semiconductor substrate or other suitable substrate to be patterned to form an integrated circuit thereon is provided. In some embodiments, the semiconductor substrate includes silicon. Alternatively or additionally, the semiconductor substrate includes germanium, silicon germanium or other suitable semiconductor material, such as a Group III-V semiconductor material. At S801ofFIG.13A, a target layer to be patterned is formed over the semiconductor substrate. In certain embodiments, the target layer is the semiconductor substrate. In some embodiments, the target layer includes a conductive layer, such as a metallic layer or a polysilicon layer; a dielectric layer, such as silicon oxide, silicon nitride, SiON, SiOC, SiOCN, SiCN, hafnium oxide, or aluminum oxide; or a semiconductor layer, such as an epitaxially formed semiconductor layer. In some embodiments, the target layer is formed over an underlying structure, such as isolation structures, transistors or wirings.
At S802ofFIG.13A, a photo resist layer is formed over the target layer, as shown inFIG.13B. The photo resist layer is sensitive to the radiation from the exposing source during a subsequent photolithography exposing process. In the present embodiment, the photo resist layer is sensitive to EUV light used in the photolithography exposing process. The photo resist layer may be formed over the target layer by spin-on coating or other suitable technique. The coated photo resist layer may be further baked to drive out solvent in the photo resist layer. At S803ofFIG.13A, the photo resist layer is patterned using an EUV reflective mask with a pellicle as set forth above, as shown inFIG.13C. The patterning of the photo resist layer includes performing a photolithography exposing process by an EUV exposing system using the EUV mask. During the exposing process, the integrated circuit (IC) design pattern defined on the EUV mask is imaged to the photo resist layer to form a latent pattern thereon. The patterning of the photo resist layer further includes developing the exposed photo resist layer to form a patterned photo resist layer having one or more openings. In one embodiment where the photo resist layer is a positive tone photo resist layer, the exposed portions of the photo resist layer are removed during the developing process. The patterning of the photo resist layer may further include other process steps, such as various baking steps at different stages. For example, a post-exposure-baking (PEB) process may be implemented after the photolithography exposing process and before the developing process. At S804ofFIG.13A, the target layer is patterned utilizing the patterned photo resist layer as an etching mask, as shown inFIG.13D. In some embodiments, the patterning of the target layer includes applying an etching process to the target layer using the patterned photo resist layer as an etch mask.
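The S801–S804 sequence ofFIG.13Acan be captured as an ordered list of steps. This is a hypothetical sketch; the step descriptions merely paraphrase the text above and the names are illustrative:

```python
# Ordered EUV patterning flow of FIG. 13A (S801-S804), paraphrased.
EUV_PATTERNING_FLOW = [
    ("S801", "form target layer over the semiconductor substrate"),
    ("S802", "spin-coat an EUV photo resist layer and bake out solvent"),
    ("S803", "expose through the pelliclized EUV mask, PEB, then develop"),
    ("S804", "etch the target layer using the patterned resist as a mask"),
]


def step_order(flow):
    """Return the step identifiers in execution order."""
    return [step_id for step_id, _ in flow]
```

Keeping the steps as data rather than prose makes it easy to assert that, for example, exposure (S803) always precedes the etch (S804).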
The portions of the target layer exposed within the openings of the patterned photo resist layer are etched while the remaining portions are protected from etching. Further, the patterned photo resist layer may be removed by wet stripping or plasma ashing, as shown inFIG.13E. The pellicles according to embodiments of the present disclosure provide a higher strength and thermal conductivity (dissipation) as well as a higher EUV transmittance than conventional pellicles. In the foregoing embodiments, two or more types of nanotubes are used as a main network membrane to increase the mechanical strength of the pellicle and obtain a high EUV transmittance. Further, a two-dimensional material layer is used as a cover layer (first and/or second cover layers) and/or used together with nanotubes to increase the mechanical strength of a pellicle. In addition, by using a two-dimensional material layer and/or a protection layer to enclose the main network membrane, it is possible to increase the mechanical strength of the pellicle and provide a high or perfect blocking property against killer particles. Moreover, the use of the two-dimensional material improves heat dissipation to prevent a pellicle from being burnt out by EUV radiation in some embodiments. It will be understood that not all advantages have been necessarily discussed herein, no particular advantage is required for all embodiments or examples, and other embodiments or examples may offer different advantages. In accordance with one aspect of the present disclosure, a pellicle for an EUV photo mask includes a first layer; a second layer; and a main layer disposed between the first layer and the second layer and including a plurality of nanotubes. At least one of the first layer or the second layer includes a two-dimensional material in which one or more two-dimensional layers are stacked.
In one or more of the foregoing and following embodiments, the first layer includes a first two-dimensional material and the second layer includes a second two-dimensional material. In one or more of the foregoing and following embodiments, each of the first and second two-dimensional materials includes at least one selected from the group consisting of boron nitride (BN), graphene, MoS2, MoSe2, WS2, and WSe2. In one or more of the foregoing and following embodiments, the first two-dimensional material is different from the second two-dimensional material. In one or more of the foregoing and following embodiments, a thickness of each of the first layer and the second layer is in a range from 0.3 nm to 3 nm. In one or more of the foregoing and following embodiments, a number of the one or more two-dimensional layers of each of the first and second two-dimensional materials is 1 to 20. In one or more of the foregoing and following embodiments, the first layer and the second layer are sealed to fully encapsulate the main layer. In one or more of the foregoing and following embodiments, the pellicle further includes a protection layer disposed over the first layer and the second layer. In one or more of the foregoing and following embodiments, the protection layer includes at least one selected from the group consisting of HfO2, Al2O3, ZrO2, Y2O3, and La2O3. In one or more of the foregoing and following embodiments, the protection layer includes at least one selected from the group consisting of B4C, YN, Si3N4, BN, NbN, RuNb, YF3, TiN, and ZrN. In one or more of the foregoing and following embodiments, the protection layer includes a metal layer made of at least one selected from the group consisting of Ru, Nb, Y, Sc, Ni, Mo, W, Pt, and Bi. In one or more of the foregoing and following embodiments, a thickness of the protection layer is in a range from 0.1 nm to 5 nm. 
In one or more of the foregoing and following embodiments, a diameter of each of the plurality of nanotubes is in a range from 0.5 nm to 20 nm. In accordance with another aspect of the present disclosure, a pellicle for an extreme ultraviolet (EUV) reflective mask includes a first layer; a second layer; and a main layer disposed between the first layer and second layer. The main layer includes a plurality of first nanotubes and a plurality of second nanotubes different from the plurality of first nanotubes. In one or more of the foregoing and following embodiments, at least one of the first layer or the second layer includes a two-dimensional material in which one or more two-dimensional layers are stacked. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are single wall nanotubes and the plurality of second nanotubes are multiwall nanotubes. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are made of a same material as the plurality of second nanotubes. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are made of a different material than the plurality of second nanotubes. In one or more of the foregoing and following embodiments, each of the plurality of first nanotubes and the plurality of second nanotubes are one or more selected from the group consisting of carbon nanotubes, boron nitride nanotubes, MoS2nanotubes, MoSe2nanotubes, WS2nanotubes and WSe2nanotubes. In one or more of the foregoing and following embodiments, both the plurality of first nanotubes and the plurality of second nanotubes are single wall nanotubes. In one or more of the foregoing and following embodiments, both the plurality of first nanotubes and the plurality of second nanotubes are multiwall nanotubes. 
In one or more of the foregoing and following embodiments, the plurality of first nanotubes are single wall nanotubes and the plurality of second nanotubes are multiwall nanotubes. In accordance with another aspect of the present disclosure, a pellicle for an extreme ultraviolet (EUV) reflective mask includes a first layer; a second layer; and a main layer disposed between the first layer and second layer. The main layer includes a plurality of nanotubes and a plurality of flakes comprising a two-dimensional material in which one or more two-dimensional layers are stacked. In one or more of the foregoing and following embodiments, the two-dimensional material includes at least one selected from the group consisting of boron nitride (BN), graphene, MoS2, MoSe2, WS2, and WSe2. In one or more of the foregoing and following embodiments, a size of each of the plurality of flakes is in a range from 100 nm2to 100 μm2. In one or more of the foregoing and following embodiments, a thickness of each of the plurality of flakes is in a range from 0.3 nm to 3 nm. In one or more of the foregoing and following embodiments, a number of the one or more two-dimensional layers of each of the plurality of flakes is 1 to 20. In accordance with another aspect of the present disclosure, a pellicle for an extreme ultraviolet (EUV) reflective mask includes a first membrane; a support frame attached to the first membrane; and a main layer disposed over the first membrane and including a plurality of nanotubes. The first membrane includes a two-dimensional material in which one or more two-dimensional layers are stacked. In one or more of the foregoing and following embodiments, the two-dimensional material of the first membrane includes at least one selected from the group consisting of boron nitride (BN), graphene, MoS2, MoSe2, WS2, and WSe2. In one or more of the foregoing and following embodiments, a thickness of the first membrane is in a range from 0.3 nm to 3 nm.
In one or more of the foregoing and following embodiments, a number of the one or more two-dimensional layers of the first membrane is 1 to 20. In one or more of the foregoing and following embodiments, the first membrane is disposed between the support frame and the main layer. In one or more of the foregoing and following embodiments, a part of the main layer is disposed between the first membrane and the support frame. In one or more of the foregoing and following embodiments, the pellicle further includes a protection layer disposed over both sides of the first membrane. In one or more of the foregoing and following embodiments, the protection layer includes at least one selected from the group consisting of HfO2, Al2O3, ZrO2, Y2O3, and La2O3. In one or more of the foregoing and following embodiments, the protection layer includes at least one selected from the group consisting of B4C, YN, Si3N4, BN, NbN, RuNb, YF3, TiN, and ZrN. In one or more of the foregoing and following embodiments, the protection layer includes a metal layer made of at least one selected from the group consisting of Ru, Nb, Y, Sc, Ni, Mo, W, Pt, and Bi. In one or more of the foregoing and following embodiments, a thickness of the protection layer is in a range from 0.1 nm to 5 nm. In one or more of the foregoing and following embodiments, the protection layer is also formed to cover the plurality of nanotubes of the main layer. In one or more of the foregoing and following embodiments, the plurality of nanotubes include a plurality of first nanotubes and a plurality of second nanotubes different from the plurality of first nanotubes. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are single wall nanotubes and the plurality of second nanotubes are multiwall nanotubes. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are made of a same material as the plurality of second nanotubes. 
In one or more of the foregoing and following embodiments, the plurality of first nanotubes are made of a different material than the plurality of second nanotubes. In one or more of the foregoing and following embodiments, each of the plurality of first nanotubes and the plurality of second nanotubes are one or more selected from the group consisting of carbon nanotubes, boron nitride nanotubes, MoS2nanotubes, MoSe2nanotubes, WS2nanotubes and WSe2nanotubes. In one or more of the foregoing and following embodiments, both the plurality of first nanotubes and the plurality of second nanotubes are single wall nanotubes. In one or more of the foregoing and following embodiments, both the plurality of first nanotubes and the plurality of second nanotubes are multiwall nanotubes. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are single wall nanotubes and the plurality of second nanotubes are multiwall nanotubes. In one or more of the foregoing and following embodiments, the main layer further includes a plurality of flakes comprising two-dimensional material in which one or more two-dimensional layers are stacked. In one or more of the foregoing and following embodiments, the two-dimensional material includes at least one selected from the group consisting of boron nitride (BN), graphene, MoS2, MoSe2, WS2, and WSe2. In one or more of the foregoing and following embodiments, a size of each of the plurality of flakes is in a range from 100 nm2to 100 μm2. In one or more of the foregoing and following embodiments, a thickness of each of the plurality of flakes is in a range from 0.3 nm to 3 nm. In one or more of the foregoing and following embodiments, a number of the one or more two-dimensional layers of each of the plurality of flakes is 1 to 20. 
In accordance with another aspect of the present disclosure, in a method of manufacturing a pellicle for an extreme ultraviolet (EUV) reflective mask, a nanotube layer is formed over a support substrate, a first cover layer is formed over the nanotube layer, a pellicle frame is attached over the first cover layer, the nanotube layer and the first cover layer are cut to form a cut pellicle membrane, a second cover layer is formed to fully encapsulate the nanotube layer of the cut pellicle membrane by sealing with the first cover layer of the cut pellicle membrane, and a protection layer is formed over the first cover layer, the second cover layer and the pellicle frame. In one or more of the foregoing and following embodiments, at least one of the first cover layer or the second cover layer includes a two-dimensional material in which one or more two-dimensional layers are stacked. In one or more of the foregoing and following embodiments, the first cover layer includes a first two-dimensional material and the second cover layer includes a second two-dimensional material. In one or more of the foregoing and following embodiments, each of the first and second two-dimensional materials includes at least one selected from the group consisting of boron nitride (BN), graphene, MoS2, MoSe2, WS2, and WSe2. In one or more of the foregoing and following embodiments, the first two-dimensional material is different from the second two-dimensional material. In one or more of the foregoing and following embodiments, a thickness of each of the first cover layer and the second cover layer is in a range from 0.3 nm to 3 nm. In one or more of the foregoing and following embodiments, a number of the one or more two-dimensional layers of each of the first and second two-dimensional materials is 1 to 20. In one or more of the foregoing and following embodiments, the protection layer includes at least one selected from the group consisting of HfO2, Al2O3, ZrO2, Y2O3, and La2O3.
In one or more of the foregoing and following embodiments, the protection layer includes at least one selected from the group consisting of B4C, YN, Si3N4, BN, NbN, RuNb, YF3, TiN, and ZrN. In one or more of the foregoing and following embodiments, the protection layer includes a metal layer made of at least one selected from the group consisting of Ru, Nb, Y, Sc, Ni, Mo, W, Pt, and Bi. In one or more of the foregoing and following embodiments, a thickness of the protection layer is in a range from 0.1 nm to 5 nm. In one or more of the foregoing and following embodiments, the nanotube layer includes a plurality of nanotubes having a diameter in a range from 0.5 nm to 20 nm. In one or more of the foregoing and following embodiments, the plurality of nanotubes include a plurality of first nanotubes and a plurality of second nanotubes different from the plurality of first nanotubes. In one or more of the foregoing and following embodiments, at least one of the first layer or the second layer includes a two-dimensional material in which one or more two-dimensional layers are stacked. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are single wall nanotubes and the plurality of second nanotubes are multiwall nanotubes. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are made of a same material as the plurality of second nanotubes. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are made of a different material than the plurality of second nanotubes. In one or more of the foregoing and following embodiments, each of the plurality of first nanotubes and the plurality of second nanotubes are one or more selected from the group consisting of carbon nanotubes, boron nitride nanotubes, MoS2nanotubes, MoSe2nanotubes, WS2nanotubes and WSe2nanotubes. 
In one or more of the foregoing and following embodiments, both the plurality of first nanotubes and the plurality of second nanotubes are single wall nanotubes. In one or more of the foregoing and following embodiments, both the plurality of first nanotubes and the plurality of second nanotubes are multiwall nanotubes. In one or more of the foregoing and following embodiments, the plurality of first nanotubes are single wall nanotubes and the plurality of second nanotubes are multiwall nanotubes. In one or more of the foregoing and following embodiments, the nanotube layer includes a plurality of nanotubes and a plurality of flakes comprising a two-dimensional material in which one or more two-dimensional layers are stacked. In one or more of the foregoing and following embodiments, the two-dimensional material includes at least one selected from the group consisting of boron nitride (BN), graphene, MoS2, MoSe2, WS2, and WSe2. In one or more of the foregoing and following embodiments, a size of each of the plurality of flakes is in a range from 100 nm2to 100 μm2. In one or more of the foregoing and following embodiments, a thickness of each of the plurality of flakes is in a range from 0.3 nm to 3 nm. In one or more of the foregoing and following embodiments, a number of the one or more two-dimensional layers of each of the plurality of flakes is 1 to 20. In accordance with another aspect of the present disclosure, in a method of manufacturing an EUV pellicle, a network structure of nanotubes is formed, a two-dimensional (2D) material layer is formed over the network structure of nanotubes, and a protection layer is formed over the network structure of nanotubes. In one or more of the foregoing and following embodiments, when the network structure of nanotubes is formed, a first network structure of first nanotubes is formed, and a second network structure of second nanotubes is formed over the first network structure of first nanotubes.
In one or more of the foregoing and following embodiments, the first nanotubes are different from the second nanotubes. In one or more of the foregoing and following embodiments, when the network structure of nanotubes is formed, a third network structure of 2D material flakes is formed over the first network structure of first nanotubes. In one or more of the foregoing and following embodiments, the 2D material flakes are different from the first nanotubes and the second nanotubes. In one or more of the foregoing and following embodiments, the third network structure of 2D material flakes is formed over the first network structure of first nanotubes prior to forming the second network structure of second nanotubes over the first network structure of first nanotubes. In one or more of the foregoing and following embodiments, the forming of the third network structure of 2D material flakes over the first network structure of first nanotubes is after forming the second network structure of second nanotubes over the first network structure of first nanotubes. In one or more of the foregoing and following embodiments, forming the 2D material layer over the network structure of nanotubes is prior to forming the protection layer over the network structure of nanotubes. In one or more of the foregoing and following embodiments, forming the 2D material layer over the network structure of nanotubes is after forming the protection layer over the network structure of nanotubes. In accordance with another aspect of the present disclosure, in a method of manufacturing an EUV pellicle, nanostructures are formed, the nanostructures are dispersed into a solution, a main membrane is formed by filtering the nanostructures by a support membrane, a first two-dimensional layer is formed over a first side of the main membrane, the support membrane is removed, and a second two-dimensional layer is formed over a second side of the main membrane from which the support membrane is removed.
In one or more of the foregoing and following embodiments, the nanostructures include nanotubes. In one or more of the foregoing and following embodiments, the nanostructures further include flakes of one or more two-dimensional materials. In one or more of the foregoing and following embodiments, the support membrane is porous. In one or more of the foregoing and following embodiments, a support frame is attached on the first two-dimensional layer. In one or more of the foregoing and following embodiments, a protection layer is formed over the first two-dimensional layer, second two-dimensional layer and the support frame. The foregoing outlines features of several embodiments or examples so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments or examples introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. | 51,533 |
11860535 | DETAILED DESCRIPTION OF THE EMBODIMENTS In the context of the present application, some of the terms have the meaning below. The term elastomer has its usual meaning within the field of chemistry. In general, an elastomer is an amorphous polymer existing above its glass transition temperature and having a viscoelasticity and weak inter-molecular forces, generally having low Young's modulus and high failure strain compared with other materials. The term includes rubbers. Elastomers are usually thermosets (requiring vulcanization) but may also be thermoplastic. The long polymer chains are usually cross-linked (e.g. during curing, i.e., vulcanization). The elasticity is derived from the ability of the long chains to reconfigure themselves to distribute an applied stress. The covalent cross-linkages ensure that the elastomer will return to its original configuration when the stress is removed. The inclusion of a basic organic group (amine) within the stamp's interior (volume) to stimulate (catalyze or assist) solidification of imprinting solutions such as sol-gel based imprinting solutions has a number of distinct advantages. For example, the imprinting composition no longer requires the presence of a polymerization catalyst, e.g. an activator such as a PAG or PBG, which significantly reduces the cost of such compositions and improves their shelf life because accidental activation of the polymerization activator and subsequent solidification of the composition is avoided. In addition, due to the absence of such an activator, the formation of decomposition products of such an activator upon its activation is avoided. The absence of such decomposition products extends the life of the stamp as degradation of the stamp material by reaction with such decomposition products is avoided.
Where reference is made to a basic organic group, this is intended to cover groups or functionalities that are part of the stamp body via chemical bonding, or that are part of separate substances including one or more of such groups, which substances are present in the volume of the stamp body. Thus, where reference is made to an elastomer stamp loaded with a basic organic group, this is intended to cover embodiments in which the basic organic group is physically or chemically bound within the stamp body, i.e. in the volume of the stamp. This binding includes at least one of absorption or adsorption, e.g. through intermolecular forces, hydrogen bonding, ionic bonding and covalent bonding or a combination thereof. In some embodiments, the basic organic group (amine) may be reversibly bound to the stamp to facilitate reversible (partial) desorption of the basic organic amine from the stamp material to accelerate stimulation or catalysis of a polymerization reaction in an imprinting composition brought into contact with the stamp. A fluid-permeable elastomer stamp typically is permeable to gases and liquids. It may be a porous stamp (i.e. having open holes or open channels through it) but this need not be the case. Examples of (fluid-permeable) elastomer stamp materials include polydimethylsiloxanes (PDMS) and PFPE (perfluoropolyether), although embodiments of the present invention are not limited to these examples. The term basic organic group is intended to include organic groups or functionalities that act as Arrhenius bases or Lewis bases. They may be conjugated bases of acids. Preferably the groups have no formal charge. The group can be a basic organic amine. The term Lewis base has its usual meaning in chemistry. Thus, by way of guidance, a Lewis base is a chemical species that reacts with a Lewis acid to form a Lewis adduct. A Lewis base, then, is any species that donates a pair of electrons (lone pair) to a Lewis acid to form a Lewis adduct.
In the adduct, the Lewis acid and base share an electron pair furnished by the Lewis base. For example, OH− and NH3 are Lewis bases, because they can donate a lone pair of electrons. While OH− is an example of a Lewis base having a formal negative charge, NH3 is an example of an uncharged Lewis base not having a formal charge. The Lewis base may have its pair of electrons available for donation located on a donor atom. Many examples of donor atoms exist: oxygen in e.g. ethers; nitrogen in e.g. amines, alkylamines or pyridines; sulfur in e.g. sulfides; and phosphorus in e.g. phosphines or alkyl phosphines. The term amount is an effective amount in the sense that it means that there is enough of the basic organic group to influence the solidification upon contacting of the stamp with an imprinting composition. The amount may be specified as the number of moles of basic organic groups (each capable of binding 1 equivalent of protons) per weight of stamp body. For example, the amount of groups in mole per gram of stamp body may be >1×10−6, or >0.5×10−5, or >1×10−5, or >0.5×10−4, or >1×10−4, or even >0.5×10−3. In general, the larger the amount, the faster the solidification can work. If stamps having basic groups therein are prepared using any of the in situ inclusion procedures as described herein below (as opposed to the impregnation procedures), then the amount may have an upper boundary because otherwise the mass fraction may become too large for a suitable rubber body to result. Suitable amounts can be <1×10−3 or <5×10−4 (for groups with a molar weight of 30 to 100), or <5×10−4 or <2.5×10−4 (for groups with a molar weight of 100 to 200), or <3×10−4 or <1.3×10−5 (for groups with a molar weight of 200 to 300), <2.5×10−4 or <1.2×10−4 (for groups with a molar weight of 300 to 400), or <2×10−4 or <1×10−4 (for groups with a molar weight of 400 to 500). The molar weight of a group can thus be used to calculate these amounts from weight percentages.
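The conversion the last sentence refers to, from a weight percentage of base in the stamp body to moles of basic groups per gram, can be sketched as follows. This is an illustrative helper; the name and the one-basic-group-per-molecule assumption are mine, not the patent's.

```python
def groups_mol_per_gram(weight_fraction: float, molar_weight: float) -> float:
    """Convert a mass fraction of base substance in the stamp body
    (e.g. 0.001 for 0.1 wt%) into moles of basic groups per gram of
    stamp, assuming one basic group per substance molecule.

    molar_weight is in g/mol.
    """
    return weight_fraction / molar_weight

# 1 wt% of a base with molar weight 100 g/mol means 0.01 g of base per
# gram of stamp, i.e. 0.01 / 100 = 1e-4 mol of basic groups per gram,
# which sits inside the ranges quoted above for that molar weight.
print(groups_mol_per_gram(0.01, 100.0))
```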
If the basic organic group is present as part of a separate substance, then the amount may be given as weight of the substance per weight of the stamp body, taking account of the number of basic organic groups per substance molecule. For example, the organic polymeric bulk portion may comprise at least 0.1% by weight of substance based on the total weight of the organic polymeric bulk portion. Thus, at least some embodiments of the present invention are concerned with providing a stamp comprising (loaded with) a basic organic group in an amount of at least 0.1% by weight based on the total weight of the elastomeric stamp for catalyzing a sol-gel reaction in an alkoxysilane-based imprinting composition used in an imprint lithography technique. In this case one preferred stamp is one wherein the organic polymeric bulk portion is a polysiloxane. To exemplify how the stamp of the invention may work for an imprinting composition, an embodiment of an imprint lithography technique known as conformal imprint lithography based on soft (flexible) imprinting stamps, such as Substrate Conformal Imprint Lithography (SCIL), in which a flexible and fluid-permeable mold or stamp is used, will be described herein below. The reader is referred to the following disclosures for more detailed information on apparatus (WO2003/099463), methods and resists (WO2008/053418 and WO2014/097096). The main advantage of conformal imprint lithography is that fine structures can be replicated on irregular (e.g. non-flat) and curved surfaces, due to the fact that the flexibility of the stamp allows for complete contact between the stamp pattern and the surface. FIG. 1 schematically depicts an example embodiment of an imprint lithography process according to the present invention.
In step A, a major surface of a substrate10, which may be any suitable carrier such as a silicon substrate, a silicon on insulator substrate, a GaAs substrate, a GaN substrate, an AlGaN substrate, a metal substrate such as an Al or Cr substrate, a polymer substrate such as a PMMA substrate, a glass substrate, a ceramic substrate such as sapphire and so on, is covered with a layer of a curable imprintable composition12. This covering can be done with for example spin coating, but other techniques can be used according to need. The chemistry of the imprinting composition12is not particularly limited. For example, any suitable sol-gel imprinting composition may be used for this purpose. Particularly suitable are alkoxysilane-based sol-gel imprinting compositions as such imprinting compositions are known to provide solidified patterned layers having particularly desirable properties as well as exhibit desired properties for facile deposition of the imprinting composition on a substrate10, e.g. using spin-coating, doctor blading, ink jetting and so on, and in which the polycondensation of the silane compounds in the imprinting composition12can be activated on demand to ensure that the process window for imprinting the imprinting composition with an imprinting stamp is not reduced by premature polycondensation of the silane compounds in the imprinting composition. Such compositions have been described in e.g. WO2008/053418 or WO2014/097096. In an embodiment, the imprinting composition may be based on silane monomers of Formula 1, Formula 2 or a combination thereof: wherein R1-R8 are individually selected from the group consisting of C1-C6 linear or branched alkyl groups and a phenyl group. Particularly suitable examples of such silane compounds are defined by the compounds of Formula 4-7: The imprinting composition may be based on a first silane compound of Formula 1 and a second silane compound of Formula 2. 
This has the advantage that the amount of crosslinking can be controlled by varying the ratio between the first and second curable compound. Typically, an increase in the ratio towards the first curable compound reduces the crosslinking density in the network formed in the polycondensation reaction. In order to obtain the most desirable cross-linking density, the molar ratio of the first silane compound and the second silane compound is in the range of 5:1-1:5. In a particularly suitable embodiment, the first silane compound is MTMS. It has been found that when combining MTMS with a fully inorganic silane compound, i.e. a silane compound according to Formula 2, unwanted shrinkage of the ink composition upon solidification can be largely avoided. Particularly suitable embodiments of the second silane compounds to be used in combination with MTMS are TMOS and TEOS. In order to achieve the desired degree of polymerization in the imprinting composition prior to its deposition, i.e. to tune the viscosity of the imprinting composition to facilitate its deposition on a substrate10, the pH of the ink composition may be set in a range of 3-5, preferably 3.5-4.5. Particularly preferable is a pH of about 4. The pH may be set using any suitable protic acid, e.g. an organic acid such as acetic acid or formic acid, or an inorganic acid such as hydrochloric acid. It is noted that the presence of such an acid typically does not interfere with the base-loaded elastomer stamps of the present invention because most of the acid content typically evaporates from the imprinting composition upon drying, which takes place following to layer formation and prior to application of the stamp. Also, it is not necessary to bring the pH of the ink composition up to the pH of the organic base in the stamp. As long as the pH of the ink composition is raised to alkaline pH (i.e. a pH >7), rapid polycondensation of the alkoxysilane content of the imprinting can be achieved. 
The ink composition may further comprise a polycondensation inhibitor according to Formula 3 that competes with the silane compounds in the polycondensation reaction as shown in Reaction Schedule I: wherein R9 is selected from the group consisting of C1-C6 linear or branched alkyl groups and a phenyl group, and wherein n is a positive integer having a value of at least 2. In a particularly advantageous embodiment, n is 2, 3, 4 or 5. Particularly advantageous examples of the polyethylene glycol monoether of Formula 3 include diethylene glycol monomethyl ether (EEOL), diethylene glycol monoethyl ether, triethylene glycol monomethyl ether, triethylene glycol monoethyl ether, tetraethylene glycol monomethyl ether and tetraethylene glycol monoethyl ether. At a pH of 3-5, preferably a pH of 3.5-4.5 and more preferably a pH of around 4, it has been found that a compound of Formula 3 reduces the level of completion of the polycondensation reaction between the silane compounds in the ink composition, i.e. shifts the equilibrium of the polycondensation reaction more towards the oligomer/monomer side of the equilibrium. In particular, silane oligomers are formed that include the compound of Formula 3. Such polycondensation retarders can for example be used to increase the shelf life and tune the viscosity of the imprinting composition, such that the imprinting composition can be more effectively applied on the substrate surface to be imprinted. It has been found that the basic organic amine-loaded stamps according to embodiments of the present invention are capable of rapidly shifting the equilibrium reaction in such imprinting compositions towards polycondensation in the presence of such inhibitors. In an embodiment, the ink composition may further comprise one or more additives that do not take part in the polycondensation reaction but may be used to improve the characteristics of the ink composition.
For instance, the ink composition may contain additives that improve the film forming properties of the ink. A non-limiting example of such an additive is 1-ethoxy-2-(2-ethoxyethoxy)ethane (EEE). Such an imprinting composition may, by way of non-limiting example, have a composition selected from the ranges as specified in Table I. In Table I, where reference is made to weight percentages (wt %), this is relative to the total weight of the imprinting composition unless otherwise specified.

TABLE I
Compound: Concentration Range
Silane monomer(s) of Formula 1 and/or Formula 2: 1-20 wt % based on the weight of the silanes when fully condensated
Water: 5-20 mole per mole of silicon (or 2-40 wt %)
Solvent system (may contain multiple solvents): 25-98 wt %
Polymerization inhibitor of Formula 3: 0-10 wt %
Protic acid: 0.001-0.1 wt % (depending on the pKa of the acid; the amount of acid should set the pH of the composition to around 3-5, e.g. 4)
Film forming agent (e.g. EEE): 0-10 wt %

Other suitable compositions will be apparent to the skilled person as those that can be solidified with a change of pH or with a change of another constituent (such as e.g. metal or other ions that participate in a solidification reaction) by the stamp's basic organic group. Advantageously, the imprinting composition 12 does not require the presence of a polycondensation activator such as a PAG or PBG. This is because a stamp 14 is used that is loaded with an organic base capable of catalyzing the sol-gel condensation reaction of the alkoxysilane-based imprinting composition, as will be explained in more detail below. Consequently, the imprinting composition is largely immune to premature polycondensation reactions that increase the viscosity of the imprinting composition and consequently reduce the ability of the stamp 14 to imprint the imprinting composition with the desired pattern.
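As a quick sanity check, the Table I windows can be encoded and a candidate formulation tested against them. A hedged sketch: the dictionary keys and helper function are mine, the wt % ranges are transcribed from Table I, and water is expressed here only via its 2-40 wt % form.

```python
# Concentration windows from Table I, in wt % of the total composition.
TABLE_I_RANGES = {
    "silane monomers (Formula 1/2)": (1.0, 20.0),
    "water": (2.0, 40.0),
    "solvent system": (25.0, 98.0),
    "polycondensation inhibitor (Formula 3)": (0.0, 10.0),
    "protic acid": (0.001, 0.1),
    "film forming agent (EEE)": (0.0, 10.0),
}

def composition_ok(comp: dict) -> bool:
    """True if every listed component sits inside its Table I window."""
    return all(lo <= comp[name] <= hi
               for name, (lo, hi) in TABLE_I_RANGES.items()
               if name in comp)

candidate = {
    "silane monomers (Formula 1/2)": 10.0,
    "water": 15.0,
    "solvent system": 74.0,
    "protic acid": 0.01,
}
print(composition_ok(candidate))
```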
The flexible stamp 14 carries a relief feature pattern formed by spatially separated protrusions 16, which is imprinted into the curable imprintable medium 12, thereby transferring the inverse pattern into this layer, as shown in step B. Although not explicitly shown, the imprinting of the deposited curable imprintable medium 12 with the flexible stamp 14 may be preceded by a drying step in which some of the solvent system is evaporated in order to increase the viscosity of the deposited curable imprintable medium 12 if desired. Next, as depicted in step C, the imprinting stack (substrate + imprinting composition + flexible fluid-permeable elastomer stamp) is kept in contact such that the organic (amine) base in the flexible fluid-permeable elastomer stamp 14 can catalyze the polycondensation reaction and cause solidification (sol-forming) of the imprinting composition. Optionally, the imprinting stack may be heated to further accelerate the polycondensation reaction. Upon completion of this polymerization reaction, the stamp 14 is removed in step D, leaving behind the pattern portions 18 on the substrate 10. Due to the fact that such curable imprinting compositions can be used to achieve a cured patterned layer with a high inorganic content and high crosslinking density, the curable imprinting compositions may also be used for the formation of multi-layer structures, such as three-dimensional structures that have tuned optical properties, e.g. light emitting diodes, interferometers, photonic crystals and so on. Such three-dimensional structures may be produced by filling or planarizing the patterned layer 20 by depositing a planarization material 22 over the patterned layer 20, as shown in FIG. 2, and removing excess material if necessary, e.g. by etching or polishing. The planarization material 22 may be any thermally degradable material such as a thermally degradable polymer (TDP). A non-limiting example of a TDP is polynorbornene or polystyrene.
Alternatively, the planarization material22may be soluble in a particular solvent. In general, any planarization material22that can be selectively removed from a formed multi-layer structure without damaging the patterned layers formed from the curable imprintable composition may be used. A non-limiting example of a method of manufacturing such a three-dimensional structure is shown inFIG.3. In step A, a planarized layer30is formed on a substrate or carrier10, as previously explained. The patterned portions20of the planarized layer30may be produced by imprint lithography (e.g. Substrate Conformal Imprint Lithography, SCIL) using the curable imprinting composition12in accordance with the method shown inFIG.1. The pattern20is filled, i.e. planarized with a filling material22. In step B, a next layer of the curable imprinting composition12is applied over the planarized layer30of step A in any suitable manner, e.g. by spincoating, dispensing or doctor blading. The curable imprinting composition12deposited in step B is subsequently embossed by a suitably patterned fluid-permeable elastomer stamp14after alignment of the stamp with respect to the substrate10, as shown in step C. In step C, the imprint orientation of the stamp14with the substrate10has been rotated 90° with respect to the imprint orientation used to form the first patterned layer20. It will be appreciated that other orientation rotation angles are equally feasible. The curable imprinting composition12is subsequently solidified (densified), e.g. as shown inFIG.1to form solidified portions20′ as shown in step D. Obviously, the formation of the solidified portions20′ may be completed after removal of the stamp14, i.e. by completing the inorganic polymerization reaction as previously discussed. Removing the stamp14leaves the densified portions20′ on the planarized layer30of step A. 
The newly formed patterned layer may again be planarized as shown in step E, after which additional layers may be formed by repeating the steps B-E. The height of the patterned portions of the patterned layer may be reduced using an additional processing step, e.g. by means of reactive ion etching. The filling material 22 can be removed afterwards by e.g. dissolving the filling material 22 in a suitable solvent or by thermal decomposition, thus yielding a stacked structure as shown in step F. Alkoxysilane-based sol-gel systems are particularly mentioned for their suitability as imprinting compositions for application in this method because in their sol state, they can withstand most solvents required to dissolve the planarization material 22, as well as withstand high temperatures up to 600 or even 1000° C., thereby making them particularly suitable for use with thermally degradable compounds such as a TDP. It may be necessary to remove residual imprint structures from e.g. the substrate 10, for instance when a layer on the substrate 10 has been patterned using the imprint structures as a mask. The imprint structures may be removed by any suitable etching technique, e.g. reactive ion etching. FIG. 4 schematically depicts an example embodiment of a (e.g. fluid-permeable elastomer) stamp 14 in more detail. The stamp 14 typically comprises a body 110 which preferably is made of a polysiloxane-based material. The description below is given with respect to a stamp with polysiloxane bulk portions, but other elastomer or rubber materials can be used for bulk portions. Those skilled in the art will be able to apply such other materials and employ the invention without problems. In the below examples the description will be held with regard to the polysiloxane bulk portions. An example of such a polysiloxane-based material is PDMS (polydimethylsiloxane), although it should be understood that similar polysiloxane-based materials, e.g.
a polysiloxane in which at least some of the methyl groups are replaced with larger alkyl groups, e.g. ethyl, propyl, isopropyl, butyl groups and so on, may also be contemplated. Alternatively, the polysiloxane-based material may include a T-branched and/or a Q-branched polysiloxane-based rubber-like material as for instance disclosed in WO2009/147602 A2. It is noted for the avoidance of doubt that a T-branched polysiloxane comprises 3-way branching chains, i.e. networks, for instance when crosslinked by linear polysiloxanes. Likewise, a Q-branched polysiloxane comprises 4-way branching chains, i.e. networks, for instance when crosslinked by linear polysiloxanes. Such branched materials can be used to make stamp body parts that have a higher youngs modulus as will be indicated herein below. The stamp14further comprises a (e.g. fluid-permeable) surface layer120carrying the pattern16as shown in the blown up inset inFIG.4. This layer is not needed however for the implementation of the invention. The surface layer120may be made of the same material as the polysiloxane-based body110, e.g. may form an integral part of the polysiloxane-based body110or may be a separate surface layer made of a different material than that of the polysiloxane-based body110. In embodiments in which the surface layer120is made of such a different material, the surface layer120may be grafted onto the polysiloxane-based body110or adhered to the polysiloxane-based body110. In an embodiment, the polysiloxane-based body110may be adhered to the surface layer120, e.g. prior to curing, such that the body110may act as an adhesive or glue between the surface layer120and the carrier130. A different surface layer120for example may be contemplated in embodiments in which a more rigid surface layer120compared to the polysiloxane-based body110is desired, e.g. to prevent distortion or collapse of the features of the feature pattern16. 
Such distortion or collapse for instance can occur in soft surface layers 120 if the feature sizes of the feature pattern 16 are particularly small, e.g. smaller than 0.5-1 micron. In an embodiment, the surface layer 120 may have a higher Young's modulus than the polysiloxane-based body 110. For dimensions of the features 16 in the range of e.g. 200 nm-2 micron, a rubbery material having a Young's modulus in the range of 7-11 MPa, such as a hard PDMS, may be contemplated, whereas for a stamp having dimensions of the features 16 in the range of 1 nm-200 nm, a rubbery material having a Young's modulus in the range of 40-80 MPa, such as an extra hard PDMS (sometimes referred to as X-PDMS), may be contemplated. Another example of a suitable material for the surface layer 120 is PFPE (per-fluoro-poly-ether). Other suitable polymer materials for the surface layer 120 having the desired Young's modulus will be immediately apparent to the skilled person. For the avoidance of doubt, it is noted that the reported Young's moduli have been determined by a standardized hardness test according to the ASTM D1415-06(2012) standard by penetrating the rubber material with a rigid ball under the conditions mandated by the standard. The surface layer 120 typically will have a thickness of no more than a few mm, e.g. 1 mm or less, to ensure that the stamp layer 120 has the desired pliability characteristics. In some embodiments, the surface layer 120 may have a thickness ranging from 20-50 micron. It will be understood that the suitable thickness of the surface layer 120 will depend on the material chosen for the surface layer 120. The polysiloxane-based body 110 may be thicker than the surface layer 120 to give the stamp 14 its flexibility, in particular when the polysiloxane-based body 110 has a lower Young's modulus than the surface layer 120. For example, the polysiloxane-based body 110 may have a thickness ranging from 0.1-5 mm, such as 0.5-2 mm.
The features of the feature pattern16may have a feature size ranging from several microns to a few nanometers, i.e. the features122may define a nanopattern, although it is also feasible to use larger feature sizes. The surface layer120may have a Young's modulus that is tailored to the intended sizes of the features122of the stamp to be manufactured. For instance, for relatively large feature sizes, e.g. feature sizes of 500 nm up to several microns, e.g. 2 micron or 5 micron, a relatively soft rubbery material may be used, e.g. a rubbery material having a Young's modulus in the range of 2.5-5 MPa, such as a soft PDMS. This is because the relatively large sized features are relatively insensitive to collapse due to surface tension during the stamp manufacturing process or an imprinting process. Such collapse is typically related to the inter-feature distance, with small inter-feature distances causing overly flexible features to stick together under the influence of surface energy. It is noted that the inter-feature distance is typically but not necessarily correlated to the feature size. Hence, when smaller sizes of the features (and/or smaller inter-feature distances) are required, more rigid rubbery materials may be contemplated to prevent collapse of the smaller size features due to the aforementioned surface tension. The feature pattern16may be formed in the surface layer120in any suitable manner. Known techniques such as electron beam patterning (and reactive ion etching) etching or interference lithography (and subsequent etching) may be used to form the feature pattern16. The stamp14may be mounted, e.g. adhered to a carrier or support130to improve the stability of the stamp14. Preferably this carrier or support130is rigid or has a limited degree of flexibility. Thus, this support is at least more rigid than the stamp or stamp body so that it can support the stamp. 
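The material-stiffness guidance spread over the preceding paragraphs can be condensed into a small selector. This is an illustrative sketch only: the thresholds and material names paraphrase the ranges in the text (which overlap somewhat between the hard and soft cases), and the function name is mine.

```python
def suggest_surface_material(feature_size_nm: float):
    """Suggest a surface-layer material and Young's modulus window (MPa)
    for a given stamp feature size, per the ranges discussed above
    (moduli measured per ASTM D1415-06(2012)). Thresholds are illustrative.
    """
    if feature_size_nm < 200:
        # 1 nm-200 nm features: stiffest layer to resist feature collapse.
        return "extra hard PDMS (X-PDMS)", (40.0, 80.0)
    if feature_size_nm < 500:
        # Sub-micron features: intermediate stiffness.
        return "hard PDMS", (7.0, 11.0)
    # 500 nm up to several microns: soft rubbery material suffices.
    return "soft PDMS", (2.5, 5.0)

# Sub-200 nm features call for the stiffest surface layer.
print(suggest_surface_material(50.0)[0])
```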
The support 130 preferably is fluid-impermeable such that the substance cannot diffuse out of the stamp 14 through the support 130. The support 130 for example may be made of glass, a suitable (co-)polymer such as polymethylmethacrylate (PMMA), polyethylene terephthalate (PET) or another plastic, a metal or metal alloy, and so on. The support 130 may comprise one or more rubber seals 140 in the edge regions of the rigid support 130 to prevent spillage of the imprinting composition 12 when the substrate 10 is brought into contact with the stamp 14, which may force some of the imprinting composition 12 from the substrate 10 due to the feature pattern 16 being pressed into the imprinting composition 12. The elastomer stamp 14 may be loaded with the basic organic group in any suitable manner. A first example method is by impregnation. In this method, the stamp 14 may be soaked in a solution of a substance comprising the basic organic group (such as a basic organic amine) in an organic solvent for a period of time, such as a period ranging from 30 minutes to 12 hours, preferably a period ranging from 1 hour to 6 hours. The substance preferably is present in the organic solvent in an amount of at least 1% by weight based on the total weight of the organic solvent. For example, the basic organic amine may be present in an amount ranging from 1-5% by weight based on the total weight of the organic solvent. The amount of the substance in the organic solvent preferably is such that upon loading the substance into the stamp 14, the amount of the substance in the stamp 14 after removal of the organic solvent is at least 0.1% by weight based on the total weight of the stamp 14 (i.e. the total weight of the bulk material 110 and the surface layer 120). If the amount of substance in the stamp 14 is below 0.1% by weight based on the total weight of the stamp 14, polymerization of the imprinting composition as catalysed by the basic organic groups may be too slow.
Although there is no particular requirement for an upper limit to the total amount of substance in the stamp14, it has been found that amounts of the substance in excess of about 3% by weight based on the total weight of the stamp14no longer affect the polymerization rate of the imprinting composition12, such that in some embodiments the amount of substance in the stamp14may be in the range of 0.1-3% by weight based on the total weight of the stamp14. Many organic solvents may be used to dissolve the substance and subsequently soak the stamp14. Polar solvents are particularly preferred. Of such polar solvents, alcohols such as methanol, ethanol, propanol, iso-propanol, butanol, pentanol, n-hexanol and cyclohexanol are preferred because they cause modest swelling (of e.g. a polysiloxane bulk portion) only of the stamp14. Such swelling should be avoided as much as possible because it can distort the feature pattern16, whereas excessive swelling may not be fully reversible upon drying the stamp14, thus leading to a distorted stamp14that may not be suitable for use in the above described imprinting methods. Following the soaking step, the soaked stamp14may be rinsed with water to remove residual solvent from the stamp. This rinsing may be repeated any suitable number of times. Following the optional rinsing step, the soaked stamp14is dried to remove the organic solvent from the stamp, thereby leaving the substance loaded into the stamp14. Such drying may be performed at elevated temperatures, e.g. in an oven or the like to reduce the drying time of the elastomer stamp14. However, in an embodiment, the stamp14is dried by leaving it exposed to an ambient atmosphere or under reduced pressure at room temperature (25° C.) for at least 12 hours, e.g. 16 hours, 24 hours or even 48 hours, to allow for the evaporation of the residual organic solvent from the stamp14. Embodiments of this impregnation method may be used to impregnate the entirety of the stamp14with the substance. 
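The loading window just described, effective above 0.1 wt% with diminishing returns above about 3 wt%, can be expressed as a simple check. An illustrative sketch; the function names are mine, not the patent's.

```python
def loading_wt_pct(substance_g: float, stamp_total_g: float) -> float:
    """Substance loading as wt% of the total stamp weight
    (bulk material plus surface layer)."""
    return 100.0 * substance_g / stamp_total_g

def loading_in_window(substance_g: float, stamp_total_g: float,
                      lo: float = 0.1, hi: float = 3.0) -> bool:
    """True if the loading falls in the 0.1-3 wt% range where the text
    reports effective catalysis without diminishing returns."""
    return lo <= loading_wt_pct(substance_g, stamp_total_g) <= hi

# 0.02 g of amine in a 10 g stamp is a 0.2 wt% loading: inside the window.
print(loading_in_window(0.02, 10.0))
```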
In alternative embodiments, only the (polysiloxane) bulk portion 110 of the stamp 14 is impregnated in this manner, with the separate surface layer 120 being grafted or adhered to the (polysiloxane) bulk portion 110 after impregnation. The surface layer 120 may be manufactured prior to the manufacturing of the polysiloxane bulk portion 110, which may be used as an adhesive between the surface layer 120 and the carrier 130. In this embodiment, the substance may diffuse into the surface layer 120 through the interface between the polysiloxane bulk portion 110 and the surface layer 120. At least with the impregnation methods described herein it is particularly advantageous if the bulk portion 110 and/or the surface layer 120 or even the entire stamp are fluid-permeable. Impregnation can then occur faster. It has been found that a wide variety of substances having basic organic groups (and especially Lewis bases with nitrogen donors) may be used to impregnate the elastomer stamp 14 using the above impregnation method and subsequently catalyse the polymerization of the imprinting composition 12 in a lithographic imprinting process such as the process described with the aid of FIG. 1-3. The example substances 1-36, listed in Table 2, have been successfully loaded into a PDMS stamp by impregnation. The PDMS stamps loaded with the substances 1-18 and 20 to 36 have been successfully used to solidify an alkoxysiloxane-based imprinting composition at room temperature at a faster rate compared to the rate achieved with a same stamp without a corresponding substance loaded. It is noted that compound 19 is not a base and this one did not result in acceleration of solidification of the imprinting composition. This compound is included in the table for another purpose as explained hereinbelow. All compounds cause solidification of 55 nm to 150 nm thick imprinting solutions on a silicon substrate within 1-5 minutes at room temperature (RT) after contacting of the stamp with the imprinting solutions.
For comparison, imprinting solutions of similar thickness on silicon cured with a stamp without base in around 40 minutes at RT and in 10 minutes at 50 degrees Celsius.

TABLE 2 (compounds 1-36; most entries are structural formulas that are not reproduced here)
1.-8. [structural formulas not reproduced]
9. HN(n-C2H5)2
10. HN(n-C2H5OH)2
11. HN(n-C5H11)2
12. H2N(n-C5H11)
13. N(n-C2H5)3
14. N(n-C2H5OH)3
15. [structural formula not reproduced]
16. N(n-C8H17)3
17. N(n-C12H23)3
18.-33. [structural formulas not reproduced]
34. HN(C2H4NH2)(C3H6Si(OCH3)3)
35.-36. [structural formulas not reproduced]

This clearly demonstrates that the substances may be selected from a wide variety of bases having a pKa of higher than 7. In general it was observed that the stronger the base (higher pKa or lower pKb), the faster the solidification of the alkoxysiloxane-based imprinting composition occurred. Preferably the base has a pKa between 8 and 13. More preferably the pKa is between 10 and 13. Based on their pKa values as known in the art, the person skilled in the art will be able to choose appropriate bases for the invention. The wide variety of suitable substances includes, but is not limited to, compounds having a Formula 8: In Formula 8, R1-R3 may be individually selected from hydrogen, an unsubstituted or substituted C2-C20 alkyl group, an unsubstituted or substituted C2-C20 alkenyl group, an unsubstituted or substituted C2-C20 alkynyl group, an unsubstituted or substituted C3-C20 cycloalkyl group, an unsubstituted or substituted C4-C20 cycloalkenyl group, an unsubstituted or substituted C3-C20 heterocyclic group, an unsubstituted or substituted C6-C30 aryl group, an unsubstituted or substituted C6-C30 alkylaryl group, or an unsubstituted or substituted C4-C30 heteroaryl group, provided that R1-R3 are not all hydrogen. At least two of R1-R3 may form part of the same unsubstituted or substituted C3-C20 cycloalkyl group, unsubstituted or substituted C4-C20 cycloalkenyl group, unsubstituted or substituted C3-C20 heterocyclic group, unsubstituted or substituted C6-C30 aryl group or unsubstituted or substituted C4-C30 heteroaryl group, i.e. may form part of the same ring structure.
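The pKa selection rule above (pKa above 7 suitable; 8-13 preferred; 10-13 most preferred) can be expressed as a small filter. The candidate names and pKa values below are illustrative placeholders chosen for the sketch, not data from the patent:

```python
def suitable_base(pka: float) -> bool:
    """Per the text, bases with a pKa higher than 7 accelerated solidification."""
    return pka > 7.0

def preference_rank(pka: float) -> int:
    """2 = most preferred (pKa 10-13), 1 = preferred (pKa 8-13),
    0 = merely suitable (pKa > 7), -1 = not suitable."""
    if 10.0 <= pka <= 13.0:
        return 2
    if 8.0 <= pka <= 13.0:
        return 1
    return 0 if suitable_base(pka) else -1

# Illustrative approximate pKa values (conjugate acids, in water).
candidates = {"diethylamine": 10.8, "triethylamine": 10.7, "pyridine": 5.2}
selected = [name for name, pka in candidates.items() if suitable_base(pka)]
```

Sorting the surviving candidates by `preference_rank` would then order them from most to least preferred.
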
The term “substituted” may refer to a hydrogen of a compound or group being substituted with a substituent selected from a halogen (F, Br, Cl, or I), a hydroxy group, a nitro group, a cyano group, an amino group, an azido group, an amidino group, a hydrazino group, a hydrazono group, a carbonyl group, a carbamyl group, a thiol group, an ester group, a carboxyl group or a salt thereof, a sulfonic acid group or a salt thereof, a phosphoric acid group or a salt thereof, a C1 to C20 alkyl group, a C1 to C20 alkoxy group, a C2 to C20 alkenyl group, a C2 to C20 alkynyl group, a C6 to C30 aryl group, a C7 to C30 arylalkyl group, a C1 to C20 heteroalkyl group, a C1 to C20 heteroaryl group, a C3 to C20 heteroarylalkyl group, a C3 to C30 cycloalkyl group, a C3 to C15 cycloalkenyl group, a C6 to C15 cycloalkynyl group, a C2 to C20 heterocycloalkyl group, and a combination thereof. In an embodiment, only one of R1-R3 is hydrogen, i.e. the substance is a secondary amine. In yet another embodiment, none of R1-R3 is hydrogen, i.e. the basic organic amine is a tertiary amine. In the invention and the embodiments described herein the stamp, the stamp body and/or the surface layer can be fluid-permeable. As indicated herein before, e.g. for the stamps made with impregnation methods, such permeability can be advantageous. An alternative example method of loading the stamp 14 with the basic organic group is by enclosing the basic organic group in the stamp body during its formation. In this example such formation may be through curing or cross-linking of e.g. a base material with a further material to form the polysiloxane bulk portion 110 of the stamp body. The polysiloxane-based bulk portion 110 in this method is thus formed by reacting (this preferably includes crosslinking) a polysiloxane base material with a reactive material, e.g. one capable of cross-linking with the base material.
This alternative enclosure method, and the stamp made with it, has the advantage that swelling of the stamp 14 due to solvent impregnation is avoided, such that it is more straightforward to produce a stamp 14 with a high-quality feature pattern 16, i.e. a feature pattern 16 that is not distorted by such swelling effects. It also may save stamp manufacture time. For a polysiloxane bulk portion stamp, a suitable base material is one having free alkyne or alkene (unsaturated bond) groups, such as for example vinyl groups, that are capable of reacting with hydrosilane moieties of a hydrosilane-modified polysiloxane. Others can be used, such as aldehydes and ketones, and although these will lead to different crosslink chemistry, the principle of the method is not different from the one employing the alkenes. Examples of such cross-linking polysiloxanes include HMS-301 and HMS-501, which both are trimethylsiloxy-terminated methylhydrosiloxane-dimethylsiloxane copolymers marketed by the company Gelest, Incorporated. Examples of free alkene group modified polysiloxane bulk portions are the Sylgard® family of products as marketed by the Dow Corning Corporation, which are polydimethylsiloxane oligomers or polymers having vinyl end-groups (an example is given in Formula 11, representing a linear vinyl end-functionalised polysiloxane wherein n may be chosen according to need). Thus, the vinyl groups of a vinyl substituted part can react with the hydrosilane moiety according to e.g. Reaction Scheme II, which describes the so-called hydrosilylation reaction catalysed by a hydrosilylation catalyst. Hydrosilylation involves addition of a hydrosilane group over the unsaturated bond, of the vinyl group in this case. Hydrosilylation will not be described in detail here as it is known per se. Useful catalysts are described herein below.
It will be clear that the reverse situation, in which the base material is hydrosilane modified and the reactive polysiloxane is an alkene modified polysiloxane, can also be used. In accordance with this method, the substance (comprising the basic organic group) may be dissolved in a polysiloxane base material in an amount of at least 0.2% by weight based on the total weight of the polysiloxane base material, e.g. in an amount ranging from 0.2% to 5% by weight based on the total weight of the polysiloxane base material. The base material is one of the precursors for the bulk portion polysiloxane of the stamp body. An example of a suitable polysiloxane bulk portion base material is soft-PDMS (having a low Young's modulus of below 10 MPa), which is for example marketed by the Dow Corning Corporation under the trade name Sylgard®, or X-PDMS (which is specially designed for having a higher Young's modulus of e.g. above 20 MPa) as described in WO2009/147602, which reference is herewith incorporated by reference. The base materials referred to all have vinyl groups as reactive groups to polymerise with. In the Sylgard® products the base material has linear polymer or oligomer polysiloxane chains, while in the X-PDMS precursor base materials there are branch points where several of the branches (three or four branches) include vinyl groups. The polysiloxane bulk portion base material with the substance dissolved therein subsequently may be left to stand for a period of time, e.g. at least six hours, such as between 8-12 hours. The polysiloxane base material with the substance dissolved therein may be left to stand for this period of time at an elevated temperature to promote distribution of the substance through the polysiloxane base material. A suitable temperature may be chosen in the range from 30° C. to 100° C., e.g. 50° C.
Next, the cross-linking polysiloxane is added to the bulk portion base material in any suitable ratio, for example 2-5 parts by weight of the polysiloxane bulk portion base material per one part of the cross-linking polysiloxane. The cross-linking polysiloxane is a second precursor that is capable of polymerising or reacting with the first precursor. The mixture may be positioned in a suitable mold or master comprising a negative of the feature pattern 16 in case the feature pattern 16 is to be reproduced in the polysiloxane bulk portion 110 of the stamp 14. The mixture is subsequently cured (solidified) for a suitable period of time, e.g. for at least 4 hours, e.g. for 6-12 hours, to form the cured (cross-linked) polysiloxane bulk portion of the stamp 14, which curing may be performed at elevated temperatures to increase the reaction speed of the curing reaction. For example, the elevated temperature at which the curing reaction takes place may be chosen in the range from 30° C. to 100° C., e.g. in the range from 50° C. to 100° C. It was found that a stamp 14 produced in this manner releases without problems from a mold or master comprising the negative of the feature pattern 16. The curing reaction is used to tune the (Young's) modulus of the polysiloxane bulk portion 110 of the stamp 14. The inclusion of linear cross-linking polysiloxane chains tunes the modulus of the polysiloxane bulk portion 110 such that the reaction conditions of the curing reaction, e.g. the amount of cross-linking polysiloxane, the duration and/or the temperature of the curing reaction, can be used to tailor the modulus of the polysiloxane bulk portion 110 to a specific value, i.e. to tailor the flexibility or compressibility of the polysiloxane bulk portion 110.
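To make the mixing ratio concrete, the sketch below computes how much cross-linking polysiloxane to add for a given mass of base material, enforcing the 2-5 parts base per one part cross-linker window stated above. The helper name is hypothetical and the sketch is only an arithmetic illustration of that ratio:

```python
def crosslinker_mass_g(base_mass_g: float, parts_base_per_part_xlinker: float = 3.0) -> float:
    """Mass of cross-linking polysiloxane to add to `base_mass_g` of bulk-portion
    base material, for 2-5 parts base per one part cross-linker (the example
    range cited in the text)."""
    if not 2.0 <= parts_base_per_part_xlinker <= 5.0:
        raise ValueError("ratio outside the 2-5 parts-per-part range given in the text")
    return base_mass_g / parts_base_per_part_xlinker

# Example: 9 g of base material at a 3:1 ratio calls for 3 g of cross-linker.
```
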
More specifically, the low modulus siloxanes may be composed of linear chains with reactive vinyl groups at the ends, which reactive vinyl groups are cross-linked by linear hydrosiloxanes which are functionalized along the chains, to adjust the modulus of the polysiloxane bulk portion 110. At this point, it is noted that the aforementioned cross-linking reaction (see Reaction Scheme II) is of the metal complex catalyzed hydrosilylation of vinyl with hydrosiloxanes type. Hydrosilylation reactions like the one described hereinabove are typically catalyzed by a platinum or rhodium catalyst, as is well-known per se (see for example the review article entitled "Metal complex catalyzed hydrosilylation of vinyl with hydrosiloxanes (A review)" by D. A. Vekki and N. K. Skvortsov in CHEMISTRY AND CHEMICAL TECHNOLOGY⋅TECHNOLOGY OF ORGANIC SUBSTANCES). Suitable catalysts that were used in experiments described herein are for example Platinum-cyclovinylmethyl siloxane complex (C3H6OSi)3-5Pt0; 2% Pt in cyclomethylvinylsiloxanes (CAS No 68585-32-0), or Platinum divinyltetramethyldisiloxane complex C24H54O3Pt2Si6; 2% Pt in xylene (CAS 68478-92-2), as marketed by the company Gelest, Incorporated. But others can be used. In all cases care has to be taken that the catalyst is not poisoned by the substance, and in particular by its basic organic group, as added to the polysiloxane base material. It is known that Lewis bases generally are capable of poisoning such catalysts. Thus, also the basic organic groups like the amines may poison such platinum or rhodium catalysts, rendering them ineffective for the crosslinking reaction. In case of the substances as indicated herein above, it appears that steric hindrance within the parts of the basic organic group is of importance for whether a substance is compatible with the catalyst or not.
Generally, the tendency is that the more sterically hindered the basic group or the donor atom of the Lewis base, the more compatible the base is with the platinum catalyst. Those skilled in the art will know how to select catalysts and/or basic organic groups (such as the amines) that are compatible with each other by experimentation, possibly in combination with the steric hindrance criterion. All the compounds in Table 2 were subjected to the above described enclosure method, in which the substance was dissolved in an uncured PDMS in an amount between 0.5-3% by weight based on the total weight of the uncured PDMS and left to stand overnight, after which a 1:1 mixture of HMS-301 and HMS-501 as obtained from Gelest, Incorporated was added in ratios of 1-5 by weight based on the total weight of the uncured PDMS, and the mixture was left to stand overnight at a temperature of 50° C. Out of the compounds in Table 2, compounds 14, 17 and 18-34 allowed the platinum-catalyzed curing reaction to take place satisfactorily and hence made it into a rubber stamp body. The other compounds either prevented curing of the stamp material precursor (compounds 1, 2, 4-6, 9, 10, 12, 13, 16, 35 and 36), apparently causing catalyst poisoning, or did not mix with the precursor base material (compounds 3, 7, 8, 11, 15).
Without wishing to be bound by theory, it appears that a suitable secondary amine for use in the above described enclosure method may be an amine according to Formula 8 in which R1 is hydrogen and R2 and R3 may be individually selected from an unsubstituted or substituted branched C3-C20 alkyl group, an unsubstituted or substituted C2-C20 alkenyl group, an unsubstituted or substituted C2-C20 alkynyl group, an unsubstituted or substituted C3-C20 cycloalkyl group, an unsubstituted or substituted C4-C20 cycloalkenyl group, an unsubstituted or substituted C3-C20 heterocyclic group, an unsubstituted or substituted C6-C30 aryl group, an unsubstituted or substituted C6-C30 alkylaryl group, or an unsubstituted or substituted C4-C30 heteroaryl group. In an embodiment of this secondary amine, R2 and R3 may be individually selected from an unsubstituted or substituted branched C3-C20 alkyl group and an unsubstituted or substituted C6-C30 alkylaryl group. Preferably at least one of, and more preferably at least two of, R2 and R3 have a carbon atom attached to the nitrogen that bears at least two other atoms different from H, such as two carbon atoms. These R2 and R3 may thus be e.g. isoalkyl with less than 20 carbon atoms (e.g. isopropyl, isobutyl, isopentyl, isohexyl, isoheptyl, isooctyl, isononyl, isodecyl, isoundecyl or isododecyl). Thus, apparently a number of the secondary amines have substituents that introduce sufficient steric hindrance to prevent the poisoning of the hydrosilylation catalyst. Tertiary amines also fulfill this characteristic.
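The branching criterion above can be sketched as a toy predicate. The representation is a deliberate simplification introduced here for illustration (each N-substituent is reduced to the count of non-hydrogen atoms on its nitrogen-attached carbon), and the helper name is hypothetical:

```python
def meets_hindrance_criterion(branch_counts: tuple) -> bool:
    """branch_counts[i]: number of non-hydrogen atoms (besides N) bonded to the
    carbon of substituent R2/R3 that is attached to the nitrogen.
    The preferred case per the text: both substituents carry >= 2 such atoms."""
    return all(count >= 2 for count in branch_counts)

# Diisopropylamine: each N-attached carbon bears two methyl carbons -> hindered.
# Di-n-pentylamine (cf. compound 11): each N-attached carbon bears only one
# further carbon -> not hindered, consistent with the observed catalyst poisoning.
```
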
Thus, a suitable tertiary amine for use in the above described enclosure method may be an amine according to Formula 8 in which R1-R3 may be individually selected from an unsubstituted or substituted linear C2-C20 alkyl group or branched C3-C20 alkyl group, an unsubstituted or substituted C2-C20 alkenyl group, an unsubstituted or substituted C2-C20 alkynyl group, an unsubstituted or substituted C3-C20 cycloalkyl group, an unsubstituted or substituted C4-C20 cycloalkenyl group, an unsubstituted or substituted C3-C20 heterocyclic group, an unsubstituted or substituted C6-C30 aryl group, an unsubstituted or substituted C6-C30 alkylaryl group, or an unsubstituted or substituted C4-C30 heteroaryl group. In an embodiment of this tertiary amine, R1-R3 may be individually selected from an unsubstituted or substituted linear C2-C20 alkyl group or branched C3-C20 alkyl group and an unsubstituted or substituted C6-C30 alkylaryl group. A suitable tertiary amine for use in the above described enclosure method may be an amine according to Formula 10: In Formula 10, R1-R4 may be individually selected from an unsubstituted or substituted linear C2-C20 alkyl group or branched C3-C20 alkyl group. For example, R1-R4 may be individually selected from methyl, ethyl or propyl. In an embodiment, R1-R4 are all the same, such as methyl or ethyl. More generally, the basic organic group preferably includes a structure according to Formula 9: Herein N can be the donor atom of a Lewis base, X is chosen from the group consisting of oxygen, nitrogen, sulfur and phosphorous, and X may thus also operate as a Lewis base donor atom. R1-R4 are individually selected from an unsubstituted or substituted aryl group, a linear C2-C20 alkyl group or branched C3-C20 alkyl group, preferably wherein R1-R4 are individually selected from methyl, ethyl or propyl. R5-R7 can be individual organic groups or one and the same organic group comprising one or more hydrogen, carbon, oxygen, nitrogen and sulfur atoms, with less than 20 carbon atoms.
Preferably there are less than 10 carbon atoms. Preferably R5 to R7 form a group with a conjugated system comprising at least the two olefinic bonds of Formula 9 and one other double bond (C═N, C═O or C═C) and/or at least one other carbon-carbon triple bond. Preferably, the conjugated system is an aromatic or heteroaromatic system. Preferably the heteroaromatic system comprises at least two benzene rings. These compounds effectively have two donor atoms reinforcing their retention of protons or other Lewis acids. Hence they provide strong bases without having a formal charge. They are often referred to as superbases. Since monoalkylamines and some of the dialkylamines apparently poison the catalyst and could therefore not be incorporated directly into stamps with the above method (see e.g. compounds 9 to 13 and 16), a further method makes use of protected amines as part of the substances. One method of protecting amines is through reaction with a tert-butyloxycarbonyl (Boc) group. The chemistry for protection and deprotection of the amine with such a group is well known per se and will not be described in detail. One method can be heating a mixture of the amine to be protected and di-tert-butyl dicarbonate (Boc2O) in tetrahydrofuran (THF) at around 40° C. Deprotection (providing the free amine group) can be done by exposing the protected amine (a carbamate) to e.g. 3M hydrochloric acid at ambient temperature. Other methods for protection with Boc and deprotection may be used. The method may be used to protect one or two hydrogen atoms borne by an amine and works for primary and secondary amines. See for example: R. Varala, S. Nuvula, S. R. Adapa, J. Org. Chem., 2006, 71, 8283-8286. The protected amine can then be used during the curing of the stamp material precursor, such as the polysiloxane materials described herein before. Compound 30 was used to test the Boc protection method with a stamp.
It was bought as is, and as described herein before it was successfully built in during curing as it did not result in poisoning of the hydrosilylation catalyst. The Boc groups were removed by exposure of the stamp including compound 30 to 3M HCl in water at room temperature for 60 minutes. The Boc group provides carbon dioxide and t-butanol as byproducts upon deprotection, which do not interfere with the stamp's function. The test was performed several times, and some stamps were, after exposure to HCl, rinsed with isopropanol and water and dried in an oven at 90° C. Some of the stamps were exposed to a solution of base for a period of time after their exposure to HCl. The base can be sodium bicarbonate or brine solution or even sodium hydroxide solution. This may be done to remove HCl bound to free amine groups. Thus with this method effectively all amines can be incorporated using the in situ curing method. For PDMS stamps formed by the above enclosure method, it was found that upon bringing such a stamp into contact with a 1:1 (by weight) TMOS-MTMS sol-gel layer of 70-250 nm thickness on a silicon substrate, with and without EEOL, having a pH of about 4.5, all stamps effectively solidified the TMOS-MTMS sol-gel layer in less than 60 seconds. For some of the compounds (all tertiary amines), such as e.g. 14 and 22-24, this could be repeated at least 50 times without observable degradation in the fidelity of the feature pattern formed in the solidified TMOS-MTMS sol-gel layer and without a significant increase in the curing time of the TMOS-MTMS sol-gel layer, thus demonstrating that the basic organic amine is substantially retained within the PDMS stamp without degrading its feature pattern 16. As a comparative example, a PDMS stamp without an enclosed substance was used to imprint a TMOS-MTMS sol-gel layer of 70-250 nm thickness on a silicon substrate, with and without EEOL, having a pH of about 4.5.
In this comparative example, complete cross-linking of the sol-gel system took about 20 minutes. This potentially can be explained by the fact that under acidic conditions predominantly linear chains are formed in the polycondensation reaction, whereas under basic conditions predominantly cross-linked chains are formed in the polycondensation reaction, such that catalysis of the polycondensation reaction by a base reduces the time required to form the cross-linked polysiloxane network. More generally, base-catalysed alkoxysilane polycondensation reactions are faster than acid-catalysed alkoxysilane polycondensation reactions. The excellent retention of the basic compounds within the stamp 14 is furthermore demonstrated by the fact that in the above experiments, residues of the sol-gel system remaining on the stamp after the imprinting process could be removed by immersing the stamp into a 1% HF solution for two minutes and rinsing with deionized water, after which the stamp was baked for 15 minutes on a hot plate at 70° C. Following this cleaning procedure, the catalytic activity of the stamp in subsequent imprinting cycles was unaffected. In an embodiment, the stamp 14 may comprise a separate surface layer 120 as previously explained. In this embodiment, the surface layer 120 may be grafted or adhered to the polysiloxane bulk material into which the basic organic amine is dissolved prior to curing of the polysiloxane bulk material, after a partial curing of the polysiloxane bulk material or post-curing of the polysiloxane bulk material. This has the advantage that the surface layer 120 including the feature pattern 16 is not exposed to the basic organic amine during its manufacture, e.g. its curing. This therefore safeguards the desired elastomeric properties of this surface layer, because interference of the basic organic amine in particular in the curing reaction of the surface layer is avoided.
Instead, the basic organic amine may diffuse into the surface layer 120 following its grafting or adhesion to the polysiloxane bulk 110. For this reason, it is preferred that the surface layer 120 is grafted or adhered to the polysiloxane bulk 110 prior to (completion of) the curing reaction of the polysiloxane bulk 110, such that the elevated temperatures at which the curing reaction may be performed, e.g. temperatures in the range of 30-100° C., can accelerate the diffusion process, thereby ensuring that sufficient basic organic amine is diffused into the surface layer 120 to effectively catalyse the polymerization reactions in the imprinting composition layer 12 when the elastomer stamp 14 is brought into contact with this imprinting composition layer. In an experiment, compound 22 was enclosed in a soft-PDMS stamp layer (Young's modulus of about 2-3 MPa) in amounts of 2%, 1%, 0.5%, 0.25% and 0.125%, after which an X-PDMS layer (Young's modulus of about 70-80 MPa) was adhered to the soft-PDMS stamp layer. The layer stack was subsequently cured at 50° C. for 2-24 hours. Immediately after curing, the thus obtained stamp, having a total thickness of 500-1000 micron, was brought into contact with a TMOS-MTMS sol-gel layer of 60-150 nm thickness on a silicon substrate, with the desired cross-linking of the sol-gel layer achieved within 30 seconds. In the above embodiments, the basic organic amine preferably has a vapour pressure not exceeding 0.2 mbar at 25° C. to avoid excessive evaporation of the basic organic substance from the stamp 14. Such evaporation in the vicinity of the imprinting composition 12, e.g. during positioning of the stamp 14 over the substrate 10, can cause the undesired transfer of the basic organic amine from the stamp 14 into the imprinting composition 12, which undesired transfer can cause premature polymerization of the imprinting composition 12.
Such premature solidification of the imprinting composition 12 can prohibit (high fidelity) reproduction of the feature pattern 16 in the imprinting composition 12, as previously explained. To avoid substantial evaporation, substances with a larger molecular weight are in general preferred. Thus, the substituents of the successfully included compounds, such as e.g. the mono-, di- and trialkyl amines in stamps, may have more than 5 carbon atoms, preferably more than 10, more preferably more than 15 or even more than 20. For example, the above mentioned secondary or tertiary amines with such numbers of carbon atoms can be used. The person skilled in the art can pick e.g. the relevant ones from Table 2 according to these concepts. Other reactive crosslinking materials and reaction schemes than the ones described herein above can be used to arrive at a stamp of the invention having bases incorporated either through impregnation or built in via reaction. Yet another method of manufacture of stamps according to the invention builds on the method where curing of the stamp body material is done in the presence of a substance carrying the organic basic group, as described herein above, for example for stamp bodies of polysiloxanes and organic amine compounds. This may be referred to as the incorporation method. In this method, the substances are chosen such as to comprise a reactive group that may react with the elastomer body precursor material, for example during the curing of the elastomer body precursor material to form the organic polymeric bulk portion and/or the surface layer. Using this approach, the substance, and therewith the basic organic groups, can be permanently linked (through covalent bonding) to the elastomer stamp body. The method will be exemplified for the polysiloxane stamp as described herein above. Thus a PDMS base material (precursor) having vinyl end-groups was mixed with any one of the compounds 30 to 36 of Table 2.
Concentrations or amounts used were the same as in the curing method described herein before. Most of these compounds also have vinyl end-groups, except for compounds 35 and 36, which have trialkoxysilane groups as reactive groups. While compounds 30 to 32 were added to the total curable mixtures in amounts of 1 weight %, these amounts were 3% for compounds 33 to 36. To the mixture were added a crosslinking polysiloxane having hydrosilane groups and a hydrosilylation catalyst, as was done for the earlier described curing experiments. The amounts of the base material (first precursor) and crosslinker polysiloxane (second precursor) are as used in the inclusion method described herein before. All of the precursor (curable) mixtures, except for the ones having compounds 35 and 36, resulted in an elastomer solidified polysiloxane bulk portion having the added compound attached to the crosslinked polysiloxane backbone of the elastomer stamp body. The stiffness of stamps with compounds 33 and 34 was somewhat high, and that resulted in less useful stamps. The vinyl end-groups participated well in the hydrosilylation reaction, but the trialkoxysilane compounds 35 and 36 did not crosslink via the alkoxysilanes. Again, as described herein before, compound 30 is a protected amine, and this one was subjected to deprotection as described hereinabove after curing of the stamp material. The PDMS in the latter method can be a soft PDMS (such as Sylgard based, as described hereinabove and based on linear base material) or a somewhat stiffer PDMS such as the X-PDMS based on branched base material. Either one of them can be mixed with a linear hydrosilane containing second precursor. If the stamp is designated to have the organic surface layer, then this is the layer that preferably includes the bound base compounds. Hence, to achieve this, the method as used for the inclusion method described herein before can be used for the incorporation method.
It will be clear that also for this method, the nature of the basic organic group must be chosen such as not to interfere with the curing reaction. Thus the sterically hindered amines will work best if hydrosilylation is used for the curing and crosslinking reaction. Also protected amines, or other protected Lewis bases, can be used with the invention. The stamps obtained with the attached bases were tested in imprint processes using test situations similar to the ones used herein above. While all of them showed an increased solidification speed upon imprinting compared to an analogous stamp not having the compounds incorporated, compounds 31 to 34 provided faster solidification than stamps with compounds 35 and 36. While compound 32 provided good speed, compound 31 gave the best curing and stamp characteristics. Although the invention of an elastomer stamp including a base that is suitable for stimulating the solidification of an imprinting solution has been elucidated in detail with regard to elastomer stamp bodies including a polysiloxane bulk portion, it will be clear that other elastomers and rubbers may also be used without departing from the inventive concept. Many such materials are available in the prior art and commercially. Indeed, for the impregnation method, such rubbers must be permeable. Otherwise, inclusion methods with curing (polymerisation) of bulk portion precursors in the in situ presence of bases or protected bases need to be performed. Although these are not explicitly described herein, the invention as so implemented can still work. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim.
The word “comprising” does not exclude the presence of elements or steps other than those listed in a claim. The word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements. In the device claim enumerating several means, several of these means can be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. | 62,543 |
11860536

DETAILED DESCRIPTION

Preferred embodiments of the inventive concept will be described with reference to the accompanying drawings for sufficient understanding of the configuration and effects of the present disclosure. The inventive concept may, however, be embodied in various forms and diverse changes, and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the inventive concept to those skilled in the art. A person skilled in the art will understand that the inventive concept may be practiced in any suitable environment. The terms used in the disclosure are for explaining embodiments and are not intended to limit the present disclosure. In the disclosure, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated materials, elements, steps, operations, and/or devices, but do not preclude the presence or addition of one or more other materials, elements, steps, operations, and/or devices. In the disclosure, an alkyl group may be a linear alkyl group, a branched alkyl group, or a cyclic alkyl group. The carbon number of the alkyl group is not specifically limited, but the alkyl group may be an alkyl group of 1 to 30 carbon atoms. Examples of the alkyl group may include a methyl group, an ethyl group, an n-propyl group, an isopropyl group, an n-butyl group, a t-butyl group, an i-butyl group, a 2-ethylbutyl group, a 3,3-dimethylbutyl group, an n-pentyl group, an i-pentyl group, a neopentyl group, a t-pentyl group, and a cyclopentyl group, but an embodiment of the inventive concept is not limited thereto. In the disclosure, the carbon number of an alkoxy group is not specifically limited but may be 1 to 30.
The alkoxy group may include an alkyl alkoxy group and an aryl alkoxy group. In the disclosure, the alkenyl group may be a linear alkenyl group or a branched alkenyl group. The carbon number of the alkenyl group is not specifically limited but may be 1 to 30. Examples of the alkenyl group may include a vinyl group, a 1-butenyl group, a 1-pentenyl group, a 1,3-butadienyl aryl group, a styrenyl group, and a styrylvinyl group, but an embodiment of the inventive concept is not limited thereto. In addition, in the disclosure, the alkenyl group may include an allyl group. In the disclosure, a hydrocarbon ring includes an aliphatic hydrocarbon ring and an aromatic hydrocarbon ring. The hydrocarbon ring may be a monocycle or a polycycle. A heterocycle includes an aliphatic heterocycle and an aromatic heterocycle. The heterocycle may be a monocycle or a polycycle. In the disclosure, examples of the halogen may include fluorine (F), chlorine (Cl), bromine (Br), and iodine (I), but an embodiment of the inventive concept is not limited thereto. In the disclosure, the term “substituted or unsubstituted” corresponds to substituted or unsubstituted with one or more substituents selected from the group consisting of a hydrogen atom, a deuterium atom, a halogen atom, a hydroxyl group, an alkoxy group, an ether group, a halogenated alkyl group, a halogenated alkoxy group, a halogenated ether group, an alkyl group, an alkenyl group, an aryl group, a hydrocarbon ring group and a heterocyclic group. In addition, each of the substituents may be substituted or unsubstituted. For example, the halogenated alkoxy group may be construed as the alkoxy group. In the disclosure, in case of not drawing a chemical bond at a position where a chemical bond is required, it means that a hydrogen atom is bonded at that position, unless otherwise defined. In the disclosure, like reference numerals refer to like elements throughout. 
Hereinafter, in order to explain the inventive concept in more detail, embodiments according to the inventive concept will be explained with reference to the attached drawings. FIG.1 to FIG.3 are cross-sectional views for explaining a method of manufacturing an electronic device using a crosslinker composition according to embodiments of the inventive concept. Referring to FIG.1, a substrate 100 may be provided. On the substrate 100, a lower layer 200 may be formed. For example, the formation of the lower layer 200 may be performed by spin coating. The lower layer 200 may be a single layer or a stack of multiple layers. Though not shown, additional layers may be provided between the substrate 100 and the lower layer 200. The lower layer 200 may be an etching target layer. The lower layer 200 may include any one among a semiconductor material (for example, an organic semiconductor material), a metal material (for example, metal nanoparticles and/or quantum dot nanoparticles), and an insulating material (for example, a polymer insulating material). The semiconductor material may include a polymer or an organic semiconductor material. For example, the semiconductor material may include at least one among P(DPP2DT-TVT), P(DPP2DT-T2), P(DPP2DT-F2T2), P(DPP2DT-TT), P(NDI3OT-Se2), P(NDI3OT-Se), and P(NDI2OD-Se2). However, these are only illustrations, and an embodiment of the inventive concept is not limited thereto. In an embodiment, the polymer insulating material may include at least one among PMMA, PS, and PVDF-HFP. In an embodiment, the metal nanoparticles may include silver nanoparticles (AgNP) and/or gold nanoparticles (AuNP). However, these are only illustrations, and an embodiment of the inventive concept is not limited thereto. In addition, the lower layer 200 may include a crosslinker composition. The crosslinker composition may have a three-dimensional structure.
The concentration of the crosslinker composition may be about 0.1 wt % to about 50 wt %, more preferably about 0.1 wt % to about 5 wt %. According to some embodiments of the inventive concept, the crosslinker composition may be represented by Formula 1 below. In Formula 1, R1 is a linear or branched alkyl group of C1 to C30, or a linear or branched alkoxy group of C1 to C30, R2 is O, S, NH, CH2 or R3 is O, S, NH or CH2, R4 is hydrogen, halogen, a linear or branched alkyl group of C1 to C30, a linear or branched alkoxy group of C1 to C30, a linear or branched alkyl group of C1 to C30 in which at least one hydrogen is substituted with halogen, or a linear or branched alkoxy group of C1 to C30 in which at least one hydrogen is substituted with halogen, R5 is hydrogen or halogen, R6 is CO, CH2, or SO2, “l” and “m” are each independently an integer of 1 to 30, and “n” is an integer of 3 to 12. The halogen may include F or Cl. For example, R4 may include any one selected among Formula R-1 to Formula R-97 below. In Formula R-1 to Formula R-97, “a”, “b” and “c” are each independently an integer of 1 to 30. The compound represented by Formula 1 may include any one selected from Formula 1-1 to Formula 1-6 below. In Formula 1-1 to Formula 1-6, R2 is O, S, NH, CH2 or R3 is O, S, NH or CH2, R4 is hydrogen, halogen, a linear or branched alkyl group of C1 to C30, a linear or branched alkoxy group of C1 to C30, a linear or branched alkyl group of C1 to C30 in which at least one hydrogen is substituted with halogen, or a linear or branched alkoxy group of C1 to C30 in which at least one hydrogen is substituted with halogen, R5 is hydrogen or halogen, R6 is CO, CH2, or SO2, and “l” and “m” are each independently an integer of 1 to 30. In some embodiments of the inventive concept, the crosslinker composition may be represented by Formula 2 below.
In Formula 2, R1 is a linear or branched alkyl group of C1 to C30, or a linear or branched alkoxy group of C1 to C30, R2 is O, S, NH, CH2 or R3 is O, S, NH or CH2, R4 is hydrogen, halogen, a linear or branched alkyl group of C1 to C30, a linear or branched alkoxy group of C1 to C30, a linear or branched alkyl group of C1 to C30 in which at least one hydrogen is substituted with halogen, or a linear or branched alkoxy group of C1 to C30 in which at least one hydrogen is substituted with halogen, R5 is hydrogen or halogen, “l” and “m” are each independently an integer of 1 to 30, and “n” is an integer of 3 to 12. The halogen may include F or Cl. For example, R4 may include any one selected among Formula R-1 to Formula R-97 above. The compound represented by Formula 2 may include any one selected from Formula 2-1 to Formula 2-6 below. In Formula 2-1 to Formula 2-6, R2 is O, S, NH, CH2 or R3 is O, S, NH or CH2, R4 is hydrogen, halogen, a linear or branched alkyl group of C1 to C30, a linear or branched alkoxy group of C1 to C30, a linear or branched alkyl group of C1 to C30 in which at least one hydrogen is substituted with halogen, or a linear or branched alkoxy group of C1 to C30 in which at least one hydrogen is substituted with halogen, R5 is hydrogen or halogen, and “l” and “m” are each independently an integer of 1 to 30. The compound represented by Formula 2 may include any one selected from Formula 3-1 to Formula 3-9 below. Referring to FIG.2, a photomask 300 may be disposed on the lower layer 200. The photomask 300 may expose a portion of the lower layer 200. Light 400 may be irradiated onto the photomask 300. By the light 400, the lower layer 200 may be exposed. The light 400 may be an electron beam or extreme ultraviolet rays. The light 400 may be directly irradiated on the first part 210 of the lower layer 200 exposed by the photomask 300.
By the light 400, the crosslinker composition in the first part 210 of the lower layer 200 may be crosslinked and cured. Particularly, by the light 400, a portion of the chemical bonds of the crosslinker composition may be cleaved to form radicals. The radicals may be free radicals. For example, as shown below, N2 is removed from the crosslinker composition represented by Formula 1, and N radicals (i.e., nitrenes) may be formed. Due to the radicals, an intermolecular bonding reaction between molecules of Formula 1 may occur. The second part 220 of the lower layer 200, which is covered by the photomask 300, may not be exposed to the light 400. That is, the crosslinker composition in the second part 220 of the lower layer 200 may remain uncured. Referring to FIG.3, the photomask 300 may be removed. The second part 220 of the lower layer 200 may be removed using a solution, and the first part 210 of the lower layer 200 may remain to form a lower pattern. Hereinafter, the first part 210 of the lower layer 200 may be referred to as a lower pattern. The lower pattern may be a constituent element of an electronic device. For example, the lower pattern may be any one among an electrode layer, a charge transport layer, an active layer, and an insulating layer constituting an electronic device. For example, the electronic device may be an organic thin film transistor, a logic electronic device, an organic light-emitting device (OLED), or a quantum dot light-emitting diode (QD-LED). The three-dimensional crosslinker composition according to embodiments of the inventive concept may be used for forming a pattern or for manufacturing an electronic device. For example, the crosslinker composition may be used in a patterning process for manufacturing an electronic device. According to the inventive concept, a photoresist layer for forming a pattern or for manufacturing an electronic device may not be required.
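The exposure-and-develop sequence described above is, in effect, negative-tone patterning: regions that receive the light crosslink and survive development, while masked regions remain soluble and wash away. A minimal sketch of that logic, purely as an illustration (the one-dimensional mask layout, dose values, and threshold are assumptions, not part of the disclosure):

```python
# Illustrative model of the negative-tone patterning described above.
# Light reaching the film through mask openings crosslinks the composition;
# during development, only crosslinked (cured) regions remain.

def pattern_layer(mask_open, dose, cure_threshold):
    """mask_open: list of bools (True = photomask opening).
    dose: exposure dose delivered through an opening (arbitrary units).
    Returns a list of bools: True where the lower pattern remains."""
    received = [dose if opening else 0.0 for opening in mask_open]
    cured = [d >= cure_threshold for d in received]  # crosslinked regions
    return cured  # uncured regions dissolve during development

mask = [False, True, True, True, False]  # hypothetical 1-D mask layout
remaining = pattern_layer(mask, dose=10.0, cure_threshold=5.0)
print(remaining)  # [False, True, True, True, False]
```

The sketch only captures the binary expose/cure/develop logic; real dose-response, diffusion, and development chemistry are far richer than this.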
Accordingly, a manufacturing process may be simplified, and at the same time, damage to the constituent elements of an electronic device may be prevented. Further, the electric properties and stability of an electronic device manufactured using a three-dimensional crosslinker composition may be improved. Therefore, contamination caused by a metal element during a manufacturing process of a semiconductor device may be prevented. Hereinafter, a method of preparing the three-dimensional crosslinker composition according to embodiments of the inventive concept will be explained. Example 1 Synthesis of 2-(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5,6-tetrafluorobenzoate) (3Bx) To a 25 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 4-azido-2,3,5,6-tetrafluorobenzoic acid (0.30 g, 1.28 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, thionyl chloride (SOCl2) (0.21 mL, 2.84 mmol) and anhydrous dichloromethane (DCM) (6 mL) were added in that order to prepare a mixture. The mixture was heated to about 70° C., and stirred at about 70° C. for about 26 hours. The mixture was cooled to room temperature (about 25° C.), organic solvents were removed by a rotary evaporator, and additional drying was performed using a vacuum pump to obtain a product. The product was diluted in anhydrous dichloromethane (6 mL) to prepare a first mixture, and the first mixture was prepared under an argon atmosphere. To a separate 25 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon.
To the flask, 2-(hydroxymethyl)propane-1,3-diol (37.40 mg, 0.35 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, anhydrous dichloromethane (8 mL) and triethylamine (0.18 mL, 1.27 mmol) were added in that order to prepare a mixture. To the mixture, the previously prepared first mixture was added dropwise to prepare a second mixture. The second mixture was stirred at room temperature for about 22 hours, and distilled water (15 mL) was added to terminate the reaction. An extracting process of the second mixture was performed using an extraction funnel and dichloromethane (20 mL). The extracting process was repeated three times, moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (ethyl acetate:n-hexane=3:1), and a pure white solid product was obtained by a recrystallization method using n-hexane. The solid product was 0.14 g, and the yield was analyzed as 52%. Example 2 Synthesis of 2,2-bis(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5,6-tetrafluorobenzoate) (4Bx) To a 50 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 4-azido-2,3,5,6-tetrafluorobenzoic acid (1.00 g, 4.25 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, thionyl chloride (0.62 mL, 8.50 mmol) and anhydrous dichloromethane (25 mL) were added in that order to prepare a mixture. The mixture was heated to about 70° C., and stirred at about 70° C. for about 16 hours.
The mixture was cooled to room temperature (about 25° C.), organic solvents were removed by a rotary evaporator, and additional drying was performed using a vacuum pump to obtain a product. The product was diluted in anhydrous dichloromethane (6 mL) to prepare a first mixture, and the first mixture was prepared under an argon atmosphere. To a separate 50 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, pentaerythritol (0.12 g, 0.88 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, anhydrous dichloromethane (15 mL) and triethylamine (0.59 mL, 4.25 mmol) were added in that order to prepare a mixture. To the mixture, the previously prepared first mixture was added dropwise to prepare a second mixture. The second mixture was stirred at room temperature for about 26 hours, and distilled water (10 mL) was added to terminate the reaction. An extracting process of the second mixture was performed using an extraction funnel, distilled water (20 mL) and dichloromethane (20 mL). The extracting process was repeated three times, moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (ethyl acetate:n-hexane=3:1), and a pure white solid product was obtained by a recrystallization method using n-hexane. The solid product was 0.79 g, and the yield was analyzed as 89%. Example 3 Synthesis of 2-(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)-2-((3-((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)-2,2-bis(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)propoxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5,6-tetrafluorobenzoate) (6Bx) To a 50 ml round-bottom flask, a magnetic bar was put and prepared.
The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 4-azido-2,3,5,6-tetrafluorobenzoic acid (1.50 g, 6.38 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, thionyl chloride (0.66 mL, 9.12 mmol) and anhydrous dichloromethane (25 mL) were added in that order to prepare a mixture. The mixture was heated to about 70° C., and stirred at about 70° C. for about 28 hours. The mixture was cooled to room temperature (about 25° C.), organic solvents were removed by a rotary evaporator, and additional drying was performed using a vacuum pump to obtain a product. The product was diluted in anhydrous dichloromethane (15 mL) to prepare a first mixture, and the first mixture was prepared under an argon atmosphere. To a separate 50 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, dipentaerythritol (0.19 g, 0.76 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, anhydrous pyridine (25 mL) was added and stirred to prepare a mixture. To the mixture, the previously prepared first mixture was added dropwise to prepare a second mixture. The second mixture was stirred at room temperature for about 22 hours, and distilled water (15 mL) was added to terminate the reaction. An extracting process of the second mixture was performed using an extraction funnel, distilled water (35 mL) and dichloromethane (30 mL). The extracting process was repeated three times, moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator.
The organic material thus obtained was separated by silica gel column chromatography (dichloromethane:n-hexane=7:1), and a pure white solid product was obtained by a recrystallization method using n-hexane. The solid product was 0.86 g, and the yield was analyzed as 73%. Example 4 Synthesis of hexane-1,2,3,4,5,6-hexayl hexakis(4-azido-2,3,5,6-tetrafluorobenzoate) (6Bx_M) To a 50 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 4-azido-2,3,5,6-tetrafluorobenzoic acid (1.00 g, 4.25 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, thionyl chloride (0.49 mL, 6.8 mmol) and anhydrous dichloromethane (20 mL) were added in that order to prepare a mixture. The mixture was heated to about 70° C., and stirred at about 70° C. for about 23 hours. The mixture was cooled to room temperature (about 25° C.), organic solvents were removed by a rotary evaporator, and additional drying was performed using a vacuum pump to obtain a product. The product was diluted in anhydrous dichloromethane (12 mL) to prepare a first mixture, and the first mixture was prepared under an argon atmosphere. To a separate 50 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, D-mannitol (0.086 g, 0.47 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, anhydrous pyridine (15 mL) was added and stirred to prepare a mixture. To the mixture, the previously prepared first mixture was added dropwise to prepare a second mixture. The second mixture was stirred at room temperature for about 22 hours, and distilled water (15 mL) was added to terminate the reaction.
An extracting process of the second mixture was performed using an extraction funnel, distilled water (30 mL) and dichloromethane (25 mL). The extracting process was repeated three times, moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (dichloromethane:n-hexane=7:1), and a pure white solid product was obtained by a recrystallization method using n-hexane. The solid product was 0.50 g, and the yield was analyzed as 71%. Example 5 Synthesis of 2-((3-((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)-2,2-bis(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)propoxy)methyl)-2-((3-((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)-2,2-bis(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)propoxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5,6-tetrafluorobenzoate) (8Bx) To a 50 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 4-azido-2,3,5,6-tetrafluorobenzoic acid (2.00 g, 8.51 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, thionyl chloride (0.99 mL, 13.62 mmol) and anhydrous dichloromethane (28 mL) were added in order to prepare a mixture. The mixture was heated to about 70° C., and stirred at about 70° C. for about 17 hours. The mixture was cooled to room temperature (about 25° C.), organic solvents were removed by a rotary evaporator, and additional drying was performed using a vacuum pump to obtain a product. The product was diluted in anhydrous dichloromethane (18 mL) to prepare a first mixture, and the first mixture was prepared under an argon atmosphere. To a separate 100 ml round-bottom flask, a magnetic bar was put and prepared. 
The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, tripentaerythritol (0.26 g, 0.71 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, anhydrous pyridine (38 mL) was added and stirred to prepare a mixture. To the mixture, the previously prepared first mixture was added dropwise to prepare a second mixture. The second mixture was stirred at room temperature for about 17 hours, and distilled water (18 mL) was added to terminate the reaction. An extracting process of the second mixture was performed using an extraction funnel, distilled water (40 mL) and dichloromethane (35 mL). The extracting process was repeated three times, moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (dichloromethane:n-hexane=7:1), and a pure white solid product was obtained by a recrystallization method using n-hexane. The solid product was 0.40 g, and the yield was analyzed as 26%. Example 6 Synthesis of 4-(4-azido-2,3,5-trifluoro-6-isopropylphenyl)-4-oxobutyl 4-azido-2,3,5-trifluoro-6-isopropylbenzoate (IP-2Bx) Example 6-1: Synthesis of 1,2,3,4-tetrafluoro-5-isopropylbenzene To a 50 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 1,2,3,4-tetrafluorobenzene (1.47 g, 9.60 mmol) and AlCl3 (0.38 g, 2.89 mmol) were added and stirred at about 0° C. for about 30 minutes under an argon atmosphere. To the flask, isopropyl chloride (1.14 mL, 12.48 mmol) was added dropwise. After about 1 hour, the temperature was raised to room temperature, and stirring was performed at room temperature for about 3 hours to prepare a mixture.
The reaction of the mixture was terminated using distilled water, and an extracting process was performed three times using an extraction funnel and chloroform. Moisture in the organic layer obtained by the extracting process was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was filtered using silica gel and dichloromethane and dried through a rotary evaporator to obtain a turbid yellow oily product. The product was 1.40 g, and the yield was analyzed as 76%. Example 6-2: Synthesis of 2,3,4,5-tetrafluoro-6-isopropylbenzoic Acid To a 250 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 1,2,3,4-tetrafluoro-5-isopropylbenzene (10.87 g, 56.58 mmol) and THF (110 mL) were added and stirred at about −78° C. for about 30 minutes under an argon atmosphere. To the flask, n-BuLi (42.4 mL, 67.90 mmol) was added dropwise. After about 2 hours, carbon dioxide gas was bubbled into the flask at about −78° C. for about 30 minutes. Then, stirring was performed for about 14 hours while slowly raising the temperature to room temperature. At about 0° C., 80 mL of 1 M HCl was added dropwise to the flask to prepare a mixture. An extracting process of the mixture was performed three times using an extraction funnel, 100 mL of distilled water and ethyl acetate. Moisture in the organic layer obtained by the extracting process was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was dried using a vacuum pump to obtain a white solid product. The solid product was 8.28 g, and the yield was analyzed as 62%. Example 6-3: Synthesis of ethane-1,2-diyl bis(2,3,4,5-tetrafluoro-6-isopropylbenzoate) To a 25 ml round-bottom flask, a magnetic bar was put and prepared.
The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 2,3,4,5-tetrafluoro-6-isopropylbenzoic acid (0.5 g, 2.12 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, thionyl chloride (0.61 mL, 8.49 mmol) and anhydrous dichloromethane (10 mL) were added in that order to prepare a mixture. The mixture was heated to about 70° C., and stirred at about 70° C. for about 18 hours. The mixture was cooled to room temperature, organic solvents were removed by a rotary evaporator, and additional drying was performed using a vacuum pump to obtain a product. The product was diluted in anhydrous dichloromethane (8 mL) to prepare a first mixture, and the first mixture was prepared under an argon atmosphere. To a separate 25 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, ethylene glycol (0.053 g, 0.85 mmol), triethylamine (0.6 mL, 4.23 mmol) and anhydrous dichloromethane (8 mL) were added to prepare a mixture. To the mixture, the previously prepared first mixture was added dropwise to prepare a second mixture. The second mixture was stirred at room temperature for about 26 hours, and distilled water (6 mL) was added to terminate the reaction. An extracting process of the second mixture was performed using an extraction funnel, distilled water (15 mL) and dichloromethane (25 mL). The extracting process was repeated three times, moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (ethyl acetate:n-hexane=1:10) to obtain a white solid product. The solid product was 0.342 g, and the yield was analyzed as 81%.
Example 6-4: Synthesis of 4-(4-azido-2,3,5-trifluoro-6-isopropylphenyl)-4-oxobutyl 4-azido-2,3,5-trifluoro-6-isopropylbenzoate (IP-2Bx) A 10 mL brown vial containing an oven-dried magnetic bar was prepared. To the vial, ethane-1,2-diyl bis(2,3,4,5-tetrafluoro-6-isopropylbenzoate) (0.34 g, 0.68 mmol) and sodium azide (0.13 g, 2.05 mmol) were added, and distilled water (0.7 mL) and DMF (7 mL) were added thereto, followed by stirring on a hot plate at about 60° C. for about 19 hours. After cooling the vial to room temperature, an extracting process was performed four times using an extraction funnel, 25 mL of distilled water, ethyl acetate (8 mL) and n-hexane (2 mL). Moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (chloroform:n-hexane=1:2) to obtain a white solid final product. The solid product was 0.35 g, and the yield was analyzed as 94%. Example 7 Synthesis of 2,2-bis(((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5-trifluoro-6-isopropylbenzoate) (IP-4Bx) Example 7-1: Synthesis of 2,2-bis(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)propane-1,3-diyl bis(2,3,4,5-tetrafluoro-6-isopropylbenzoate) To a 25 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 2,3,4,5-tetrafluoro-6-isopropylbenzoic acid (1.0 g, 4.23 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, thionyl chloride (2 mL, 27.57 mmol) and anhydrous dichloromethane (10 mL) were added in that order, heated to about 70° C., and stirred at about 70° C. for about 22 hours to prepare a mixture.
The mixture was cooled to room temperature, organic solvents were removed by a rotary evaporator, and additional drying was performed using a vacuum pump to obtain a product. The product was diluted in anhydrous dichloromethane (10 mL) to prepare a first mixture, and the first mixture was prepared under an argon atmosphere. To a separate 50 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, pentaerythritol (0.11 g, 0.81 mmol) and anhydrous pyridine (10 mL) were added and stirred to prepare a mixture. To the mixture, the previously prepared first mixture was added dropwise to prepare a second mixture. The second mixture was stirred at room temperature for about 34 hours, and distilled water (10 mL) was added to terminate the reaction. An extracting process was performed using an extraction funnel, distilled water (20 mL) and dichloromethane (35 mL). The extracting process was repeated three times, moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (ethyl acetate:n-hexane=1:8) to obtain a white solid product. The solid product was 0.51 g, and the yield was analyzed as 61.7%. Example 7-2: Synthesis of 2,2-bis(((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5-trifluoro-6-isopropylbenzoate) (IP-4Bx) A 10 mL brown vial containing an oven-dried magnetic bar was prepared. To the vial, 2,2-bis(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)propane-1,3-diyl bis(2,3,4,5-tetrafluoro-6-isopropylbenzoate) (0.40 g, 0.40 mmol), sodium azide (0.26 g, 4.00 mmol), distilled water (0.4 mL) and DMF (3.6 mL) were added, followed by stirring on a hot plate at about 70° C. for about 39 hours to prepare a mixture.
After cooling the mixture to room temperature, an extracting process was performed four times using an extraction funnel, distilled water (30 mL), ethyl acetate (20 mL) and n-hexane (4 mL). Moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (chloroform:n-hexane=1:2) to obtain a white solid final product. The solid product was 0.42 g, and the yield was analyzed as 95.6%. Example 8 Synthesis of 2-(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)-2-((3-((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)-2,2-bis(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)propoxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5-trifluoro-6-isopropylbenzoate) (IP-6Bx) Example 8-1: Synthesis of 2-(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)-2-((3-((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)-2,2-bis(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)propoxy)methyl)propane-1,3-diyl bis(2,3,4,5-tetrafluoro-6-isopropylbenzoate) To a 25 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, 2,3,4,5-tetrafluoro-6-isopropylbenzoic acid (1.0 g, 4.23 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, thionyl chloride (2 mL, 27.57 mmol) and anhydrous dichloromethane (10 mL) were added in that order to prepare a mixture. The mixture was heated to about 70° C., and stirred at about 70° C. for about 17 hours. The mixture was cooled to room temperature, organic solvents were removed by a rotary evaporator, and additional drying was performed using a vacuum pump to obtain a product. The product was diluted in anhydrous dichloromethane (10 mL) to prepare a first mixture, and the first mixture was prepared under an argon atmosphere.
To a separate 50 ml round-bottom flask, a magnetic bar was put and prepared. The round-bottom flask was vacuum dried while heating with a torch, and the flask was charged with argon. To the flask, dipentaerythritol (0.16 g, 0.55 mmol) and anhydrous pyridine (10 mL) were added and stirred to prepare a mixture. To the mixture, the previously prepared first mixture was added dropwise to prepare a second mixture. The second mixture was stirred at room temperature for about 29 hours, and distilled water (8 mL) was added to terminate the reaction. An extracting process was performed using an extraction funnel, distilled water (20 mL) and dichloromethane (30 mL). The extracting process was repeated three times, moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (ethyl acetate:n-hexane=1:7) to obtain a white solid product. The solid product was 0.40 g, and the yield was analyzed as 46.3%. Example 8-2: Synthesis of 2-(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)-2-((3-((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)-2,2-bis(((4-azido-2,3,5,6-tetrafluorobenzoyl)oxy)methyl)propoxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5-trifluoro-6-isopropylbenzoate) (IP-6Bx) A 20 mL brown vial containing an oven-dried magnetic bar was prepared. To the vial, 2,2-bis(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)propane-1,3-diyl bis(2,3,4,5-tetrafluoro-6-isopropylbenzoate) (0.36 g, 0.40 mmol), sodium azide (0.13 g, 2.06 mmol), distilled water (1.0 mL) and DMF (10 mL) were added, followed by stirring on a hot plate at about 70° C. for about 19 hours. After cooling the mixture to room temperature, an extracting process was performed four times using an extraction funnel, 15 mL of distilled water, ethyl acetate (15 mL) and n-hexane (3 mL).
Moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (chloroform:n-hexane=1:1) to obtain a white solid final product. The solid product was 0.36 g, and the yield was analyzed as 91.9%. Example 9 Synthesis of 2-((3-(3-((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)-2,2-bis(((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)methyl)propoxy)-2,2-bis(((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)methyl)propoxy)methyl)-2-(((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5-trifluoro-6-isopropylbenzoate) (IP-8Bx) Example 9-1: Synthesis of 2-((3-(3-((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)-2,2-bis(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)propoxy)-2,2-bis(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)propoxy)methyl)-2-(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)propane-1,3-diyl bis(2,3,4,5-tetrafluoro-6-isopropylbenzoate) A 25 mL round-bottom flask containing a magnetic bar was prepared. The round-bottom flask was vacuum dried while being heated with a torch, and the flask was charged with argon. To the flask, 2,3,4,5-tetrafluoro-6-isopropylbenzoic acid (1.98 g, 8.40 mmol) was added, and vacuum drying was performed while stirring at room temperature for about 2 hours. To the flask, thionyl chloride (3.04 mL, 42 mmol) and anhydrous dichloromethane (15 mL) were added in sequence to prepare a mixture. The mixture was heated to about 70° C. and stirred at about 70° C. for about 16 hours. The mixture was cooled to room temperature, organic solvents were removed by a rotary evaporator, and additional drying was performed using a vacuum pump to obtain a product. The product was diluted in anhydrous dichloromethane (15 mL) to prepare a first mixture, and the first mixture was kept under an argon atmosphere.
A separate 50 mL round-bottom flask containing a magnetic bar was prepared. The round-bottom flask was vacuum dried while being heated with a torch, and the flask was charged with argon. To the flask, tripentaerythritol (0.26 g, 0.52 mmol) and anhydrous pyridine (14 mL) were added and stirred to prepare a mixture. To the mixture, the first mixture prepared above was added dropwise to prepare a second mixture. The second mixture was stirred at room temperature for about 34 hours, and distilled water (10 mL) was added to terminate the reaction. An extracting process was performed using an extraction funnel, distilled water (25 mL) and dichloromethane (35 mL). The extracting process was repeated three times, moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (ethyl acetate:n-hexane=1:7) to obtain a white solid product. The solid product was 0.35 g, and the yield was analyzed as 32%. Example 9-2: 2-((3-(3-((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)-2,2-bis(((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)methyl)propoxy)-2,2-bis(((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)methyl)propoxy)methyl)-2-(((4-azido-2,3,5-trifluoro-6-isopropylbenzoyl)oxy)methyl)propane-1,3-diyl bis(4-azido-2,3,5-trifluoro-6-isopropylbenzoate) (IP-8Bx) A 20 mL brown vial containing an oven-dried magnetic bar was prepared. To the vial, 2-((3-(3-((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)-2,2-bis(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)propoxy)-2,2-bis(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)propoxy)methyl)-2-(((2,3,4,5-tetrafluoro-6-isopropylbenzoyl)oxy)methyl)propane-1,3-diyl bis(2,3,4,5-tetrafluoro-6-isopropylbenzoate) (0.30 g, 0.14 mmol), sodium azide (0.14 g, 2.24 mmol), distilled water (1.0 mL) and DMF (15 mL) were added, followed by stirring on a hot plate at about 70° C. for about 26 hours.
After cooling the mixture to room temperature, an extracting process was performed four times using an extraction funnel, distilled water (15 mL), ethyl acetate (15 mL) and n-hexane (3 mL). Moisture in the organic layer thus obtained was removed using anhydrous MgSO4, and organic solvents were removed using a rotary evaporator. The organic material thus obtained was separated by silica gel column chromatography (chloroform:n-hexane=1:1) to obtain a white solid final product. The solid product was 0.07 g, and the yield was analyzed as 21.7%. Experimental Example 1: Manufacture of Organic Thin Film Transistor Device Using 4Bx Crosslinker Experimental Example 1-1: Manufacture of Bottom Gate Top Contact (BGTC) Organic Thin Film Transistor Device A heavily n-doped Si wafer having a SiO2 layer with a thickness of about 300 nm was cleansed with acetone, isopropyl alcohol and water in sequence for about 10 minutes each using an ultrasonic cleaner. The cleansed Si wafer was surface treated using ODTS. 5 mg of an organic semiconductor material, 1 wt % of the 4Bx crosslinker and 1 mL of chloroform were combined and then stirred at room temperature to prepare a crosslinker composition. In a glove box under a nitrogen environment, the crosslinker composition was applied by spin coating on the Si wafer to form a thin film. On the thin film, a photomask having a pattern was disposed, and the thin film was exposed to ultraviolet rays to crosslink the crosslinker composition. After that, the uncrosslinked portion was cleansed using chloroform in a spinning state using a spin coater. The substrate was stored in the glove box under a nitrogen environment for about 4 hours to remove remaining solvents. Gold was deposited using a vacuum deposition method to form a source/drain electrode with a thickness of about 50 nm. The resulting channel length and width of the electrode were about 100 μm and about 800 μm, respectively.
Experimental Example 1-2: Manufacture of Top Gate Bottom Contact (TGBC) Organic Thin Film Transistor and Logic Electronic Device Using 4Bx Crosslinker A PEN substrate was cleansed with acetone, isopropyl alcohol and water in sequence for about 10 minutes each using an ultrasonic cleaner. 50 mg of a solution in which silver nanoparticles were dispersed, the 4Bx crosslinker (5 wt %) and 1 mL of anhydrous chloroform were combined and stirred at room temperature to prepare a crosslinker composition. In a glove box under a nitrogen environment, the crosslinker composition was applied by spin coating on the PEN substrate to form a thin film. On the thin film, a photomask having a pattern was disposed, and the thin film was exposed to ultraviolet rays to crosslink the crosslinker composition. After that, the uncrosslinked portion was cleansed using chloroform in a spinning state using a spin coater. The substrate was heated in a vacuum oven at about 150° C. for about 8 hours. The resulting silver-nano electrode had a thickness of about 72 nm, and the channel length and width of the source/drain electrode were about 100 μm and about 800 μm, respectively. 5 mg of an organic semiconductor material, 1 wt % of the 4Bx crosslinker and 1 mL of chloroform were combined and then stirred at room temperature to prepare a crosslinker composition. On the substrate on which the electrode was patterned, the crosslinker composition was applied by spin coating to form a thin film. On the thin film, a photomask having a pattern was disposed, and the thin film was exposed to ultraviolet rays to crosslink the crosslinker composition. After that, the uncrosslinked portion was cleansed using chloroform in a spinning state using a spin coater. The substrate was stored in a glove box under a nitrogen environment for about 4 hours to remove remaining solvents.
In the case of a logic electronic device, after performing a patterning process of a p-type polymer semiconductor as the organic semiconductor material, a solution obtained by combining 5 mg of an n-type organic semiconductor material, 1 wt % of 4Bx and 1 mL of chloroform and stirring was applied by spin coating to form a thin film. On the thin film, a photomask was disposed, and the thin film was exposed to ultraviolet rays to crosslink. The uncrosslinked portion was cleansed using chloroform in a spinning state using a spin coater, and the substrate was stored in a glove box under a nitrogen environment for about 4 hours to remove remaining solvents. Then, a solution obtained by dissolving 70 mg of PMMA and 3.5 mg of the 4Bx crosslinker in 1 mL of n-butyl acetate was applied by spin coating to form a thin film, and the thin film was exposed to ultraviolet rays to crosslink. After drying in a vacuum oven at about 80° C. for about 6 hours, the thickness of the polymer insulating layer was about 482 nm. On the polymer insulating layer, a gate electrode thin film was manufactured by the same method as that used for the silver-nano source/drain (S/D) electrode and crosslinked to manufacture an organic thin film transistor and a logic electronic device having a pattern. Experimental Example 2: Manufacture of Organic Light-Emitting Diode Using 4Bx Crosslinker A glass substrate on which indium tin oxide (ITO) was deposited was cleansed with acetone and isopropyl alcohol in sequence for 10 minutes each using an ultrasonic cleaner. On the substrate, poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) was applied by spin coating (about 4,000 rpm, about 40 seconds) to form a polymer thin film. The polymer thin film was heated on a hot plate at about 110° C. for about 5 minutes and at about 180° C. for about 30 minutes, and then dried in a glove box under a nitrogen environment at about 120° C. for about 15 minutes.
In a separate glove box under the same nitrogen environment, a super yellow light-emitting polymer solution (7 mg/mL) was prepared using cyclohexanone as a solvent and stirring at about 110° C. A 4Bx solution (7 mg/mL) was prepared using cyclohexanone as a solvent and stirring at room temperature. The super yellow light-emitting polymer solution and the 4Bx solution were mixed in a mass ratio of about 99.9:0.1 to prepare a mixture. On the PEDOT:PSS thin film, the mixture was applied by spin coating (about 2,000 rpm, about 60 seconds) and exposed to ultraviolet rays in a glove box under a nitrogen environment to crosslink. In this case, a portion of the super yellow thin film was exposed by using a photomask to induce partial crosslinking of the thin film. A pure cyclohexanone solution was applied on the entire substrate, and then spin coating (about 4,000 rpm, about 20 seconds) was performed twice to cleanse the uncrosslinked portion. Through this, a super yellow polymer pattern may be formed. By using a hot plate in a glove box, the substrate was dried at about 180° C. for about 12 hours. A shadow mask having a pattern was disposed on the substrate, and lithium fluoride of about 2 nm and aluminum of about 150 nm were deposited in sequence using a vacuum deposition apparatus to manufacture an organic light-emitting diode. Experimental Example 3: Manufacture of Quantum Dot Light-Emitting Diode Using 4Bx Crosslinker A glass substrate on which indium tin oxide (ITO) was deposited was cleansed with acetone and isopropyl alcohol in sequence for 10 minutes each using an ultrasonic cleaner. On the substrate, a poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) thin film was coated (about 4,000 rpm, about 40 seconds) as a hole injection layer using a spin coater and then heated on a hot plate at about 110° C. for 5 minutes and at about 180° C. for about 30 minutes and additionally dried in a glove box under a nitrogen environment at about 120° C.
for about 15 minutes. For efficient hole injection into the quantum dots, on the substrate coated with PEDOT:PSS, poly(N,N′-bis-4-butylphenyl-N,N′-bisphenyl)benzidine (poly-TPD) was coated (about 2,000 rpm, about 30 s) as a hole transport layer and dried at about 140° C. for about 30 minutes. An InP/ZnSeS core shell quantum dot solution and a 4Bx crosslinker solution, each dispersed in toluene at the same concentration, were mixed at a ratio of about 0.5 wt % and applied by spin coating on the poly-TPD layer to form a thin film. The InP/ZnSeS quantum dot thin film was irradiated with ultraviolet rays to induce the crosslinking reaction between the ligands at the surface of the quantum dots and the 4Bx crosslinker for the gelation of the quantum dot thin film. In this process, the quantum dot thin film was partially crosslinked using a patterned shadow mask, and the uncrosslinked portion was cleansed with a toluene solvent to form a quantum dot pattern. On the crosslinked quantum dot thin film, a thin film of zinc oxide (ZnO) nanoparticles was formed (about 2,000 rpm, about 60 s) as an electron transport layer, and drying was performed at about 90° C. for about 30 minutes. An aluminum (Al) electrode of about 150 nm was deposited using a thermal deposition apparatus to manufacture a quantum dot light-emitting diode. Experimental Example 4: Evaluation of Stability of Organic Thin Film Transistor Device Using 4Bx Crosslinker By using P(DPP2DT-TVT) and the 4Bx crosslinker, the stability of the organic thin film transistor manufactured according to the manufacturing method of Experimental Example 1 was evaluated. In particular, the results of inspecting the change in hole transport performance over time are shown in FIG. 4.
It could be confirmed that the organic thin film transistor (crosslinked) to which the 4Bx crosslinker was added maintained hole transport performance over time for longer than the organic thin film transistor (pristine) to which the 4Bx crosslinker was not added. That is, it was confirmed that the stability of the organic thin film transistor (crosslinked) to which the 4Bx crosslinker was added was further improved when compared with the organic thin film transistor (pristine) to which the 4Bx crosslinker was not added. Experimental Example 5: Evaluation of Electrical Properties of Organic Thin Film Transistor Device Using 4Bx Crosslinker By using P(DPP2DT-TVT) and the 4Bx crosslinker, the electrical properties of the organic thin film transistor manufactured according to the manufacturing method of Experimental Example 1 were evaluated. The evaluation results on the output properties of the organic thin film transistor at six different gate voltages (VG) are shown in FIG. 5A. The evaluation results on the transfer properties of the organic thin film transistor at a fixed drain voltage (VD) of about −60 V are shown in FIG. 5B. The evaluation results on the hole transport capacity, threshold voltage and on/off ratio of the organic thin film transistor are shown in FIG. 5C. By using P(NDI3OT-Se2) and the 4Bx crosslinker, the electrical properties of the organic thin film transistor manufactured according to the manufacturing method of Experimental Example 1 were evaluated. The evaluation results on the output properties of the organic thin film transistor at six different gate voltages (VG) are shown in FIG. 6A. The evaluation results on the transfer properties of the organic thin film transistor at a fixed drain voltage (VD) of about +60 V are shown in FIG. 6B. The evaluation results on the hole transport capacity, threshold voltage and on/off ratio of the organic thin film transistor are shown in FIG. 6C.
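The mobility, threshold voltage and on/off ratio reported above are conventionally extracted from the saturation-regime transfer curve; the exact extraction procedure is not given in the text, so the following Python sketch assumes the standard square-law model ID = (W·Ci·μ/2L)(VG − Vth)², with the channel dimensions quoted above (L ≈ 100 μm, W ≈ 800 μm) and a hypothetical gate capacitance for a 300 nm SiO2 dielectric:

```python
import numpy as np

# Device parameters: L and W are taken from the text; Ci is an assumed
# textbook value for a 300 nm SiO2 gate dielectric, not from the source.
L = 100e-6    # channel length [m]
W = 800e-6    # channel width [m]
CI = 1.15e-4  # gate capacitance per unit area [F/m^2]

def extract_mobility_vth(vg, id_sat):
    """Fit sqrt(|ID|) vs VG in the saturation regime and return
    (mobility [m^2/V·s], threshold voltage [V])."""
    slope, intercept = np.polyfit(vg, np.sqrt(np.abs(id_sat)), 1)
    mu = 2.0 * L / (W * CI) * slope ** 2
    vth = -intercept / slope
    return mu, vth

if __name__ == "__main__":
    # Synthetic n-type transfer curve with a known mobility as a sanity check
    mu_true, vth_true = 1.0e-4, 2.0                   # 1 cm^2/V·s, Vth = 2 V
    vg = np.linspace(10.0, 60.0, 26)                  # gate sweep above Vth
    id_sat = 0.5 * (W / L) * CI * mu_true * (vg - vth_true) ** 2
    mu, vth = extract_mobility_vth(vg, id_sat)
    print(f"mobility = {mu * 1e4:.2f} cm^2/V·s, Vth = {vth:.2f} V")
```

The same fit applied to a p-type device uses |ID| and a negative gate sweep, which is why the thresholds in Table 1 carry negative signs for the p-type semiconductors.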
By using various semiconductor materials and the 4Bx crosslinker, the electrical properties of the organic thin film transistors manufactured according to the manufacturing method of Experimental Example 1 were evaluated. The evaluation results on the carrier mobility, threshold voltage and on/off ratio of the organic thin film transistors are shown in Table 1 below. In each row, the second mobility value is the maximum mobility (shown in bold in the original table).

TABLE 1
Semiconductor     Carrier mobility (cm² V⁻¹ s⁻¹)   Carrier type   On-off current ratio   Threshold voltage
P(DPP2DT-TVT)     0.81(±0.18); max 1.03            p-type         1.6(±0.8) × 10⁶        −56(±4) V
P(DPP2DT-F2T2)    0.72(±0.21); max 0.94            p-type         2.5(±0.5) × 10⁶        −74(±5) V
P(DPP2DT-T2)      0.25(±0.09); max 0.35            p-type         1.4(±1.1) × 10⁶        −60(±3) V
PTB7-Th           0.005(±0.001); max 0.007         p-type         4.1(±1.8) × 10⁵        −42(±6) V
P(NDI3OT-Se2)     0.15(±0.09); max 0.24            n-type         1.5(±0.9) × 10⁵        26(±4) V
P(NDI2OD-F2T2)    0.10(±0.03); max 0.13            n-type         2.1(±1.1) × 10⁶        47(±7) V
P(NDI2OD-Se2)     0.14(±0.05); max 0.19            n-type         6.1(±0.4) × 10⁵        74(±8) V
P(NDI3OT-Se)      0.14(±0.05); max 0.21            n-type         2.8(±0.8) × 10⁵        28(±4) V

Through the evaluation results on the electrical properties, it could be confirmed that by manufacturing an electronic device using the crosslinker composition of the inventive concept, the manufacturing process may be simplified and, at the same time, damage to the constituent elements of the electronic device may be prevented; accordingly, the electrical properties of the electronic device may be improved. Experimental Example 6: Manufacture of Top Gate Bottom Contact (TGBC) Organic Thin Film Transistor and Logic Electronic Device Using 6Bx A PEN substrate was cleansed with acetone, isopropyl alcohol and water in sequence for about 10 minutes each using an ultrasonic cleaner. A source/drain electrode (Cr/Au) was deposited by a thermal deposition method on the PEN substrate using a shadow mask. The Cr/Au electrode thus formed had a thickness of about 3/17 nm, and the channel length and width of the source/drain electrode were about 100 μm and about 800 μm, respectively.
4.95 mg of an organic semiconductor material, 0.05 mg of the 6Bx crosslinker and 1 mL of chloroform were combined and stirred at room temperature to prepare a crosslinker composition. On the substrate on which the electrode was formed, the crosslinker composition was applied by spin coating (about 1,000 rpm, about 30 s) to form a thin film. On the thin film, a photomask having a pattern was disposed, and the thin film was exposed to ultraviolet rays to crosslink the crosslinker composition. Then, by using a spin coater, the uncrosslinked portion was cleansed with chloroform in a spinning state. The substrate was stored in a glove box under a nitrogen environment for about 4 hours to remove remaining solvents. In the case of a logic electronic device, after performing a patterning process of the p-type polymer semiconductor as the organic semiconductor material, a solution obtained by combining 4.95 mg of an n-type organic semiconductor material, 0.05 mg of 6Bx and 1 mL of chloroform and stirring was applied by spin coating (about 1,000 rpm, about 30 s) to form a thin film. On the thin film, a photomask was disposed, and the thin film was exposed to ultraviolet rays for crosslinking. Then, by using a spin coater, the uncrosslinked portion was cleansed with chloroform in a spinning state, and the substrate was stored in a glove box under a nitrogen environment for about 4 hours to remove remaining solvents. After that, a solution obtained by dissolving 9.7 mg of PMMA and 0.3 mg of the 6Bx crosslinker in 1 mL of n-butyl acetate was applied by spin coating (about 1,000 rpm, about 60 s) to form a thin film, and the thin film was exposed to ultraviolet rays for crosslinking. After drying in a vacuum oven at about 80° C. for about 6 hours, the polymer insulating layer had a thickness of about 52 nm. On the polymer insulating layer, Al used as a gate electrode was deposited to a thickness of about 40 nm to manufacture an organic thin film transistor.
Experimental Example 7: Evaluation of Electrical Properties of Organic Thin Film Transistor Using 6Bx By using P(DPP2DT-TT), the 6Bx crosslinker and a PMMA insulating material, the electrical properties of the organic thin film transistor manufactured according to the manufacturing method of Experimental Example 6 were evaluated. The evaluation results on the output properties of the organic thin film transistor at six different gate voltages (VG) are shown in FIG. 7A. The evaluation results on the transfer properties of the organic thin film transistor at a fixed drain voltage (VD) of about −3 V are shown in FIG. 7B. The evaluation results on the hole transport capacity, threshold voltage and on/off ratio of the organic thin film transistor are shown in FIG. 7C. By using P(NDI3OT-Se2), the 6Bx crosslinker and a PMMA insulating material, the electrical properties of the organic thin film transistor manufactured according to the manufacturing method of Experimental Example 6 were evaluated. The evaluation results on the output properties of the organic thin film transistor at six different gate voltages (VG) are shown in FIG. 8A. The evaluation results on the transfer properties of the organic thin film transistor at a fixed drain voltage (VD) of about +3 V are shown in FIG. 8B. The evaluation results on the hole transport capacity, threshold voltage and on/off ratio of the organic thin film transistor are shown in FIG. 8C. By using various semiconductor materials and the 6Bx crosslinker, the electrical properties of the organic thin film transistors manufactured according to the manufacturing method of Experimental Example 6 were evaluated. The evaluation results on the carrier mobility, threshold voltage and on/off ratio of the organic thin film transistors are shown in Table 2 below. In each row, the second mobility value is the maximum mobility (shown in bold in the original table).
TABLE 2
Semiconductor     Carrier mobility (cm² V⁻¹ s⁻¹)   Carrier type   On-off current ratio   Threshold voltage
P(DPP2DT-TT)      8.8(±1.9); max 10.5              p-type         3.5(±1.1) × 10⁵        −0.05(±0.11) V
P(NDI3OT-Se2)     1.35(±0.1); max 1.75             n-type         2.0(±1.1) × 10⁴        1.25(±0.48) V

Through the evaluation results on the electrical properties, it could be confirmed that by manufacturing an electronic device using the crosslinker composition of the inventive concept, the manufacturing process may be simplified and, at the same time, damage to the constituent elements of the electronic device may be prevented; accordingly, the electrical properties of the electronic device may be improved. In particular, by using 6Bx, the thickness of the PMMA layer may be further reduced, and an improvement in mobility of about two to seven times and a reduction in threshold voltage of about 20 times or more could be secured at the same time. According to the inventive concept, a photoresist layer for forming a pattern or manufacturing an electronic device may not be necessary. Accordingly, a manufacturing process may be simplified and, at the same time, damage to the constituent elements of an electronic device may be prevented. Further, the electrical properties and stability of an electronic device manufactured using a three-dimensional crosslinker composition may be improved. Although the embodiments of the present invention have been described, it is understood that the present invention should not be limited to these embodiments, and various changes and modifications can be made by one of ordinary skill in the art within the spirit and scope of the present invention as hereinafter claimed.
11860537 BEST MODE FOR CARRYING OUT THE INVENTION Embodiments of the present invention will be explained below in detail. Unless otherwise stated in the present specification, the numerical range shown by “A to B” includes the values A and B at both ends, and they are expressed in common units. For example, “5 to 25 mol %” means 5 mol % or more but 25 mol % or less. Also in the present specification, “Cx to y”, “Cx to Cy” or “Cx” means the number of carbon atoms contained in the molecule or in the substituent. For example, “an alkyl group of C1 to 6” means an alkyl group having 1 or more but 6 or less carbon atoms (such as methyl, ethyl, propyl, butyl, pentyl or hexyl). Further in the present specification, “a fluoroalkyl group” means an alkyl group in which one or more hydrogens are substituted with fluorine, and “a fluoroaryl group” means an aryl group in which one or more hydrogens are substituted with fluorine. Also, unless otherwise stated in the present specification, “an alkyl group” means a straight- or branched-chain alkyl group and “a cycloalkyl group” means an alkyl group having a cyclic structure. The “cycloalkyl group” includes an alkyl group having a cyclic structure which contains a straight- or branched-chain alkyl substituent. The term “hydrocarbon group” means a monovalent, divalent or higher-valent group containing carbon and hydrogen and further, if necessary, oxygen or nitrogen. The term “aliphatic hydrocarbon group” means a straight, branched or cyclic aliphatic hydrocarbon group, and the “aromatic hydrocarbon group” means a group which contains an aromatic ring and may have, if necessary, an aliphatic hydrocarbon substituent. The aliphatic or aromatic hydrocarbon group may contain, if necessary, a fluorine, oxy, hydroxy, amino, carbonyl or silyl group. In the present specification, if a polymer comprises two or more kinds of repeating units, those repeating units are copolymerized.
The copolymerization may be any of alternating copolymerization, random copolymerization, block copolymerization, graft copolymerization or a mixture thereof unless otherwise stated. Also, unless otherwise stated, the temperature in the present specification is represented in Celsius degrees. For example, “20° C.” means the temperature of 20 Celsius degrees. <Positive Type Photosensitive Siloxane Composition> The positive type photosensitive siloxane composition according to the present invention comprises: (I) a polysiloxane, (II) a diazonaphthoquinone derivative, (III) a particular additive, (IV) a solvent, and (V) an optional component. Those components are individually described below. [(I) Polysiloxane] The term “polysiloxane” generally means a polymer having Si—O—Si bonds (siloxane bonds) as the main chain. In the present specification, the polysiloxane includes a silsesquioxane polymer represented by the formula (RSiO1.5)n. The polysiloxane according to the present invention comprises a repeating unit represented by the following formula (Ia): In the above formula, R1 is hydrogen, a mono- to trivalent saturated or unsaturated straight, branched or cyclic aliphatic hydrocarbon group of C1 to 30, or a mono- to trivalent aromatic hydrocarbon group of C6 to 30. In the aliphatic or aromatic hydrocarbon group, one or more methylenes are substituted with oxy, imido or carbonyl, or unsubstituted; one or more hydrogens are substituted with fluorine, hydroxy or alkoxy, or unsubstituted; and one or more carbons are substituted with silicon, or unsubstituted. If R1 is a di- or trivalent group, R1 links Si atoms contained in the plural repeating units.
Examples of a monovalent group adoptable as R1 include: (i) an alkyl group, such as methyl, ethyl, propyl, butyl, pentyl, hexyl, heptyl, octyl or decyl; (ii) an aryl group, such as phenyl, tolyl or benzyl; (iii) a fluoroalkyl group, such as trifluoromethyl, 2,2,2-trifluoroethyl or 3,3,3-trifluoropropyl; (iv) a fluoroaryl group; (v) a cycloalkyl group, such as cyclohexyl; (vi) a nitrogen-containing group having an amino or imido structure, such as glycidyl, isocyanate or amino; and (vii) an oxygen-containing group having an epoxy, acryloyl or methacryloyl structure, such as glycidyl. Preferred are methyl, ethyl, propyl, butyl, pentyl, hexyl, phenyl, tolyl, glycidyl and isocyanate. As the fluoroalkyl group, a pentafluoroalkyl group is preferred. Particularly preferred are trifluoromethyl and pentafluoroethyl. The compound in which R1 is methyl is preferred because the starting materials thereof are easily available and further because the resultant cured film has high hardness and high chemical resistance. If R1 is phenyl, the polysiloxane has such high solubility in the solvent that the resultant cured film hardly suffers from cracks. Accordingly, phenyl is also preferred. Further, R1 preferably has hydroxy, glycidyl, isocyanate or amino because those groups improve adhesion between the cured film and the substrate. Preferred examples of the di- or trivalent group adoptable as R1 include groups containing an alkylene, arylene, cycloalkylene ring, piperidine ring, pyrrolidine ring or isocyanurate ring. If necessary, the polysiloxane according to the present invention may further comprise a repeating unit represented by the following formula (Ib): The above polymer contains a silanol group at the terminal. The above polysiloxane can be produced by hydrolyzing and condensing the silane compound represented by the following formula (ia), if necessary in the presence of an acidic or basic catalyst. R1[Si(OR2)3]p (ia).
In the formula, p is 1 to 3; and R1 is hydrogen, a mono- to trivalent saturated or unsaturated straight, branched or cyclic aliphatic hydrocarbon group of C1 to 30, or a mono- to trivalent aromatic hydrocarbon group of C6 to 30. In the aliphatic or aromatic hydrocarbon group, one or more methylenes are substituted with oxy, imido or carbonyl, or unsubstituted; one or more hydrogens are substituted with fluorine, hydroxy or alkoxy, or unsubstituted; and one or more carbons are substituted with silicon, or unsubstituted. Further, R2 is an alkyl group of C1 to 10. If the silane compound of the formula (ia) is adopted, the polysiloxane can be so produced as to consist of only the repeating unit of the formula (Ia). However, the compound of the formula (ia) can be used in combination with the silane compound represented by the following formula (ib) to produce another polysiloxane, which contains the repeating units of the formulas (Ia) and (Ib). Si(OR2)4 (ib) Here, it is possible to employ two or more kinds of the silane compound (ia) in combination with two or more kinds of the silane compound (ib). If the starting material mixture for producing the polysiloxane contains the silane compound (ib) in a large amount, it may be deposited or the formed coating film may deteriorate in photosensitivity. Accordingly, the blending ratio of the silane compound (ib) in the mixture is preferably 40 mol % or less, further preferably 20 mol % or less, based on the total moles of silane compounds used as the starting materials for producing the polysiloxane. The polysiloxane normally has a weight average molecular weight of 500 to 25000 inclusive. However, in view of solubility in an organic solvent and an alkali developer, the weight average molecular weight is preferably 1000 to 20000 inclusive. Here, the molecular weight is represented in terms of polystyrene-reduced value, and can be measured by gel permeation chromatography based on polystyrene.
The polysiloxane according to the present invention has an alkali dissolution rate (hereinafter often referred to as “ADR”; described later in detail) which varies according to the thickness of the formed film, to the development conditions, and to the kind and the amount of the photosensitive agent incorporated in the composition. However, for example, if having a thickness of 0.1 to 10 μm (1000 to 100000 Å), the formed film preferably has a dissolution rate of 50 to 5000 Å/second in a 2.38 wt % aqueous solution of tetramethylammonium hydroxide (TMAH). In the present invention, for the purpose of controlling the photosensitivity, a polysiloxane having a higher ADR than usual is preferably used in combination with the particular additive described later so that the exposed area and the unexposed area may be very different in solubility. A polysiloxane having a high ADR generally forms a pattern which has high photosensitivity according to the high ADR but which tends to have a poor remaining film ratio after development. However, in the present invention, since the polysiloxane having a high ADR is used in combination with the particular additive, the remaining film ratio can be improved while the photosensitivity is kept high. It is preferred that the polysiloxane has a high ADR since the effect of the invention can then be fully obtained. For example, the photosensitivity can be improved by 10% or more. The composition according to the present invention may comprise a combination of two or more kinds of polysiloxanes different, for example, in ADR or in average molecular weight. Different polysiloxanes can be produced by changing conditions such as the catalyst, reaction temperature, reaction time and polymer. If polysiloxanes different in ADR are employed in combination, it becomes possible to reduce pattern reflow and undissolved residues left after development and thereby to improve pattern stability.
The polysiloxane is, for example, (M): a polysiloxane which, after being prebaked, forms a film soluble in a 2.38 wt % TMAH aqueous solution at a dissolution rate of 200 to 3000 Å/second. If necessary, it may be mixed with (L): a polysiloxane which, after being prebaked, forms a film soluble in a 5 wt % TMAH aqueous solution at a dissolution rate of 1000 Å/second or less, or with (H): a polysiloxane which, after being prebaked, forms a film soluble in a 2.38 wt % TMAH aqueous solution at a dissolution rate of 4000 Å/second or more, to prepare a composition having a desired dissolution rate. The polysiloxanes (M), (H) and (L) individually have weight average molecular weights in the range described above. In order to enlarge the aforementioned difference in solubility, it is possible to control the blending ratio of two polysiloxanes having different ADRs. The polysiloxane adopted in the present invention has a branched structure because the compound (ia) or (ib) is employed as the starting material. If necessary, those starting materials can be used in combination with a difunctional silane compound so that the resultant polysiloxane may partly have a straight-chain structure. However, if high heat resistance is required for the intended use, the polysiloxane preferably contains the straight-chain structure only in a small amount. Specifically, the straight-chain structure derived from the difunctional silane compound is contained preferably in an amount of 30 mol % or less based on the whole polysiloxane structure. (Measurement and Calculation of Alkali Dissolution Rate (ADR)) The alkali dissolution rates of the polysiloxanes and of mixtures thereof are measured and calculated in the following manner, where a TMAH aqueous solution is adopted as the alkali solution. First, the polysiloxane is diluted with propylene glycol monomethyl ether acetate (hereinafter referred to as "PGMEA") to 35 wt %, and stirred and dissolved with a stirrer for 1 hour at room temperature.
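The arithmetic applied at the end of this measurement procedure, initial thickness divided by time-to-clear for films that dissolve completely, thickness loss over a fixed immersion for slowly dissolving films, and a five-run average, can be sketched as follows (function names are illustrative, not from the source):

```python
def adr_fast(initial_thickness_angstrom, time_to_clear_s):
    """Dissolution rate when the film dissolves completely:
    initial thickness divided by the time until it disappears."""
    return initial_thickness_angstrom / time_to_clear_s


def adr_slow(thickness_before_angstrom, thickness_after_angstrom,
             immersion_time_s):
    """Dissolution rate for a slowly dissolving film: thickness change
    across a fixed immersion, divided by the immersion time."""
    return (thickness_before_angstrom - thickness_after_angstrom) / immersion_time_s


def averaged_adr(runs):
    """The procedure repeats the measurement five times and averages."""
    return sum(runs) / len(runs)
```

For example, a 2 μm (20000 Å) prebaked film that clears in about 16.7 seconds gives roughly 1200 Å/second, the rate reported for polysiloxane (M) in Synthesis Example 1.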
In a clean-room under an atmosphere of temperature: 23.0±0.5° C. and humidity: 50±5.0%, the prepared polysiloxane solution is then dropped with a pipet in an amount of 1 cc onto a 4-inch silicon wafer of 525 μm thickness at the center area, and spin-coated to form a coating film of 2±0.1 μm thickness. Thereafter, the coating film is heated for 90 seconds on a hot-plate at 100° C. to remove the solvent. The thickness of the coating film is then measured with a spectro-ellipsometer (manufactured by J.A. Woollam). Subsequently, the silicon wafer covered with the coating film is placed in a 6 inch-diameter glass petri dish filled with 100 ml of a TMAH aqueous solution of predetermined concentration at 23.0±0.1° C., and left to be immersed. The time it takes for the coating film to disappear is measured. The dissolution rate is obtained by dividing the initial thickness of the coating film by the time it takes for the coating film to dissolve and disappear in the area from the wafer edge to 10-mm inside. Otherwise, if the dissolution rate is extremely slow, the wafer is immersed in the TMAH aqueous solution for a predetermined time and then heated for 5 minutes on a hot-plate at 200° C. to remove water soaked in the coating film during the measurement of dissolution rate, and thereafter the thickness of the coating film is measured. The thickness change between before and after the immersion is divided by the immersing time to obtain the dissolution rate. The measurement is repeated five times and the obtained values are averaged to determine the dissolution rate of the polysiloxane. [(II) Diazonaphthoquinone Derivative] The siloxane composition according to the present invention contains a diazonaphthoquinone derivative as a photosensitive agent. Since the composition contains the photosensitive agent, it is possible to fabricate a patterned cured film by exposure and development and hence it is unnecessary to pattern the film by use of dry etching or the like. 
Accordingly, the composition of the invention has the advantage of reducing damage to the circuit or the elements during a process of manufacturing a device. The positive type photosensitive siloxane composition of the present invention comprises a diazonaphthoquinone derivative serving as a photosensitive agent. This composition forms a positive type photosensitive layer, in which the portion in the exposed area becomes soluble in an alkali developer and hence is removed by development. The diazonaphthoquinone derivative functioning as a photosensitive agent in the present invention is a compound in which a naphthoquinone diazide sulfonic acid is ester-bonded with a phenolic hydroxy-containing compound. There are no particular restrictions on the structure thereof, but the derivative is preferably an ester compound formed by esterification of a compound having one or more phenolic hydroxy groups. Examples of the naphthoquinone diazide sulfonic acid include: 4-naphthoquinone diazide sulfonic acid and 5-naphthoquinone diazide sulfonic acid. Because it has an absorption band in the i-line region (wavelength: 365 nm), 4-naphthoquinone diazide sulfonate is suitable for i-line exposure. On the other hand, 5-naphthoquinone diazide sulfonate is suitable for exposure in a wide wavelength range because it absorbs light in a wide wavelength region. Accordingly, it is preferred to select 4-naphthoquinone diazide sulfonate or 5-naphthoquinone diazide sulfonate according to the exposure wavelength. It is also possible to use both 4-naphthoquinone diazide sulfonate and 5-naphthoquinone diazide sulfonate in a mixture. There are no particular restrictions on the phenolic hydroxy-containing compound. Examples thereof include: bisphenol A, BisP-AF, BisOTBP-A, Bis26B-A, BisP-PR, BisP-LV, BisP-OP, BisP-NO, BisP-DE, BisP-AP, BisOTBP-AP, TrisP-HAP, BisP-DP, TrisP-PA, BisOTBP-Z, BisP-FL, TekP-4HBP, TekP-4HBPA, and TrisP-TC ([trademark], manufactured by Honshu Chemical Industry Co., Ltd.).
The optimal amount of the diazonaphthoquinone derivative depends on the esterification ratio of the naphthoquinone diazide sulfonic acid, on properties of the adopted polysiloxane, on the required photosensitivity and on the required dissolution contrast between the exposed and unexposed areas. However, it is preferably 1 to 20 weight parts, more preferably 3 to 15 weight parts, based on 100 weight parts of the polysiloxane. If the amount is 1 weight part or more, the dissolution contrast between the exposed and unexposed areas is high enough to obtain favorable photosensitivity. For realizing more favorable dissolution contrast, the amount is preferably 5 weight parts or more. On the other hand, the less the diazonaphthoquinone derivative is contained, the more the resultant cured film is improved in colorless transparency and hence the higher transmittance the film has, which is preferred. [(III) Particular Additive] The composition according to the present invention contains a particular additive. The particular additive (III) is considered to be a compound having such an interaction with the polysiloxane as to decrease the ADR of the polysiloxane. Here, the "interaction" means an intermolecular force, such as a hydrogen bond, an ionic bond or a dipole interaction. This intermolecular interaction is caused by a particular nitrogen-containing structure. The additive is a compound having a >N—C(═O)— or >N—C(═S)— structure. On the other hand, in the photolithographic process for forming a pattern, the additive has a function of increasing solubility of the polysiloxane in the exposed area. As a result, a high contrast can be obtained without increasing the photosensitive agent. In a coating film formed from the composition of the present invention, nitrogen contained in the >N—C(═O)— or >N—C(═S)— structure is considered to interact with oxygen in the polysiloxane and consequently to form a weak bond.
Meanwhile, in a coating film formed from a known positive type photosensitive composition containing diazonaphthoquinone, the diazonaphthoquinone serves as a dissolution inhibitor in the unexposed area. The particular additive according to the present invention is considered to assist the diazonaphthoquinone in inhibiting dissolution. Specifically, the interaction between the particular additive and the polysiloxane is so weak as to be cancelled by acid generated in the coating film as a result of exposure, and hence the difference in polysiloxane dissolution rate between the exposed area and the unexposed area is enlarged. The contrast of the resultant pattern is thus thought to be enhanced. In this way, the particular additive according to the present invention is presumed to assist the photosensitive agent in controlling the photosensitivity. As described above, it is preferred that the interaction between the particular additive and the polysiloxane exists until the composition is exposed and is lost by acid generated by light after the composition is exposed. In order to realize the preferred interaction, the particular additive preferably has a particular structure. Specifically, the additive (III) necessarily has a >N—C(═O)— or >N—C(═S)— structure. Examples thereof include: >N—C(═O)—N< (urea structure), >N—C(═S)—N< (thiourea structure), >N—C(═O)—O— (urethane structure), >N—C(═O)—S— (thiourethane structure), >N—C(═O)— (amide structure), and >N—C(═S)— (thioamide structure). In addition to the above structure, the additive (III) may contain any other structure as long as it does not impair the effect of the present invention. However, in order that the interaction between the additive (III) and the polysiloxane may bring about favorable effects, it is not preferred for the additive (III) to have a functional group or the like which forms a stronger bond with the polysiloxane than the particular nitrogen-containing structure does.
If such a functional group forms the stronger bond, the aimed interaction is so inhibited that dissolution is not promoted in the exposed area. As a result, the film may suffer from undissolved residues formed in the development process or from unevenness of development. Accordingly, the additive (III) preferably does not have a binding site or functional group capable of forming a stronger bond with the polysiloxane than the particular nitrogen-containing structure. Specifically, the additive (III) preferably contains neither an alkoxysilyl group nor a hydroxysilyl group. The interaction between the additive (III) and the polysiloxane may be influenced by substituents of the additive (III) or the like. Further, if the additive (III) has a bulky substituent, the substituent may interfere with the interaction between the >N—C(═O)— or >N—C(═S)— structure and the polysiloxane. Those points are preferably taken into account when the substituents in the additive (III) are selected. In the process for forming the cured film, the additive is left in the unexposed area of the coating film. In view of that, the additive (III) preferably decomposes or vaporizes at a low temperature, for example, at room temperature or at a temperature lower than the curing temperature of the coating film. Specifically, the additive according to the present invention is preferably represented by one of the following formulas (III-i) to (III-iv): In the formulas, Z is oxygen or sulfur; and each Ra is independently hydrogen or a straight, branched or cyclic alkyl group of C1 to 20. In the alkyl group, one or more methylenes may be substituted with oxy or carbonyl or may be unsubstituted, one or more hydrogens may be substituted with alkoxy or may be unsubstituted, and one or more carbons may be substituted with silicon or may be unsubstituted. Each Rb is independently hydrogen or an alkyl group of C1 to 4, provided that the two Rb groups in (III-ii) may form a cyclic structure; and n is an integer of 1 to 3.
The group Ra is preferably neither an alkoxysilyl group nor a hydroxysilyl group. Among the above, preferred is the additive represented by the formula (III-i). In the above formula, Z is preferably oxygen, and Ra is preferably an alkyl, silyl, alkylsilyl, alkylsilylalkyl or silylalkyl group, more preferably an alkyl or alkylsilylalkyl group of C3 to 10. In the formula (III-i), at least one of the two Rb groups is preferably hydrogen, and more preferably both are hydrogen. If at least one Rb is hydrogen, the effect of the present invention tends to be enhanced. Any of the formulas (III-i) to (III-iv) preferably contains no hydroxysilyl group. Specific examples of the additive (III) include: 1,3-di(trimethylsilyl)urea and 1,3-di(methylsilylmethyl)urea. Other preferred examples of the additive (III) represented by the formula (III-i) include: urea, thiourea, methylurea, ethylurea, 1-acetyl-2-thiourea, 1-methyl-2-thiourea, 1-ethyl-2-thiourea, 1,1-dimethylurea, 1,1-diethylurea, 1,3-dimethylurea, 1,3-diethylurea, 1,3-dimethyl-2-thiourea, 1,3-diethyl-2-thiourea, 1,3-dipropyl-2-thiourea, 1,3-diisopropyl-2-thiourea, 1,3-di-n-butyl-2-thiourea, 1,3-di-tert-butyl-2-thiourea, 1-(sec-butyl)-3-methylurea, 1-isobutyl-3-methylurea, 1-butyl-3-ethylurea, 1-ethyl-3-propylurea, 1-methyl-3-propyl-2-thiourea, ethyleneurea, propyleneurea, ethylenethiourea, 1,1,3-trimethylurea, 1,1,3-trimethylthiourea, N,N′-dimethylethyleneurea, N,N′-dimethylpropyleneurea, N,N,N′,N′-tetramethylurea, N,N-dimethyl-N′,N′-diethylurea, N,N-dimethyl-N′,N′-diphenylurea, and methoxymethylurea. Preferred examples of the additive (III) represented by the formula (III-ii) include: ethyl carbamate, tert-butyl carbamate, ethyl N-phenylcarbamate, ethyl N-dimethylphenylcarbamate, 2-(1-methylpropyl)phenyl N-methylcarbamate, and N,N′-methylenedicyclohexyl-ethyl dicarbamate.
Preferred examples of the additive (III) represented by the formula (III-iii) include: N,N-dimethylformamide, acetamide, N,N-dimethylacetamide, propionamide, benzamide, acetanilide, benzanilide, N-methylacetanilide, and N,N-dimethylthioformamide. The preferred amount of the additive (III) is determined according to the content of the >N—C(═O)— or >N—C(═S)— structures in the compound. Specifically, the additive (III) is preferably incorporated in such an amount that the content of the structures may be 0.1 to 5 weight parts, preferably 1.0 to 4.0 weight parts based on 100 weight parts of the polysiloxane in the composition. The composition containing the additive (III) in an amount of the above range tends to have high photosensitivity and tends to be capable of forming a pattern of high transparency. In the present invention, two or more kinds of the additives (III) may be used in combination. [(IV) Organic Solvent] The composition according to the present invention contains an organic solvent, which is selected from solvents capable of evenly dissolving or dispersing the components in the composition. 
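The dosing rule above counts the weight of the >N—C(═O)— or >N—C(═S)— structures themselves rather than the whole additive. A hedged sketch of that bookkeeping, assuming the structure weight is taken as the mass of the N + C + O fragment (about 42 g/mol; the patent does not state the convention) and using illustrative names:

```python
NCO_FRAGMENT_MW = 42.02  # g/mol for N + C + O; an assumed counting convention


def structure_content_parts(additive_parts, additive_mw,
                            structures_per_molecule,
                            structure_mw=NCO_FRAGMENT_MW):
    """Weight parts of >N-C(=O)- structure contributed per 100 weight
    parts of polysiloxane, given the additive loading in the same
    weight-part units."""
    mass_fraction = structures_per_molecule * structure_mw / additive_mw
    return additive_parts * mass_fraction


# Urea (MW ~60.06 g/mol, one such structure per molecule): 5 weight
# parts of urea carry about 3.5 weight parts of the structure, which
# falls inside the preferred 1.0 to 4.0 weight-part range.
```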
Examples of the organic solvent include: ethylene glycol monoalkyl ethers, such as, ethylene glycol monomethyl ether, ethylene glycol monoethyl ether, ethylene glycol monopropyl ether, and ethylene glycol monobutyl ether; diethylene glycol dialkyl ethers, such as, diethylene glycol dimethyl ether, diethylene glycol diethyl ether, diethylene glycol dipropyl ether, and diethylene glycol dibutyl ether; ethylene glycol alkyl ether acetates, such as, methyl cellosolve acetate and ethyl cellosolve acetate; propylene glycol monoalkyl ethers, such as, propylene glycol monomethyl ether (PGME) and propylene glycol monoethyl ether; propylene glycol alkyl ether acetates, such as, propylene glycol monomethyl ether acetate (PGMEA), propylene glycol monoethyl ether acetate, and propylene glycol monopropyl ether acetate; aromatic hydrocarbons, such as, benzene, toluene and xylene; ketones, such as, methyl ethyl ketone, acetone, methyl amyl ketone, methyl isobutyl ketone, and cyclohexanone; and alcohols, such as, isopropanol and propanediol. Those solvents can be used singly or in combination of two or more, and the amount thereof depends on the coating method and on the required thickness of the coating film. For example, in a spray coating process, the amount of the solvent is often 90 wt % or more based on the total weight of the polysiloxane and the optional components. However, when a slit-coating method is adopted in coating a large glass substrate for producing a display, the solvent amount is normally 50 wt % or more, preferably 60 wt % or more, but normally 90 wt % or less, preferably 85 wt % or less. [(V) Optional Component] The composition of the present invention may contain other optional components, if necessary. Examples of the optional components include curing promoters. As the curing promoters, compounds generating acid or base under exposure to light or heat are often adopted.
Photo acid-generators, photo base-generators, thermal acid-generators, thermal base-generators, photo thermal acid-generators and photo thermal base-generators can be exemplified. The photo thermal acid-generator or photo thermal base-generator may be a compound which changes its chemical structure under exposure to light without generating acid or base, and then generates acid or base by bond cleavage caused by heat. They are selected according to the polymerization or crosslinking reactions conducted in the process for producing the cured film. Here, the "light" is, for example, visible light, UV rays or IR rays. The curing promoter is preferably a compound generating acid or base under exposure to the UV rays employed for manufacturing thin film transistors. The amount of the curing promoter depends on the kind of the active substance released by decomposition of the curing promoter, on the amount of the released substance, on the required photosensitivity and on the required dissolution contrast between the exposed and unexposed areas. However, the amount is preferably 0.001 to 10 weight parts, more preferably 0.01 to 5 weight parts, based on 100 weight parts of the polysiloxane. If the amount is 0.001 weight part or more, the dissolution contrast between the exposed and unexposed areas is high enough to obtain a favorable effect of the curing promoter. On the other hand, if it is 10 weight parts or less, the formed film hardly suffers from cracks and is not colored by decomposition of the curing promoter, so that the coating film is improved in colorless transparency. Examples of the photo acid-generator include: diazomethane compounds, triazine compounds, sulfonic acid esters, diphenyliodonium salts, triphenylsulfonium salts, sulfonium salts, ammonium salts, phosphonium salts, and sulfonimide compounds.
Specific examples of the employable photo acid-generator include: 4-methoxyphenyldiphenylsulfonium hexafluorophosphonate, 4-methoxyphenyldiphenylsulfonium hexafluoroarsenate, 4-methoxyphenyldiphenylsulfonium methanesulfonate, 4-methoxyphenyldiphenylsulfonium trifluoroacetate, triphenylsulfonium tetrafluoroborate, triphenylsulfonium tetrakis(pentafluorophenyl)borate, triphenylsulfonium hexafluorophosphonate, triphenylsulfonium hexafluoroarsenate, 4-methoxyphenyldiphenylsulfonium-p-toluenesulfonate, 4-phenylthiophenyldiphenylsulfonium tetrafluoroborate, 4-phenylthiophenyldiphenylsulfonium hexafluorophosphonate, triphenylsulfonium methanesulfonate, triphenylsulfonium trifluoroacetate, triphenylsulfonium-p-toluenesulfonate, 4-methoxyphenyldiphenylsulfonium tetrafluoroborate, 4-phenylthiophenyldiphenylsulfonium hexafluoroarsenate, 4-phenylthiophenyldiphenylsulfonium-p-toluenesulfonate, N-(trifluoromethylsulfonyloxy)succinimide, N-(trifluoromethylsulfonyloxy)phthalimide, 5-norbornene-2,3-dicarboxyimidyl triflate, 5-norbornene-2,3-dicarboxyimidyl-p-toluenesulfonate, 4-phenylthiophenyldiphenylsulfonium trifluoromethanesulfonate, 4-phenylthiophenyldiphenylsulfonium trifluoroacetate, N-(trifluoromethylsulfonyloxy)diphenylmaleimide, N-(trifluoromethylsulfonyloxy)bicyclo[2.2.1]hept-5-ene-2,3-dicarboximide, N-(trifluoromethylsulfonyloxy)naphthylimide, and N-(nonafluorobutylsulfonyloxy)naphthylimide. Examples of the photo base-generator include: multi-substituted amido compounds having amido groups, lactams, imido compounds, and compounds containing the structures thereof. Further, also usable are ion-type photo base-generators, which contain anions, such as, amide anion, methide anion, borate anion, phosphate anion, sulfonate anion and carboxylate anion.
Examples of the photo thermal base-generator include compounds represented by the following formula (PBG): In the formula, x is an integer of 1 to 6 inclusive, and each of R1′ to R6′ is independently hydrogen, a halogen, hydroxy, mercapto, sulfide, silyl, silanol, nitro, nitroso, sulfino, sulfo, sulfonate, phosphino, phosphinyl, phosphono, phosphonato, amino, ammonium, an aliphatic hydrocarbon group of C1 to 20 which may have a substituent, an aromatic hydrocarbon group of C6 to 22 which may have a substituent, an alkoxy group of C1 to 20 which may have a substituent, or an aryloxy group of C6 to 20 which may have a substituent. Among the above, each of R1′ to R4′ is preferably independently hydrogen, hydroxy, an aliphatic hydrocarbon group of C1 to 6 or an alkoxy group of C1 to 6; and each of R5′ and R6′ is particularly preferably hydrogen. Two or more of R1′ to R4′ may be linked to form a cyclic structure, and the cyclic structure may contain a hetero atom. In the above formula, N is a constituting atom of a nitrogen-containing heterocyclic ring, which is a 3- to 10-membered ring. The nitrogen-containing heterocyclic ring may have one or more substituents other than the CxH2xOH shown in the formula (PBG), and may further have an aliphatic hydrocarbon group of C1 to 20, particularly of C1 to 6. Each of R1′ to R4′ is preferably selected according to the employed exposure wavelength. For use in a display device, preferred are alkoxy groups, nitro group and unsaturated hydrocarbon-containing functional groups, such as vinyl and alkynyl, which have a function of shifting the absorption wavelength to the g-, h- or i-line region. Among them, methoxy and ethoxy are particularly preferred. Specific examples are as follows: The photo thermal base-generator represented by the formula (PBG) is preferably used in the form of a hydrate or solvate. If the photo thermal base-generator is used in the form of an anhydrate, the effect often cannot be fully obtained.
Here, the "anhydrate" means a compound that is neither hydrated nor solvated. There are no particular restrictions on how to hydrate or solvate an anhydrate of the photo thermal base-generator, and known methods can be adopted. For example, the photo thermal base-generator anhydrate is added to water or a solvent under the condition where the amount of water or solvent is 10 times or more by weight that of the anhydrate, and then the solution is stirred for about 1 hour at room temperature or above. For forming the solvate, the solvent is preferably capable of both dissolving the photo thermal base-generator and being dissolved in water, and also preferably has a boiling point lower than that of water. Examples of the solvent include THF and alcohols of C6 or less. Subsequently, the excess solvent is distilled off from the obtained mixture with an evaporator, to obtain the hydrate or solvate. Whether or not the resultant product is hydrated or solvated can be verified by infrared (IR) absorption spectroscopy, by 1H-NMR or by thermogravimetry-differential thermal analysis (TG-DTA). In another way, the photo thermal base-generator in the form of an anhydrate may be mixed with water or solvent, stirred and then used directly without isolating the hydrate or solvate. The amount of water for hydration or of solvent for solvation is 0.1 mol or more, preferably 1 mol or more, based on 1 mol of the photo thermal base-generator represented by the formula (PBG). The curing promoter may be a thermal acid-generator or a thermal base-generator. Examples of the thermal acid-generator include: various aliphatic sulfonic acids and salts thereof; various aliphatic carboxylic acids, such as, citric acid, acetic acid and maleic acid, and salts thereof; various aromatic carboxylic acids, such as, benzoic acid and phthalic acid, and salts thereof; aromatic sulfonic acids and ammonium salts thereof; various amine salts; aromatic diazonium salts; and phosphonic acid and salts thereof.
Among those salts and esters capable of generating organic acids, salts of organic acids and organic bases are preferred, and further preferred are salts of sulfonic acids and organic bases. Examples of the preferred sulfonic acids include: p-toluenesulfonic acid, benzenesulfonic acid, p-dodecylbenzenesulfonic acid, 1,4-naphthalenedisulfonic acid, and methanesulfonic acid. Those thermal acid-generators can be used singly or in mixture. Examples of the thermal base-generator include: compounds generating bases, such as, imidazoles, tertiary amines, and quaternary ammoniums; and mixtures of those compounds. Examples of the generated bases include: imidazole derivatives, such as, N-(2-nitrobenzyloxycarbonyl)imidazole, N-(3-nitrobenzyloxycarbonyl)imidazole, N-(4-nitrobenzyloxycarbonyl)imidazole, N-(5-methyl-2-nitrobenzyloxycarbonyl)imidazole, and N-(4-chloro-2-nitrobenzyloxycarbonyl)imidazole; and 1,8-diazabicyclo[5.4.0]undecene-7. Those base-generators, as well as the acid-generators, can be used singly or in mixture. Examples of the optional components also include surfactants. The composition according to the present invention preferably contains a surfactant because the surfactant improves coating properties. The surfactants usable in the composition of the present invention are, for example, nonionic, anionic and amphoteric surfactants.
Examples of the nonionic surfactants include: polyoxyethylene alkyl ethers, such as, polyoxyethylene lauryl ether, polyoxyethylene oleyl ether and polyoxyethylene cetyl ether; polyoxyethylene fatty acid diethers; polyoxyethylene fatty acid monoethers; polyoxyethylene-polyoxypropylene block polymers; acetylene alcohols; acetylene alcohol derivatives, such as, polyethoxylates of acetylene alcohols; acetylene glycols; acetylene glycol derivatives, such as, polyethoxylates of acetylene glycols; fluorine-containing surfactants, such as, Fluorad ([trademark], manufactured by Sumitomo 3M Limited), MEGAFAC ([trademark], manufactured by DIC Corporation), and Surufuron ([trademark], manufactured by Asahi Glass Co., Ltd.); and organic siloxane surfactants, such as, KP341 ([trademark], manufactured by Shin-Etsu Chemical Co., Ltd.). Examples of the above acetylene alcohols and acetylene glycols include: 3-methyl-1-butyne-3-ol, 3-methyl-1-pentyne-3-ol, 3,6-dimethyl-4-octyne-3,6-diol, 2,4,7,9-tetramethyl-5-decyne-4,7-diol, 3,5-dimethyl-1-hexyne-3-ol, 2,5-dimethyl-3-hexyne-2,5-diol, and 2,5-dimethyl-2,5-hexanediol. Examples of the anionic surfactants include: ammonium salts and organic amine salts of alkyldiphenylether disulfonic acids, ammonium salts and organic amine salts of alkyldiphenylether sulfonic acids, ammonium salts and organic amine salts of alkylbenzenesulfonic acids, ammonium salts and organic amine salts of polyoxyethylenealkylether sulfuric acids, and ammonium salts and organic amine salts of alkylsulfuric acids. Further, examples of the amphoteric surfactants include 2-alkyl-N-carboxymethyl-N-hydroxyethyl imidazolium betaine and lauric acid amidopropyl hydroxy sulfone betaine. Those surfactants can be used singly or in combination of two or more. The amount thereof is normally 50 to 10000 ppm, preferably 100 to 5000 ppm, based on the total weight of the photosensitive siloxane composition.
<Cured Film and Electronic Device Comprising the Cured Film> The cured film according to the present invention can be produced by coating a substrate with the above-described positive type photosensitive siloxane composition and then curing the formed coating film. The coating film can be formed from the composition of the present invention by a known coating method, such as, immersion coating, roll coating, bar coating, brush coating, spray coating, doctor coating, flow coating, spin coating, or slit coating. Those are conventionally known as methods for applying a photosensitive siloxane composition. The substrate can also be appropriately selected from, for example, a silicon substrate, a glass substrate and a resin film. If the substrate is in the form of a film, gravure coating can be carried out. If desired, a drying step can be carried out independently after coating. Further, according to necessity, the coating step may be carried out repeatedly so as to form a coating film of desired thickness. After being formed from the photosensitive siloxane composition of the present invention, the coating film is preferably subjected to prebaking (preheating treatment) for the purposes of drying the film and of reducing the solvent remaining therein. The prebaking step is carried out at a temperature of generally 70 to 150° C., preferably 90 to 120° C., for 10 to 180 seconds, preferably 30 to 90 seconds, on a hot-plate, or for 1 to 30 minutes in a clean oven. Since the composition of the present invention is photosensitive, it can form a patterned cured film. The method for forming a pattern is explained below. In order to form a desired pattern, a coating film is formed from the composition of the present invention, then prebaked, and subsequently pattern-wise exposed to light. Examples of the light source include a high-pressure mercury lamp, a low-pressure mercury lamp, a metal halide lamp, a xenon lamp, a laser diode and an LED.
Light for the exposure is normally UV rays of g-line, h-line, i-line or the like. Except in the case of ultrafine fabrication of semiconductors and the like, it is common to use light of 360 to 430 nm (high-pressure mercury lamp) for patterning at several micrometers to several tens of micrometers. Particularly in producing a liquid crystal display, light of 430 nm is often used. The energy of the exposure light depends on the light source and the initial thickness of the coating film, but is generally 10 to 2000 mJ/cm2, preferably 20 to 1000 mJ/cm2. If the exposure energy is lower than 10 mJ/cm2, the composition decomposes insufficiently. On the other hand, if it is more than 2000 mJ/cm2, the coating film is exposed so excessively that the exposure may cause halation. In order that the coating film can be pattern-wise exposed to light, common photomasks are employable. Those photomasks are known to those skilled in the art. The exposure can be carried out under an ambient atmosphere (the normal atmosphere) or under a nitrogen atmosphere. If the cured film is intended to be formed on the whole surface of the substrate, the whole film surface is exposed to light. In the present invention, the term "pattern film" includes a film thus formed on the whole surface of the substrate. As a developer used in the development step, it is possible to adopt any developer employed in developing conventional photosensitive siloxane compositions. The developer is preferably an alkali developer, which is an aqueous solution of an alkaline compound, such as, tetraalkylammonium hydroxide, choline, alkali metal hydroxide, alkali metal metasilicate (hydrate), alkali metal phosphate (hydrate), ammonia, alkylamine, alkanolamine, or heterocyclic amine. A particularly preferred alkali developer is an aqueous solution of tetraalkylammonium hydroxide. Those alkali developers may further contain surfactants or water-soluble organic solvents, such as, methanol and ethanol, if necessary.
After being developed with an alkali developer, the film is normally washed with water. Subsequently, the film is normally subjected to entire surface exposure (flood exposure). The entire surface exposure photo-decomposes unreacted molecules of the diazonaphthoquinone derivative remaining in the film, and thereby improves light-transparency of the film. Accordingly, if the film is intended to be used as a transparent film, it is preferably subjected to entire surface exposure. If incorporated as the curing promoter, the photo acid- or base-generator receives light and releases acid or base, respectively, in the entire surface exposure step. If incorporated as the curing promoter, the photo thermal acid- or photo thermal base-generator receives light and changes its chemical structure. In the entire surface exposure, the whole film surface is exposed to light at an exposure dose of 100 to 2000 mJ/cm2 (in terms of exposure dose converted at 365 nm) by use of a UV-visible exposure unit, such as a PLA. After development, the pattern film is heated to cure it. The heating temperature is not particularly restricted as long as the film can be cured, but is normally 150 to 400° C., preferably 200 to 350° C. If it is lower than 150° C., the silanol groups tend to remain unreacted. The silanol groups generally have such polarity as to often induce high permittivity, and hence the film is preferably cured at a temperature of 200° C. or above if the permittivity is intended to be lowered. The cured film according to the present invention has high transparency. Specifically, the transmittance thereof at 400 nm is preferably 90% or more, further preferably 95% or more. The cured film thus produced can be advantageously used for various applications.
For example, it can be adopted as a planarization film, an interlayer insulating film or a transparent protective film employed in various devices such as flat panel displays (FPDs), and is also employable as an interlayer insulating film for low temperature polysilicon or as a buffer coating film for IC chips. Further, the cured product can be used as an optical device element.

EXAMPLES

The present invention will be further specifically explained by use of the following examples.

Synthesis Example 1 (Synthesis of Polysiloxane (M))

In a 2-L flask equipped with a stirrer, a thermometer and a condenser, 32.5 g of a 25 wt % TMAH aqueous solution, 800 ml of isopropyl alcohol (IPA) and 2.0 g of water were placed. Independently, 39.7 g of phenyltrimethoxysilane, 34.1 g of methyltrimethoxysilane and 7.6 g of tetramethoxysilane were mixed to prepare a mixed solution, which was then placed in a dropping funnel. The mixed solution was dropped into the flask at 10° C., and successively the obtained mixture was stirred at the same temperature for 3 hours. Subsequently, a 10% HCl aqueous solution was added to neutralize the mixture, and then 400 ml of toluene and 100 ml of water were added into the neutralized mixture, so that the mixture was separated into two layers. The organic layer was collected and condensed under reduced pressure to remove the solvent. To the obtained concentrate, PGMEA was added so that the solid content might be 40 wt %. The molecular weight (as a polystyrene-equivalent value) of the obtained polysiloxane (M) was measured to find the weight average molecular weight (Mw)=1800. Further, a silicon wafer was coated with the obtained resin solution so that the formed film might have a thickness of 2 μm after prebaking. Thereafter, the dissolution rate in a 2.38 wt % TMAH aqueous solution was measured and found to be 1200 Å/second. The reaction conditions were changed to synthesize the polysiloxanes (H) and (L).
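At a constant dissolution rate, the time for the developer to clear a film follows directly from the film thickness. A minimal sketch using the 2 μm prebaked-film thickness and the dissolution rates reported for the three polysiloxanes; the helper name is ours, and note that the rate for (L) was measured in a 5% rather than 2.38 wt % TMAH solution:

```python
# Time for a TMAH developer to clear a prebaked film, given the alkali
# dissolution rate (ADR) reported in the synthesis examples.

ANGSTROMS_PER_MICROMETER = 10_000

def clear_time_seconds(thickness_um: float, adr_angstrom_per_s: float) -> float:
    """Seconds to dissolve a film of the given thickness at a constant ADR."""
    return thickness_um * ANGSTROMS_PER_MICROMETER / adr_angstrom_per_s

# Polysiloxanes (M), (H), (L) with a 2 um prebaked film:
for name, adr in [("M", 1200), ("H", 10000), ("L", 300)]:
    print(f"polysiloxane ({name}): {clear_time_seconds(2.0, adr):.1f} s")
```

So (H) clears in about 2 s and (L) in about 67 s, which is consistent with (H) and (L) serving as the fast- and slow-dissolving components of the blends in Table 1.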
The polysiloxane (H) was found to have a weight average molecular weight (Mw) of 1500, and the dissolution rate thereof in a 2.38 wt % TMAH aqueous solution was found to be 10000 Å/second. The polysiloxane (L) was found to have a weight average molecular weight (Mw) of 2500, and the dissolution rate thereof in a 5% TMAH aqueous solution was found to be 300 Å/second.

Example 101 and Comparative Examples 101 to 104

Various additives were combined and dissolved in PGMEA to prepare siloxane compositions of Example 101 and Comparative examples 101 to 104. The ingredients of each composition are shown in Table 1. In the table, the value of each polysiloxane means a blending ratio by weight, and the amounts of the particular additive, the photosensitive agent and the curing promoter are shown in terms of weight parts based on 100 weight parts of the polysiloxanes in total. The additives are as follows:
diazonaphthoquinone derivative (DNQ): 4,4′-(1-(4-(1-(4-hydroxyphenol)-1-methylethyl)-phenyl)ethylidene)bisphenol modified with 2.0 mol of diazonaphthoquinone,
curing promoter: 1,8-naphthalimidyl triflate (NAI-105 [trademark], manufactured by Midori Kagaku Co., Ltd.),
surfactant: KF-53 ([trademark], manufactured by Shin-Etsu Chemical Co., Ltd.),
III-1: N,N′-bis(trimethylsilyl)urea,
III-R1: 1,1-di(2-phenoxyethoxy)cyclohexane, and
III-R2: N,N′-bis(3-trimethoxysilylbutyl)urea.
The photosensitivity was defined as the exposure energy at which the obtained composition could form a 1:1 line-and-space pattern of 3 μm after development. The remaining film ratio was defined as the ratio of the film thickness after development to that before development.
The pattern shape was evaluated and categorized into the following grades:
A: the pattern kept clear lines and the film reduction after development was minor,
B: the pattern was hollowed at the base or the film reduction after development was serious, and
C: the top or side surface of the pattern was roughened, and residues were left to make the pattern contrast unclear.
Further, cured films were produced in the following manner. A 4-inch silicon wafer was spin-coated with each composition to form a coating film of 2.5 μm thickness. The obtained film was prebaked for 90 seconds at 100° C. to evaporate the solvent. The dried film was then subjected to pattern-exposure at 120 to 160 mJ/cm2 by use of a g+h+i line mask aligner (PLA-501F [trademark], manufactured by Canon Inc.), thereafter subjected to paddle development for 60 seconds with a 2.38 wt % TMAH aqueous solution, and finally rinsed with pure water for 60 seconds. Further, the film was subjected to flood exposure at 1000 mJ/cm2 by use of the g+h+i line mask aligner, and then heated to cure at 230° C. for 30 minutes. The transmittances of the cured films thus obtained were individually measured at 400 nm with a MultiSpec-1500 ([trademark], manufactured by Shimadzu Corporation). The results are shown in Table 1.

TABLE 1

                                    Example    Comparative examples
                                    101        101      102         103      104
(I) polysiloxane      (H)           43         34       43          26       39
(blending ratio       (M)           26         26       26          26       26
by weight)            (L)           31         40       31          48       35
(II) DNQ                            8          8        8           8        8
(III) additive        compound      IIIb-1     —        III-R1      III-R2   —
                      amount        10         —        10          10       —
curing aid                          1          1        1           1        1
ADR of (I)                          1500       500      1500        100      1000
photosensitivity (mJ/cm2)           112        160      132         160      dissolved,
pattern shape                       A          A        B           A        no residual
remaining film ratio (%)            >99        >99      79          >99      film left
transmittance (%)                   96.4       96.6     dissolved,  96.3
                                                        clouded
content of nitrogen-containing      2.06       —        0           1.09     —
structure (phr)

In the table, the “content of nitrogen-containing structure” means the weight part of the >N—C(═O)— or >N—C(═S)— structures contained in the additive (III) based on 100 weight parts of the polysiloxane.
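The footnote's quantity can be reproduced arithmetically: the phr of the nitrogen-containing structure is the additive loading multiplied by the mass fraction of the >N—C(═O)— moiety in the additive molecule. A hedged worked check for III-1, N,N′-bis(trimethylsilyl)urea, at 10 weight parts; the molecular formula (C7H20N2OSi2) and atomic masses are our own computation, not stated in the patent, and one >N—C(═O)— unit is counted per urea molecule, which reproduces the tabulated 2.06:

```python
# Worked check of the Table 1 footnote: weight parts of >N-C(=O)- units in
# additive (III) per 100 weight parts of polysiloxane (phr).

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "Si": 28.086}

def formula_mass(formula: dict) -> float:
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

additive_phr = 10.0                                 # weight parts per 100 parts polysiloxane
mw_iii_1 = formula_mass({"C": 7, "H": 20, "N": 2, "O": 1, "Si": 2})  # ~204.4 g/mol
mw_moiety = formula_mass({"N": 1, "C": 1, "O": 1})  # one >N-C(=O)- unit, ~42.0 g/mol

content_phr = additive_phr * mw_moiety / mw_iii_1
print(f"{content_phr:.2f} phr")  # ~2.06 phr, matching the Example 101 column
```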
The coating film of Comparative example 102 was partly dissolved, and the surface thereof was clouded after development. The coating film of Comparative example 104 was completely dissolved, so that no film was left after development. From the obtained results, the composition according to the present invention was verified to realize high transparency, high photosensitivity and a high remaining film ratio after development.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following description of constituent elements in the present invention may be made based on representative embodiments of the present invention. However, the present invention is not limited to such embodiments. In describing a group (atomic group) in the present specification, a description having no indication about substitution or non-substitution includes both a group having a substituent and a group having no substituent. For example, “alkyl group” includes not only an alkyl group having no substituent (unsubstituted alkyl group) but also an alkyl group having a substituent (substituted alkyl group). In the present specification, unless otherwise specified, “exposure” includes not only exposure using light but also lithography with particle beams such as electron beams and ion beams. In addition, as light used for exposure, actinic rays or radiations such as a bright line spectrum of a mercury lamp, far ultraviolet rays typified by excimer laser, extreme ultraviolet rays (EUV light), X-rays, and electron beams are generally mentioned. In the present specification, a numerical range expressed using “to” means a range including the numerical values described before and after “to” as a lower limit value and an upper limit value. In the present specification, “(meth)acrylate” represents either or both of “acrylate” and “methacrylate”, “(meth)acryl” represents either or both of “acryl” and “methacryl”, and “(meth)acryloyl” represents either or both of “acryloyl” and “methacryloyl”. In the present specification, the term “step” includes not only an independent step but also a step whose intended action is achieved even though it cannot be clearly distinguished from other steps. In the present specification, a solid content is the mass percentage of the components other than a solvent with respect to the total mass of a composition.
In addition, the concentration of solid contents refers to a concentration at 25° C. unless otherwise stated. In the present specification, unless otherwise stated, the weight-average molecular weight (Mw) and the number-average molecular weight (Mn) are defined as polystyrene-equivalent values according to gel permeation chromatography (GPC measurement). In the present specification, the weight-average molecular weight (Mw) and the number-average molecular weight (Mn) can be obtained, for example, by using HLC-8220 (manufactured by Tosoh Corporation) and using, as columns, GUARD COLUMN HZ-L, TSKgel Super HZM-M, TSKgel Super HZ4000, TSKgel Super HZ3000, and TSKgel Super HZ2000 (manufactured by Tosoh Corporation). Unless otherwise stated, measurement is performed using tetrahydrofuran (THF) as an eluent. In addition, unless otherwise stated, detection is made using a detector at an ultraviolet ray (UV ray) wavelength of 254 nm.

Photosensitive Resin Composition

A photosensitive resin composition of the embodiment of the present invention contains a heterocyclic ring-containing polymer precursor selected from a polyimide precursor and a polybenzoxazole precursor, and a solvent. A solid content of the photosensitive resin composition has an amine value of 0.0002 to 0.0200 mmol/g. With such a constitution, it is possible to obtain a photosensitive resin composition in which the cyclization rate is fast and the storage stability over time is excellent. In the present invention, the solid content of the photosensitive resin composition has an amine value of 0.0002 mmol/g or more, preferably 0.0006 mmol/g or more, more preferably 0.0009 mmol/g or more, even more preferably 0.0020 mmol/g or more, still more preferably 0.0028 mmol/g or more. The solid content of the photosensitive resin composition has an amine value of 0.0200 mmol/g or less, preferably 0.0150 mmol/g or less, more preferably 0.0120 mmol/g or less, even more preferably 0.0100 mmol/g or less.
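An amine value in mmol/g is, in essence, the millimoles of acid consumed per gram of solid in an acid-base titration. The patent defers its exact measurement procedure to the examples, so the following is only a hedged illustration of the generic arithmetic; the titrant, concentration, and sample mass are invented for the sketch:

```python
# Generic amine-value arithmetic (mmol of basic nitrogen per gram of solid),
# as determined by acid titration. Illustration only; not the patent's
# specific measurement procedure.

def amine_value_mmol_per_g(titrant_ml: float, titrant_mol_per_l: float,
                           sample_g: float) -> float:
    """mmol of acid consumed per gram of sample = amine value."""
    return titrant_ml * titrant_mol_per_l / sample_g

# e.g. 5.0 g of solid consuming 0.50 mL of 0.01 mol/L acid titrant:
av = amine_value_mmol_per_g(0.50, 0.01, 5.0)
print(f"amine value = {av:.4f} mmol/g")   # 0.0010 mmol/g

# The claimed window for the solid content of the composition:
assert 0.0002 <= av <= 0.0200
```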
By setting the amine value to 0.0002 mmol/g or more, it is possible to facilitate the cyclization reaction during curing of the heterocyclic ring-containing polymer precursor. In contrast, by setting the amine value to 0.0200 mmol/g or less, the cyclization reaction is less likely to proceed over time, and it is possible to more effectively suppress precipitation of a solid. Although the cyclization rate and the storage stability over time are in a trade-off relationship, the present invention achieves both, which is surprising. The amine value of the solid content of the photosensitive resin composition according to the embodiment of the present invention is adjusted to the above range by the components contained in the photosensitive resin composition, and more preferably adjusted by at least the heterocyclic ring-containing polymer precursor. In the present invention, it is particularly preferable that 90% by mol or more (more preferably 95% by mol or more, and particularly preferably 99% by mol or more) of the amine value of the solid content of the photosensitive resin composition according to the embodiment of the present invention is derived from the heterocyclic ring-containing polymer precursor. The amine value of the solid content of the photosensitive resin composition is measured according to a method described in the examples described later. Until now, in many cases, the heterocyclic ring-containing precursor has been blended into photosensitive resin compositions, after being synthesized, without effective purification to remove impurities. However, the solid content of a photosensitive resin composition containing such a heterocyclic ring-containing polymer precursor generally has a very high amine value. This is caused by a residual base component, such as an unreacted diamine or a reaction product, remaining in a case where the heterocyclic ring-containing polymer precursor is synthesized by a conventional method.
Thus, it is conceivable to carefully purify the heterocyclic ring-containing polymer precursor after being synthesized. However, in a case where the heterocyclic ring-containing polymer precursor is carefully purified, then the amine value of the solid content of the photosensitive resin composition drops remarkably. This is because, in a case where a heterocyclic ring-containing polymer precursor is synthesized by a conventional method, a repeating unit derived from a carboxylic acid or a carboxylic acid anhydride is very often present at a terminal of the precursor. Therefore, until now, it is not easy to adjust the amine value of the solid content of the photosensitive resin composition, and no examination has been performed about the influence of the amine value of the solid content of the photosensitive resin composition. In the present invention, inventors have succeeded in adjusting the amine value of the solid content of the photosensitive resin composition within a predetermined range, by synthesizing the heterocyclic ring-containing polymer precursor while adjusting a reaction condition thereof, effectively purifying the obtained heterocyclic ring-containing polymer precursor to remove impurities, and blending the photosensitive resin composition. In addition, inventors have found that the photosensitive resin composition which meets the amine value above has outstanding characteristics for improvement of a cyclization rate or temporal stability. Furthermore, inventors have found that the photosensitive resin composition in which a cyclization rate is fast and temporal stability is excellent is obtained by using the photosensitive resin composition of which an amine value is within a predetermined range, irrespective of a producing method. 
Therefore, as mentioned above, as means to adjust the amine value of the solid content of the photosensitive resin composition, the following are exemplified: purifying the heterocyclic ring-containing polymer precursor; adjusting the amount of reaction solvent, the reaction temperature and the reaction time while synthesizing the heterocyclic ring-containing polymer precursor; and the like. Additionally, an amino group may be attached to a terminal of the heterocyclic ring-containing polymer precursor. In the present invention, the amine value of the solid content of the photosensitive resin composition is important, and the adjustment of the amine value may be performed by any means. The photosensitive resin composition according to the embodiment of the present invention preferably contains a heterocyclic ring-containing polymer precursor A, which does not contain an amino group (—NR2, where R is a hydrogen atom or an organic group, preferably a hydrogen atom or an alkyl group having 1 to 4 carbon atoms, more preferably a hydrogen atom or a methyl group, even more preferably a hydrogen atom) at a terminal group of a main chain, in a proportion of 80.0% to 99.7% by mol, and a heterocyclic ring-containing polymer precursor B, which contains the amino group at a terminal group of a main chain, in a total proportion of 0.3% to 20.0% by mol; more preferably, the composition contains the heterocyclic ring-containing polymer precursor A in a proportion of 85.0% to 99.0% by mol and the heterocyclic ring-containing polymer precursor B in a proportion of 1.0% to 15.0% by mol. The terminal group of a main chain refers to a group bonded to a terminal of a main chain skeleton which is a polyimide chain or polybenzoxazole chain, and is distinguished, for example, from a group that is bonded to a terminal of a repeating unit represented by Formula (1) or a repeating unit represented by Formula (2). The terminal group of a main chain may be a group composed only of an amino group or a group containing an amino group.
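The mol% of the amino-terminated precursor B relates directly to the amine value of the solid. A hedged back-of-envelope sketch, assuming exactly one terminal amino group per B chain, no other basic groups, and a number-average molecular weight picked arbitrarily for illustration (none of these numbers are from the patent):

```python
# Back-of-envelope: amine value contributed by terminal amino groups when a
# fraction of the precursor chains (precursor B) carry one amino terminus.

def amine_value_from_terminals(mol_fraction_b: float, mn_g_per_mol: float) -> float:
    """mmol of terminal amino groups per gram of polymer solid."""
    chains_mol_per_g = 1.0 / mn_g_per_mol        # total chains per gram
    return mol_fraction_b * chains_mol_per_g * 1000.0

# 10 mol% amino-terminated chains at an assumed Mn of 20,000 g/mol:
av = amine_value_from_terminals(mol_fraction_b=0.10, mn_g_per_mol=20_000)
print(f"estimated amine value = {av:.4f} mmol/g")   # 0.0050 mmol/g
assert 0.0002 <= av <= 0.0200                       # inside the claimed window
```

Under these assumptions, the preferred 0.3% to 20.0% by mol range for precursor B lands the terminal-amine contribution in the same order of magnitude as the claimed 0.0002 to 0.0200 mmol/g window.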
Heterocyclic Ring-Containing Polymer Precursor

A photosensitive resin composition according to the embodiment of the present invention contains at least one type of heterocyclic ring-containing polymer precursor. The heterocyclic ring-containing polymer precursor preferably includes a polyimide precursor or a polybenzoxazole precursor, more preferably a polyimide precursor, and even more preferably a polyimide precursor including a repeating unit represented by Formula (1) described later. The amine value of the heterocyclic ring-containing polymer precursor used in the present invention preferably has a lower limit value of 0.0010 mmol/g or more, more preferably 0.0030 mmol/g or more, and even more preferably 0.0040 mmol/g or more. The amine value of the heterocyclic ring-containing polymer precursor preferably has an upper limit value of 0.0300 mmol/g or less, more preferably 0.0200 mmol/g or less, and even more preferably 0.0150 mmol/g or less. Within such a range, the effect of the present invention tends to be exhibited more effectively. The amine value of the heterocyclic ring-containing polymer precursor is measured by a method described later in the examples.

Polyimide Precursor

The type and the like of the polyimide precursor used in the present invention are not particularly limited, and the polyimide precursor preferably contains a repeating unit represented by Formula (1). In Formula (1), A1 and A2 each independently represent an oxygen atom or NH, R111 represents a divalent organic group, R115 represents a tetravalent organic group, and R113 and R114 each independently represent a hydrogen atom or a monovalent organic group. In Formula (1), A1 and A2 are preferably an oxygen atom or NH, and more preferably an oxygen atom. R111 in Formula (1) represents a divalent organic group. As the divalent organic group, a linear or branched aliphatic group, a cyclic aliphatic group, and a group containing an aromatic group are exemplified.
The divalent organic group is preferably a linear aliphatic group having 2 to 20 carbon atoms, a branched aliphatic group having 3 to 20 carbon atoms, a cyclic aliphatic group having 3 to 20 carbon atoms, an aromatic group having 6 to 20 carbon atoms, or a group composed of a combination thereof, and is more preferably a group composed of an aromatic group having 6 to 20 carbon atoms. R111 is preferably derived from a diamine. As the diamine used for producing the polyimide precursor, a linear or branched aliphatic, cyclic aliphatic, or aromatic diamine, or the like is mentioned. For the diamine, only one type may be used, or two or more types may be used. Specifically, a diamine containing a linear aliphatic group having 2 to 20 carbon atoms, a branched or cyclic aliphatic group having 3 to 20 carbon atoms, an aromatic group having 6 to 20 carbon atoms, or a group composed of a combination thereof is preferable, and a diamine containing a group composed of an aromatic group having 6 to 20 carbon atoms is more preferable. Examples of the aromatic group include the following. In the formulas, A is preferably a single bond, or a group selected from an aliphatic hydrocarbon group having 1 to 10 carbon atoms which may be substituted with a fluorine atom, —O—, —C(═O)—, —S—, —S(═O)2—, —NHCO—, and a combination thereof, more preferably a single bond, or a group selected from an alkylene group having 1 to 3 carbon atoms which may be substituted with a fluorine atom, —O—, —C(═O)—, —S—, and —SO2—, and even more preferably a divalent group selected from the group consisting of —CH2—, —O—, —S—, —SO2—, —C(CF3)2—, and —C(CH3)2—.
As diamine, specifically, at least one diamine selected from 1,2-diaminoethane, 1,2-diaminopropane, 1,3-diaminopropane, 1,4-diaminobutane, and 1,6-diaminohexane; 1,2- or 1,3-diaminocyclopentane, 1,2-, 1,3-, or 1,4-diaminocyclohexane, 1,2-, 1,3-, or 1,4-bis(aminomethyl)cyclohexane, bis-(4-aminocyclohexyl)methane, bis-(3-aminocyclohexyl)methane, 4,4′-diamino-3,3′-dimethylcyclohexylmethane, and isophorone diamine; meta- and paraphenylene diamine, diaminotoluene, 4,4′- and 3,3′-diaminobiphenyl, 4,4′-diaminodiphenyl ether, 3,3-diaminodiphenyl ether, 4,4′- and 3,3′-diaminodiphenylmethane, 4,4′- and 3,3′-diaminodiphenyl sulfone, 4,4′- and 3,3′-diaminodiphenyl sulfide, 4,4′- and 3,3′-diaminobenzophenone, 3,3′-dimethyl-4,4′-diaminobiphenyl, 2,2′-dimethyl-4,4′-diaminobiphenyl, 3,3′-dimethoxy-4,4′-diaminobiphenyl, 2,2-bis(4-aminophenyl)propane, 2,2-bis(4-aminophenyl)hexafluoropropane, 2,2-bis(3-hydroxy-4-aminophenyl)propane, 2,2-bis(3-hydroxy-4-aminophenyl)hexafluoropropane, 2,2-bis(3-amino-4-hydroxyphenyl)propane, 2,2-bis(3-amino-4-hydroxyphenyl) hexafluoropropane, bis(3-amino-4-hydroxyphenyl)sulfone, bis(4-amino-3-hydroxyphenyl)sulfone, 4,4′-diaminoparaterphenyl, 4,4-bis(4-aminophenoxy)biphenyl, bis[4-(4-aminophenoxy)phenyl]sulfone, bis[4-(3-aminophenoxy)phenyl]sulfone, bis[4-(2-aminophenoxy)phenyl]sulfone, 1,4-bis(4-aminophenoxy)benzene, 9,10-bis(4-aminophenyl)anthracene, 3,3′-dimethyl-4,4′-diaminodiphenyl sulfone, 1,3-bis(4-aminophenoxy)benzene, 1,3-bis(3-aminophenoxy)benzene, 1,3-bis(4-aminophenoxy)benzene, 3,3′-diethyl-4,4′-diaminodiphenylmethane, 3,3′-dimethyl-4,4′-diaminodiphenylmethane, 4,4′-diaminooctafluorobiphenyl, 2,2-bis[4-(4-aminophenoxy)phenyl]propane, 2,2-bis[4-(4-aminophenoxy)phenyl]hexafluoropropane, 9,9-bis(4-aminophenyl)-10-hydroanthracene, 3,3′,4,4′-tetraaminobiphenyl, 3,3′,4,4′-tetraaminodiphenyl ether, 1,4-diaminoanthraquinone, 1,5-diaminoanthraquinone, 3,3-dihydroxy-4,4′-diaminobiphenyl, 9,9′-bis(4-aminophenyl)fluorene, 
4,4′-dimethyl-3,3′-diaminodiphenyl sulfone, 3,3′,5,5′-tetramethyl-4,4′-diaminodiphenylmethane, 2,4- and 2,5-diaminocumene, 2,5-dimethyl-paraphenylene diamine, acetoguanamine, 2,3,5,6-tetramethyl-paraphenylene diamine, 2,4,6-trimethyl-metaphenylene diamine, bis(3-aminopropyl)tetramethyldisiloxane, 2,7-diaminofluorene, 2,5-diaminopyridine, 1,2-bis(4-aminophenyl)ethane, diaminobenzanilide, esters of diaminobenzoic acid, 1,5-diaminonaphthalene, diaminobenzotrifluoride, 1,3-bis(4-aminophenyl)hexafluoropropane, 1,4-bis(4-aminophenyl)octafluorobutane, 1,5-bis(4-aminophenyl)decafluoropentane, 1,7-bis(4-aminophenyl)tetradecafluoroheptane, 2,2-bis[4-(3-aminophenoxy)phenyl]hexafluoropropane, 2,2-bis[4-(2-aminophenoxy)phenyl]hexafluoropropane, 2,2-bis[4-(4-aminophenoxy)-3,5-dimethylphenyl]hexafluoropropane, 2,2-bis[4-(4-amino phenoxy)-3,5-bis(trifluoromethyl)phenyl]hexafluoropropane, parabis(4-amino-2-trifluoromethylphenoxy)benzene, 4,4′-bis(4-amino-2-trifluoromethylphenoxy)biphenyl, 4,4′-bis(4-amino-3-trifluoromethylphenoxy)biphenyl, 4,4′-bis(4-amino-2-trifluoromethylphenoxy)diphenyl sulfone, 4,4′-bis(3-amino-5-trifluoromethylphenoxy)diphenyl sulfone, 2,2-bis[4-(4-amino-3-trifluoromethylphenoxy)phenyl]hexafluoropropane, 3,3′,5,5′-tetramethyl-4,4′-diaminobiphenyl, 4,4′-diamino-2,2′-bis(trifluoromethyl)biphenyl, 2,2′,5,5′,6,6′-hexafluorotolidine, or 4,4′-diaminoquaterphenyl is mentioned. In addition, diamines (DA-1) to (DA-18) as shown below are also preferable. In addition, as preferable examples of the diamine, diamines having at least two alkylene glycol units in a main chain are also mentioned. Diamines containing two or more in total of either or both of ethylene glycol chain and propylene glycol chain in one molecule are preferable, and diamines containing no aromatic ring are more preferable. 
As specific examples thereof, JEFFAMINE (registered trademark) KH-511, JEFFAMINE (registered trademark) ED-600, JEFFAMINE (registered trademark) ED-900, JEFFAMINE (registered trademark) ED-2003, JEFFAMINE (registered trademark) EDR-148, JEFFAMINE (registered trademark) EDR-176, D-200, D-400, D-2000, D-4000 (all trade names, manufactured by Huntsman Corporation), 1-(2-(2-(2-aminopropoxy)ethoxy)propoxy)propane-2-amine, and 1-(1-(1-(2-aminopropoxy)propan-2-yl)oxy)propane-2-amine are mentioned, but not limited thereto. Structures of JEFFAMINE (registered trademark) KH-511, JEFFAMINE (registered trademark) ED-600, JEFFAMINE (registered trademark) ED-900, JEFFAMINE (registered trademark) ED-2003, JEFFAMINE (registered trademark) EDR-148, and JEFFAMINE (registered trademark) EDR-176 are shown below. In the above, x, y, z are average values. R111is preferably represented by —Ar-L-Ar— from the viewpoint of flexibility of an obtained cured film. However, Ar's are each independently an aromatic hydrocarbon group, and L is an aliphatic hydrocarbon group having 1 to 10 carbon atoms which may be substituted with a fluorine atom, —O—, —CO—, —S—, —SO2— or —NHCO—, and a group consisting of a combination of the two or more thereof. Ar is preferably a phenylene group, and L is more preferably an aliphatic hydrocarbon group having 1 or 2 carbon atoms which may be substituted with a fluorine atom, —O—, —CO—, —S—, or —SO2—. The aliphatic hydrocarbon group herein is preferably an alkylene group. From the viewpoint of i-ray transmittance, R111is preferably a divalent organic group represented by Formula (51) or Formula (61) below. In particular, from the viewpoint of i-ray transmittance and ease of availability, a divalent organic group represented by Formula (61) is more preferable. In Formula (51), R10to R17each independently represent a hydrogen atom, a fluorine atom, or a monovalent organic group, and at least one of R10, . . . 
, or R17 represents a fluorine atom, a methyl group, a fluoromethyl group, a difluoromethyl group, or a trifluoromethyl group. As a monovalent organic group in R10 to R17, an unsubstituted alkyl group having 1 to 10 (preferably 1 to 6) carbon atoms, a fluorinated alkyl group having 1 to 10 (preferably 1 to 6) carbon atoms, and the like are mentioned. In Formula (61), R18 and R19 each independently represent a fluorine atom, a fluoromethyl group, a difluoromethyl group, or a trifluoromethyl group. As a diamine compound that forms a structure of Formula (51) or (61), dimethyl-4,4′-diaminobiphenyl, 2,2′-bis(trifluoromethyl)-4,4′-diaminobiphenyl, 2,2′-bis(fluoro)-4,4′-diaminobiphenyl, 4,4′-diaminooctafluorobiphenyl, and the like are mentioned. One type of these may be used, or two or more types thereof may be used in combination. R115 in Formula (1) represents a tetravalent organic group. As the tetravalent organic group, a tetravalent organic group containing an aromatic ring is preferable, and a group represented by Formula (5) or Formula (6) is more preferable. In Formula (5), R112 is preferably a single bond, or a group selected from an aliphatic hydrocarbon group having 1 to 10 carbon atoms which may be substituted with a fluorine atom, —O—, —CO—, —S—, —SO2—, —NHCO—, and a combination thereof, more preferably a single bond, or a group selected from an alkylene group having 1 to 3 carbon atoms which may be substituted with a fluorine atom, —O—, —CO—, —S—, and —SO2—, and even more preferably a divalent group selected from the group consisting of —CH2—, —C(CF3)2—, —C(CH3)2—, —O—, —CO—, —S—, and —SO2—. As the tetravalent organic group represented by R115 in Formula (1), specifically, a tetracarboxylic acid residue that remains after removing an acid dianhydride group from a tetracarboxylic acid dianhydride is mentioned. For the tetracarboxylic acid dianhydride, only one type may be used, or two or more types may be used.
The tetracarboxylic acid dianhydride is preferably a compound represented by Formula (O). In Formula (O), R115represents a tetravalent organic group. R115has the same meaning as R115in Formula (1). As specific examples of the tetracarboxylic acid dianhydride, at least one type selected from pyromellitic acid, pyromellitic acid dianhydride (PMDA), 3,3′,4,4′-biphenyltetracarboxylic acid dianhydride, 3,3′,4,4′-diphenylsulfide tetracarboxylic acid dianhydride, 3,3′,4,4′-diphenylsulfone tetracarboxylic acid dianhydride, 3,3′,4,4′-benzophenone tetracarboxylic acid dianhydride, 3,3′,4,4′-diphenylmethane tetracarboxylic acid dianhydride, 2,2′,3,3′-diphenylmethane tetracarboxylic acid dianhydride, 2,3,3′,4′-biphenyltetracarboxylic acid dianhydride, 2,3,3′,4′-benzophenone tetracarboxylic acid dianhydride, 4,4′-oxydiphthalic acid dianhydride, 2,3,6,7-naphthalene tetracarboxylic acid dianhydride, 1,4,5,7-naphthalene tetracarboxylic acid dianhydride, 2,2-bis(3,4-dicarboxyphenyl)propane dianhydride, 2,2-bis(2,3-dicarboxyphenyl)propane dianhydride, 2,2-bis(3,4-dicarboxyphenyl) hexafluoropropane dianhydride, 1,3-diphenylhexafluoropropane-3,3,4,4-tetracarboxylic acid dianhydride, 1,4,5,6-naphthalene tetracarboxylic acid dianhydride, 2,2′,3,3′-diphenyl tetracarboxylic acid dianhydride, 3,4,9,10-perylene tetracarboxylic acid dianhydride, 1,2,4,5-naphthalene tetracarboxylic acid dianhydride, 1,4,5,8-naphthalene tetracarboxylic acid dianhydride, 1,8,9,10-phenanthrene tetracarboxylic acid dianhydride, 1,1-bis(2,3-dicarboxyphenyl)ethane dianhydride, 1,1-bis(3,4-dicarboxyphenyl)ethane dianhydride, 1,2,3,4-benzene tetracarboxylic acid dianhydride, or alkyl derivatives having 1 to 6 carbon atoms and/or alkoxy derivatives having 1 to 6 carbon atoms thereof are exemplified. In addition, tetracarboxylic acid dianhydrides (DAA-1) to (DAA-5) as shown below are also mentioned as preferable examples. R113and R114each independently represent a hydrogen atom or a monovalent organic group. 
It is preferable that at least one of R113 or R114 contains a radically polymerizable group, and it is more preferable that both R113 and R114 contain a radically polymerizable group. The radically polymerizable group is a group capable of undergoing a crosslinking reaction by the action of a radical, and preferable examples thereof include a group having an ethylenically unsaturated bond. As the group having an ethylenically unsaturated bond, a vinyl group, a (meth)allyl group, a group represented by Formula (III), and the like are mentioned. In Formula (III), R200 represents a hydrogen atom or a methyl group, with a methyl group being more preferable. In Formula (III), R201 represents an alkylene group having 2 to 12 carbon atoms, —CH2CH(OH)CH2—, or a polyoxyalkylene group having 4 to 30 carbon atoms. Suitable examples of R201 include an ethylene group, a propylene group, a trimethylene group, a tetramethylene group, a 1,2-butanediyl group, a 1,3-butanediyl group, a pentamethylene group, a hexamethylene group, an octamethylene group, a dodecamethylene group, and —CH2CH(OH)CH2—, with an ethylene group, a propylene group, a trimethylene group, and —CH2CH(OH)CH2— being more preferable. Particularly preferably, R200 is a methyl group, and R201 is an ethylene group. As the monovalent organic group represented by R113 or R114, a substituent that improves solubility in a developer is preferably used. In a case where R113 or R114 is a monovalent organic group, aromatic groups, aralkyl groups, and the like which include one, two or three (preferably one) acidic groups bonded to a carbon atom constituting an aryl group are mentioned. Specifically, an aromatic group having 6 to 20 carbon atoms which has an acidic group and an aralkyl group having 7 to 25 carbon atoms which has an acidic group are mentioned. More specifically, a phenyl group having an acidic group and a benzyl group having an acidic group can be mentioned. The acidic group is preferably an OH group.
R113or R114is more preferably a hydrogen atom, 2-hydroxybenzyl, 3-hydroxybenzyl, and 4-hydroxybenzyl from the viewpoint of solubility in an aqueous developer. From the viewpoint of solubility in an organic solvent, R113or R114is preferably a monovalent organic group. The monovalent organic group preferably contains a linear or a branched alkyl group, a cyclic alkyl group, or an aromatic group, and more preferably an alkyl group substituted with an aromatic group. The alkyl group preferably has 1 to 30 carbon atoms. The alkyl group may be linear, branched, or cyclic. As the linear or branched alkyl group, for example, a methyl group, an ethyl group, a propyl group, a butyl group, a pentyl group, a hexyl group, a heptyl group, an octyl group, a nonyl group, a decyl group, a dodecyl group, a tetradecyl group, an octadecyl group, an isopropyl group, an isobutyl group, a sec-butyl group, a t-butyl group, a 1-ethylpentyl group, and a 2-ethylhexyl group are mentioned. The cyclic alkyl group may be a monocyclic cyclic alkyl group or a polycyclic cyclic alkyl group. As the monocyclic cyclic alkyl group, for example, a cyclopropyl group, a cyclobutyl group, a cyclopentyl group, a cyclohexyl group, a cycloheptyl group, and a cyclooctyl group are mentioned. As the polycyclic cyclic alkyl group, for example, an adamantyl group, a norbornyl group, a bornyl group, a camphenyl group, a decahydronaphthyl group, a tricyclodecanyl group, a tetracyclodecanyl group, a camphoroyl group, a dicyclohexyl group, and a pinenyl group are mentioned. Among these, a cyclohexyl group is most preferable from the viewpoint of compatibility with high sensitivity. In addition, the alkyl group substituted with an aromatic group is preferably a linear alkyl group substituted with an aromatic group as described later. 
As the aromatic group, specifically, a substituted or unsubstituted benzene ring, a naphthalene ring, a pentalene ring, an indene ring, an azulene ring, a heptalene ring, an indacene ring, a perylene ring, a pentacene ring, an acenaphthene ring, a phenanthrene ring, an anthracene ring, a naphthacene ring, a chrysene ring, a triphenylene ring, a fluorene ring, a biphenyl ring, a pyrrole ring, a furan ring, a thiophene ring, an imidazole ring, an oxazole ring, a thiazole ring, a pyridine ring, a pyrazine ring, a pyrimidine ring, a pyridazine ring, an indolizine ring, an indole ring, a benzofuran ring, a benzothiophene ring, an isobenzofuran ring, a quinolizine ring, a quinoline ring, a phthalazine ring, a naphthyridine ring, a quinoxaline ring, a quinoxazoline ring, an isoquinoline ring, a carbazole ring, a phenanthridine ring, an acridine ring, a phenanthroline ring, a thianthrene ring, a chromene ring, a xanthene ring, a phenoxathiin ring, a phenothiazine ring, or a phenazine ring is mentioned. A benzene ring is most preferable. In Formula (1), in a case where R113 is a hydrogen atom, or in a case where R114 is a hydrogen atom, the polyimide precursor may form a counter salt with a tertiary amine compound having an ethylenically unsaturated bond. As an example of such a tertiary amine compound having an ethylenically unsaturated bond, N,N-dimethylaminopropyl methacrylate is mentioned. In addition, it is also preferable that the polyimide precursor has a fluorine atom in a structural unit. A content of fluorine atoms in the polyimide precursor is preferably 10% by mass or higher, and more preferably 20% by mass or lower. In addition, for the purpose of improving adhesiveness to a substrate, an aliphatic group having a siloxane structure may be copolymerized. Specifically, as diamine components, bis(3-aminopropyl)tetramethyldisiloxane, bis(paraaminophenyl)octamethylpentasiloxane, and the like are mentioned.
The repeating unit represented by Formula (1) is preferably a repeating unit represented by Formula (1-A). That is, at least one type of the polyimide precursor used in the present invention is preferably a precursor having the repeating unit represented by Formula (1-A). By adopting such a structure, it is possible to further widen the width of exposure latitude. In Formula (1-A), A1 and A2 each represent an oxygen atom, R111 and R112 each independently represent a divalent organic group, R113 and R114 each independently represent a hydrogen atom or a monovalent organic group, and it is preferable that at least one of R113 or R114 is a polymerizable group or a group containing a polymerizable group. A1, A2, R111, R113, and R114 each independently have the same meaning as A1, A2, R111, R113, and R114 in Formula (1), and preferable ranges thereof are also the same. R112 has the same meaning as R112 in Formula (5), and a preferable range thereof is also the same. In the polyimide precursor, the repeating structural unit represented by Formula (1) may be of one type or of two or more types. In addition, the polyimide precursor may contain a structural isomer of the repeating unit represented by Formula (1). In addition, the polyimide precursor may also contain other types of repeating structural units in addition to the repeating unit of Formula (1). As one embodiment of the polyimide precursor in the present invention, a polyimide precursor in which the repeating unit represented by Formula (1) accounts for 50% by mol or higher, even 70% by mol or higher, and particularly 90% by mol or higher, of the entirety of the repeating units is mentioned.
The first embodiment of the polyimide precursor used in the present invention contains, in a proportion of 80.0% to 99.7% by mol (preferably 85.0% to 99.0% by mol), a heterocyclic ring-containing polymer precursor which contains the repeating unit represented by Formula (1) and does not contain a group represented by Formula (A) at any terminal of a main chain, and contains, in a total proportion of 0.3% to 20.0% by mol (preferably 1.0% to 15.0% by mol), a heterocyclic ring-containing polymer precursor which contains the repeating unit represented by Formula (1) and contains a group represented by Formula (A) at at least one terminal of a main chain. In Formula (1), A1 and A2 each independently represent an oxygen atom or NH, R111 represents a divalent organic group, R115 represents a tetravalent organic group, and R113 and R114 each independently represent a hydrogen atom or a monovalent organic group. In Formula (A), R1 and R2 each independently represent a hydrogen atom, a linear or branched alkyl group, an aryl group, or a monovalent heterocyclic group containing a nitrogen atom which is a ring formed by bonding of R1 and R2 to each other. In a case of alkyl groups, at least one of R1 or R2 is preferably a substituted or unsubstituted alkyl group having 1 to 10 carbon atoms, more preferably a substituted or unsubstituted alkyl group having 1 to 5 carbon atoms, and even more preferably an unsubstituted alkyl group having 1 to 5 carbon atoms. In a case of aryl groups, at least one of R1 or R2 is preferably a substituted or unsubstituted aryl group having 6 to 20 carbon atoms, more preferably a substituted or unsubstituted aryl group having 6 to 10 carbon atoms, and even more preferably an unsubstituted aryl group having 6 to 10 carbon atoms.
In a case where R1 and R2 are bonded to each other to form a ring that is a monovalent heterocyclic group containing a nitrogen atom, an imidazole ring group, a pyrazole ring group, a pyridine ring group, a pyrazine ring group, and the like are exemplified. The heterocyclic group may or may not have a substituent, but preferably has a substituent. In a case where at least one of R1 or R2 has a substituent, as such a substituent, an alkyl group, a cycloalkyl group, an aryl group, an amino group, an amido group, a ureido group, a urethane group, a hydroxyl group, a carboxyl group, a halogen atom, an alkoxy group, a thioether group, an acyl group, an acyloxy group, an alkoxycarbonyl group, a cyano group, a nitro group, and the like are exemplified. Formula (1) above has the same meaning as Formula (1) described above, and the preferable range is also the same. R1 and R2 in Formula (A) are each independently preferably a hydrogen atom or an alkyl group having 1 to 4 carbon atoms, more preferably a hydrogen atom or a methyl group, and even more preferably a hydrogen atom. A second embodiment of the polyimide precursor in the present invention is the first embodiment in which the heterocyclic ring-containing polymer precursor containing a repeating unit represented by Formula (1) and a group represented by Formula (A) at at least one terminal of a main chain is selected from a heterocyclic ring-containing polymer precursor represented by Formula (B) and a heterocyclic ring-containing polymer precursor represented by Formula (C). In Formula (B), A1 and A2 each independently represent an oxygen atom or NH, R111 represents a divalent organic group, R115 represents a tetravalent organic group, R113 and R114 each independently represent a hydrogen atom or a monovalent organic group, n represents a positive integer, and A is an organic group which does not contain an NH2 group.
In Formula (C), A1 and A2 each independently represent an oxygen atom or NH, R111 represents a divalent organic group, R115 represents a tetravalent organic group, R113 and R114 each independently represent a hydrogen atom or a monovalent organic group, and n represents a positive integer. A1, A2, R111, R115, R113, and R114 in Formula (B) each have the same meanings as A1, A2, R111, R115, R113, and R114 in Formula (1), and preferable ranges are also the same. n represents a positive integer; n is preferably 1 to 100, and more preferably 5 to 50. A is a group which does not include an NH2 group. Typically, A is an organic group which does not include an NH2 group, and the formula weight per mole of the group represented by A is preferably 300 or less, and more preferably 200 or less. A is preferably a group including at least one of an alkyl group or an aryl group, and an alkyl group is more preferable. The number of carbon atoms in the alkyl group is preferably 1 to 5. The number of carbon atoms in the aryl group is preferably 6 to 10. A1, A2, R111, R115, R113, and R114 in Formula (C) each have the same meanings as A1, A2, R111, R115, R113, and R114 in Formula (1), and preferable ranges are also the same. n represents a positive integer; n is preferably 1 to 100, and more preferably 5 to 50. It is preferable that the polyimide precursor used in the present invention satisfies the first embodiment, further satisfies the second embodiment, and satisfies the amine value described above for the heterocyclic ring-containing polymer precursor. A weight-average molecular weight (Mw) of the polyimide precursor is preferably 2,000 to 500,000, more preferably 5,000 to 100,000, and even more preferably 10,000 to 50,000. In addition, a number-average molecular weight (Mn) thereof is preferably 800 to 250,000, more preferably 2,000 to 50,000, and even more preferably 4,000 to 25,000. The degree of dispersion of the polyimide precursor is preferably 1.5 to 2.5.
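The molecular weight criteria above can be checked with simple arithmetic, since the degree of dispersion is the ratio Mw/Mn. The following minimal sketch (Python, using hypothetical GPC values; the function names are illustrative and not from the source) tests a measurement against the most preferable windows stated above.

```python
# Illustrative sketch (not from the source): check GPC molecular-weight
# data against the most preferable windows stated in the text.

def dispersity(mw: float, mn: float) -> float:
    """Degree of dispersion (polydispersity) = Mw / Mn."""
    return mw / mn

def in_preferred_ranges(mw: float, mn: float) -> bool:
    """Mw 10,000-50,000, Mn 4,000-25,000, degree of dispersion 1.5-2.5."""
    return (10_000 <= mw <= 50_000
            and 4_000 <= mn <= 25_000
            and 1.5 <= dispersity(mw, mn) <= 2.5)

# Hypothetical precursor: Mw = 30,000, Mn = 15,000.
print(dispersity(30_000, 15_000))           # 2.0
print(in_preferred_ranges(30_000, 15_000))  # True
```

Note that a polymer can satisfy the individual Mw and Mn windows and still fail the dispersity window, so all three criteria are checked together.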
The polyimide precursor is obtained by reacting a dicarboxylic acid or a dicarboxylic acid derivative with a diamine. Preferably, the polyimide precursor is obtained by halogenating a dicarboxylic acid or a dicarboxylic acid derivative with a halogenating agent, and then causing the resultant to react with a diamine. In a method for producing the polyimide precursor, it is preferable to use an organic solvent at the time of reaction. One type of organic solvent may be used, or two or more types of organic solvents may be used. The organic solvent can be appropriately determined according to raw materials, and pyridine, diethylene glycol dimethyl ether (diglyme), N-methylpyrrolidone, and N-ethylpyrrolidone are exemplified. In the present invention, as described above, the amine value of the heterocyclic ring-containing polymer precursor is adjusted by adding an amino group to a terminal of a main chain after a dicarboxylic acid or a dicarboxylic acid derivative is reacted with a diamine. As a method of adding an amino group to a terminal of a main chain, a method of adding an amine compound having an amino group or a hydroxyl group to a terminal of a repeating unit derived from a dicarboxylic acid or a dicarboxylic acid derivative is preferable. As an amine compound having a hydroxyl group, 5-amino-8-hydroxyquinoline, 1-hydroxy-7-aminonaphthalene, 1-hydroxy-6-aminonaphthalene, 1-hydroxy-5-aminonaphthalene, 1-hydroxy-4-aminonaphthalene, 2-hydroxy-7-aminonaphthalene, 2-hydroxy-6-aminonaphthalene, 2-hydroxy-5-aminonaphthalene, 2-aminophenol, 3-aminophenol, 4-aminophenol, 2-aminothiophenol, 3-aminothiophenol, 4-aminothiophenol, and the like are mentioned. Two or more of these types may be used, or amine compounds having a plurality of hydroxyl groups may be reacted. As an amine compound having an amino group, N,N-dimethylethylenediamine, N,N-diethylethylenediamine, 1-(2-aminoethyl)piperidine, and 2-morpholinoethylamine are mentioned.
Two or more of these may be used, or an amine compound having a plurality of amino groups may be reacted. In a case of producing the polyimide precursor, it is preferable to include a step of precipitating a solid. Specifically, the polyimide precursor in a reaction solution is precipitated into water, dissolved in a solvent in which the polyimide precursor can be solubilized, such as tetrahydrofuran, and then the solid can be precipitated again.
Polybenzoxazole Precursor
The polybenzoxazole precursor used in the present invention preferably contains a repeating unit represented by Formula (2). In Formula (2), R121 represents a divalent organic group, R122 represents a tetravalent organic group, and R123 and R124 each independently represent a hydrogen atom or a monovalent organic group. In Formula (2), R121 represents a divalent organic group. The divalent organic group is preferably a group containing at least one of an aliphatic group or an aromatic group. The aliphatic group is preferably a linear aliphatic group. R121 is preferably derived from 4,4′-oxydibenzoyl chloride and the like. In Formula (2), R122 represents a tetravalent organic group. The tetravalent organic group has the same meaning as R115 in Formula (1) described above, and a preferable range thereof is also the same. R122 is preferably derived from 2,2′-bis(3-amino-4-hydroxyphenyl)hexafluoropropane and the like. The polybenzoxazole precursor may contain other types of repeating structural units in addition to the repeating unit of Formula (2) above. From the viewpoint that occurrence of warping due to ring closure can be suppressed, it is preferable to contain a diamine residue represented by Formula (SL) as the other type of repeating structural unit.
In Formula (SL), Z has an a structure and a b structure, R1s is a hydrogen atom or a hydrocarbon group having 1 to 10 carbon atoms, R2s is a hydrocarbon group having 1 to 10 carbon atoms, and at least one of R3s, R4s, R5s, or R6s is an aromatic group, and the rest, which may be the same as or different from each other, are each a hydrogen atom or an organic group having 1 to 30 carbon atoms. Polymerization of the a structure and the b structure may be either block polymerization or random polymerization. In the Z portion, the a structure accounts for 5% to 95% by mol, the b structure accounts for 95% to 5% by mol, and a + b is 100% by mol. In Formula (SL), as a preferable Z, the b structure in which R5s and R6s are phenyl groups is mentioned. In addition, a molecular weight of a structure represented by Formula (SL) is preferably 400 to 4,000, and more preferably 500 to 3,000. The molecular weight can be obtained by gel permeation chromatography, which is commonly used. In a case where the molecular weight is set to be within the above-mentioned range, it is possible to decrease a modulus of elasticity of the polybenzoxazole precursor after dehydration ring closure, and to achieve both the effect of suppressing warping and the effect of improving solubility. In a case where a diamine residue represented by Formula (SL) is contained as the other type of repeating structural unit, from the viewpoint of improving alkaline solubility, it is preferable to further contain, as a repeating structural unit, a tetracarboxylic acid residue that remains after removing an acid dianhydride group from a tetracarboxylic acid dianhydride. As examples of such a tetracarboxylic acid residue, the examples of R115 in Formula (1) are mentioned.
The first embodiment of the polybenzoxazole precursor used in the present invention contains, in a proportion of 80.0% to 99.7% by mol, a heterocyclic ring-containing polymer precursor which contains a repeating unit represented by Formula (2) and does not contain a group represented by Formula (A) at any terminal of a main chain, and contains, in a total proportion of 0.3% to 20.0% by mol, a heterocyclic ring-containing polymer precursor which contains the repeating unit represented by Formula (2) and contains a group represented by Formula (A) at at least one terminal of a main chain. In Formula (2), R121 represents a divalent organic group, R122 represents a tetravalent organic group, and R123 and R124 each independently represent a hydrogen atom or a monovalent organic group. In Formula (A), R1 and R2 each independently represent a hydrogen atom, a linear or branched alkyl group, an aryl group, or a monovalent heterocyclic group containing a nitrogen atom which is a ring formed by bonding of R1 and R2 to each other. Formula (A) has the same meaning as Formula (A) described in the first embodiment of the polyimide precursor above, and the preferable range is also the same. A weight-average molecular weight (Mw) of the polybenzoxazole precursor is preferably 2,000 to 500,000, more preferably 5,000 to 100,000, and even more preferably 10,000 to 50,000. In addition, a number-average molecular weight (Mn) thereof is preferably 800 to 250,000, more preferably 2,000 to 50,000, and even more preferably 4,000 to 25,000. The degree of dispersion of the polybenzoxazole precursor is preferably 1.5 to 2.5.
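The first-embodiment composition, for both the polyimide and polybenzoxazole precursors, splits the precursor into 80.0% to 99.7% by mol without the Formula (A) terminal group and 0.3% to 20.0% by mol with it, summing to 100% by mol. A minimal sketch (Python; the proportions are hypothetical and the function name is illustrative) validates such a split:

```python
# Illustrative sketch (not from the source): validate the molar split
# between chains without and with the Formula (A) terminal group.

def valid_split(no_terminal_pct: float, with_terminal_pct: float) -> bool:
    """First-embodiment windows: 80.0-99.7% and 0.3-20.0% by mol,
    totaling 100% by mol."""
    return (abs(no_terminal_pct + with_terminal_pct - 100.0) < 1e-9
            and 80.0 <= no_terminal_pct <= 99.7
            and 0.3 <= with_terminal_pct <= 20.0)

print(valid_split(95.0, 5.0))   # True
print(valid_split(70.0, 30.0))  # False
```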
The content of the heterocyclic ring-containing polymer precursor in the photosensitive resin composition according to the embodiment of the present invention is preferably 20% to 100% by mass with respect to the total solid content of the composition, more preferably 30% to 99% by mass, even more preferably 40% to 98% by mass, still more preferably 50% to 95% by mass, even more preferably 60% to 95% by mass, and even still more preferably 70% to 95% by mass. For the heterocyclic ring-containing polymer precursor, only one type may be contained, or two or more types may be contained. In a case where two or more types are contained, a total amount thereof is preferably within the above-mentioned range.
Photo-Radical Polymerization Initiator
The photosensitive resin composition according to the embodiment of the present invention preferably contains a photo-radical polymerization initiator. The photo-radical polymerization initiator that can be used in the present invention is not particularly limited, and can be appropriately selected from known photo-radical polymerization initiators. For example, a photo-radical polymerization initiator having photosensitivity to light in a range from the ultraviolet region to the visible region is preferable. In addition, the photo-radical polymerization initiator may be an activator which produces an active radical by some action with a photo-excited sensitizer. The photo-radical polymerization initiator preferably contains at least one compound having a molar extinction coefficient of at least about 50 within a range of about 300 to 800 nm (preferably 330 to 500 nm). The molar extinction coefficient of the compound can be measured using a known method. For example, it is preferable to perform measurement at a concentration of 0.01 g/L using an ethyl acetate solvent with an ultraviolet-visible spectrophotometer (Cary-5 spectrophotometer manufactured by Varian).
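The molar extinction coefficient criterion above follows from the Beer-Lambert law, ε = A/(c·l), where the measurement concentration (0.01 g/L) must first be converted to mol/L. A minimal sketch (Python; the absorbance and molar mass below are hypothetical, and the function name is illustrative):

```python
# Illustrative sketch (not from the source): Beer-Lambert calculation of
# a molar extinction coefficient from an absorbance measurement.

def molar_extinction(absorbance: float, conc_g_per_l: float,
                     molar_mass_g_per_mol: float,
                     path_cm: float = 1.0) -> float:
    """epsilon = A / (c * l), with c converted from g/L to mol/L."""
    conc_mol_per_l = conc_g_per_l / molar_mass_g_per_mol
    return absorbance / (conc_mol_per_l * path_cm)

# Hypothetical initiator: A = 0.25, molar mass 400 g/mol, measured at
# 0.01 g/L in a 1 cm cell -> epsilon of about 10,000 L/(mol*cm).
eps = molar_extinction(0.25, 0.01, 400.0)
print(eps >= 50)  # True: meets the "at least about 50" criterion
```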
Due to the fact that the photosensitive resin composition according to the embodiment of the present invention contains the photo-radical polymerization initiator, in a case where the photosensitive resin composition according to the embodiment of the present invention is applied to a substrate such as a semiconductor wafer to form a photosensitive resin composition layer, and the layer is then irradiated with light to cause curing to occur due to radicals, it is possible to reduce solubility in a light-irradiated portion. Therefore, there is an advantage that, for example, by exposing the photosensitive resin composition layer through a photo mask having a pattern for masking only an electrode portion, a region having different solubility can be conveniently manufactured according to the pattern of the electrode. As the photo-radical polymerization initiator, a known compound can be optionally used. For example, a halogenated hydrocarbon derivative (for example, a compound having a triazine skeleton, a compound having an oxadiazole skeleton, or a compound having a trihalomethyl group), an acylphosphine compound such as an acylphosphine oxide, hexaarylbiimidazole, an oxime compound such as an oxime derivative, an organic peroxide, a thio compound, a ketone compound, an aromatic onium salt, a ketoxime ether, an aminoacetophenone compound, hydroxyacetophenone, an azo-based compound, an azide compound, a metallocene compound, an organic boron compound, and an iron arene complex are mentioned. With regard to details thereof, reference can be made to the description of paragraphs 0165 to 0182 of JP2016-027357A, the content of which is incorporated herein. As the ketone compound, for example, the compounds described in paragraph 0087 of JP2015-087611A are exemplified, the content of which is incorporated herein. For commercially available products, KAYACURE DETX (manufactured by Nippon Kayaku Co., Ltd.) is also suitably used.
As the photo-radical polymerization initiator, a hydroxyacetophenone compound, an aminoacetophenone compound, and an acylphosphine compound can also be suitably used. More specifically, for example, the aminoacetophenone-based initiators described in JP1998-291969A (JP-H10-291969A) and the acylphosphine oxide-based initiators described in JP4225898B can also be used. As the hydroxyacetophenone-based initiator, IRGACURE 184 (IRGACURE is a registered trademark), DAROCUR 1173, IRGACURE 500, IRGACURE-2959, and IRGACURE 127 (trade names: all manufactured by BASF) can be used. As the aminoacetophenone-based initiator, IRGACURE 907, IRGACURE 369, and IRGACURE 379 (trade names: all manufactured by BASF) which are commercially available products can be used. As the aminoacetophenone-based initiator, the compounds described in JP2009-191179A, of which an absorption maximum wavelength is matched to a light source having a wavelength such as 365 nm or 405 nm, can also be used. As the acylphosphine-based initiator, 2,4,6-trimethylbenzoyl-diphenyl-phosphine oxide and the like are mentioned. In addition, IRGACURE-819 or IRGACURE-TPO (trade names: all manufactured by BASF) which are commercially available products can be used. As the metallocene compound, IRGACURE-784 (manufactured by BASF) and the like are exemplified. As the photo-radical polymerization initiator, an oxime compound is more preferably mentioned. By using the oxime compound, exposure latitude can be more effectively improved. The oxime compound is particularly preferable because the oxime compound has wide exposure latitude (exposure margin) and also works as a photo-base generator. As specific examples of the oxime compound, the compounds described in JP2001-233842A, the compounds described in JP2000-080068A, and the compounds described in JP2006-342166A can be used. 
As examples of preferable oxime compounds, compounds having the following structures, 3-benzooxyiminobutan-2-one, 3-acetoxyiminobutan-2-one, 3-propionyloxyiminobutan-2-one, 2-acetoxyiminopentan-3-one, 2-acetoxyimino-1-phenylpropan-1-one, 2-benzoyloxyimino-1-phenylpropan-1-one, 3-(4-toluenesulfonyloxy)iminobutan-2-one, 2-ethoxycarbonyloxyimino-1-phenylpropan-1-one, and the like are mentioned. As commercially available products, IRGACURE OXE 01, IRGACURE OXE 02, IRGACURE OXE 03, IRGACURE OXE 04 (all manufactured by BASF), ADEKA OPTOMER N-1919 (manufactured by ADEKA Corporation, Photo-radical polymerization initiator 2 described in JP2012-014052A) are also suitably used. In addition, TR-PBG-304 (manufactured by Changzhou Tronly New Electronic Materials Co., Ltd.), ADEKA ARKLS NCI-831, and ADEKA ARKLS NCI-930 (manufactured by ADEKA Corporation) can also be used. In addition, DFI-091 (manufactured by Daito Chemix Co., Ltd.) can be used. Furthermore, it is also possible to use an oxime compound having a fluorine atom. As specific examples of such oxime compounds, compounds described in JP2010-262028A, compounds 24, 36 to 40 described in paragraph 0345 of JP2014-500852A, a compound (C-3) described in paragraph 0101 of JP2013-164471A, and the like are mentioned. As the most preferable oxime compound, oxime compounds having a specific substituent described in JP2007-269779A or oxime compounds having a thioaryl group shown in JP2009-191061A, and the like are mentioned. 
From the viewpoint of exposure sensitivity, the photo-radical polymerization initiator is preferably a compound selected from the group consisting of a trihalomethyltriazine compound, a benzyl dimethyl ketal compound, an α-hydroxy ketone compound, an α-aminoketone compound, an acylphosphine compound, a phosphine oxide compound, a metallocene compound, an oxime compound, a triaryl imidazole dimer, an onium salt compound, a benzothiazole compound, a benzophenone compound, an acetophenone compound and a derivative thereof, a cyclopentadiene-benzene-iron complex and a salt thereof, a halomethyl oxadiazole compound, and a 3-aryl substituted coumarin compound. More preferable photo-radical polymerization initiators are a trihalomethyltriazine compound, an α-aminoketone compound, an acylphosphine compound, a phosphine oxide compound, a metallocene compound, an oxime compound, a triarylimidazole dimer, an onium salt compound, a benzophenone compound, and an acetophenone compound, with at least one type of compound being selected from the group consisting of a trihalomethyltriazine compound, an α-aminoketone compound, an oxime compound, a triarylimidazole dimer, and a benzophenone compound being still more preferable, use of a metallocene compound or an oxime compound being still more preferable, and an oxime compound being still further more preferable. In addition, as the photo-radical polymerization initiator, it is possible to use benzophenone, N,N′-tetraalkyl-4,4′-diaminobenzophenone such as N,N′-tetramethyl-4,4′-diaminobenzophenone (Michler's ketone), an aromatic ketone such as 2-benzyl-2-dimethylamino-1-(4-morpholinophenyl)-butanone-1, 2-methyl-1-[4-(methylthio)phenyl]-2-morpholino-propanone-1, quinones condensed with an aromatic ring such as alkylanthraquinone, a benzoin ether compound such as benzoin alkyl ether, a benzoin compound such as benzoin and alkylbenzoin, a benzyl derivative such as benzyl dimethyl ketal, and the like. 
In addition, a compound represented by Formula (I) may also be used. In Formula (I), R50 represents an alkyl group having 1 to 20 carbon atoms; an alkyl group having 2 to 20 carbon atoms which is interrupted by one or more oxygen atoms; an alkoxy group having 1 to 12 carbon atoms; a phenyl group; a phenyl group which is substituted with at least one of an alkyl group having 1 to 20 carbon atoms, an alkoxy group having 1 to 12 carbon atoms, a halogen atom, a cyclopentyl group, a cyclohexyl group, an alkenyl group having 2 to 12 carbon atoms, an alkyl group having 2 to 18 carbon atoms which is interrupted by one or more oxygen atoms, or an alkyl group having 1 to 4 carbon atoms; or biphenylyl. R51 is a group represented by Formula (II) or a group identical to R50, and R52 to R54 each independently represent alkyl having 1 to 12 carbon atoms, alkoxy having 1 to 12 carbon atoms, or halogen. In the formula, R55 to R57 are the same as R52 to R54 in Formula (I). In addition, as the photo-radical polymerization initiator, the compounds described in paragraphs 0048 to 0055 of WO2015/125469A can also be used. The content of the photo-radical polymerization initiator is preferably 0.1% to 30% by mass, more preferably 0.1% to 20% by mass, even more preferably 0.5% to 15% by mass, and still more preferably 1.0% to 10% by mass, with respect to the total solid content of the photosensitive resin composition according to the embodiment of the present invention. For the photo-radical polymerization initiator, only one type may be contained, or two or more types may be contained. In a case where two or more types of photo-radical polymerization initiators are contained, a total thereof is preferably within the above-mentioned range.
Thermal-Radical Polymerization Initiator
The photosensitive resin composition according to the embodiment of the present invention may contain a thermal-radical polymerization initiator within the scope of the present invention.
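The initiator contents above are expressed as % by mass of the total solid content. A minimal sketch of that bookkeeping (Python; the masses are hypothetical and the function name is illustrative):

```python
# Illustrative sketch (not from the source): express an initiator loading
# as % by mass of the composition's total solid content.

def pct_of_solids(component_g: float, total_solids_g: float) -> float:
    """Component loading as a percentage of total solid content by mass."""
    return 100.0 * component_g / total_solids_g

# Hypothetical batch: 100 g of total solids containing 3 g of initiator.
pct = pct_of_solids(3.0, 100.0)
print(pct)                 # 3.0
print(1.0 <= pct <= 10.0)  # True: within the still-more-preferable range
```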
The thermal-radical polymerization initiator is a compound which generates radicals by heat energy and initiates or promotes a polymerization reaction of a compound having polymerization properties. By adding the thermal-radical polymerization initiator, a polymerization reaction of the heterocyclic ring-containing polymer precursor can proceed together with cyclization of the heterocyclic ring-containing polymer precursor. Thus, a higher degree of heat resistance can be achieved. Specifically, as the thermal-radical polymerization initiator, the compounds described in paragraphs 0074 to 0118 of JP2008-063554A are mentioned. In a case where the thermal-radical polymerization initiator is contained, the content thereof is preferably 0.1% to 30% by mass with respect to the total solid content of the photosensitive resin composition according to the embodiment of the present invention, more preferably 0.1% to 20% by mass, and even more preferably 5% to 15% by mass. For the thermal-radical polymerization initiator, only one type may be contained, or two or more types may be contained. In a case where two or more types of thermal-radical polymerization initiators are contained, a total thereof is preferably within the above-mentioned range.
Solvent
The photosensitive resin composition according to the embodiment of the present invention contains a solvent. As the solvent, a known solvent can be optionally used. The solvent is preferably an organic solvent. As the organic solvent, compounds such as esters, ethers, ketones, aromatic hydrocarbons, sulfoxides, and amides are mentioned.
As the esters, for example, ethyl acetate, n-butyl acetate, isobutyl acetate, isoamyl formate, isoamyl acetate, butyl propionate, isopropyl butyrate, ethyl butyrate, butyl butyrate, methyl lactate, ethyl lactate, γ-butyrolactone, ε-caprolactone, δ-valerolactone, alkyl alkyloxyacetate (for example, methyl alkyloxyacetate, ethyl alkyloxyacetate, and butyl alkyloxyacetate (for example, methyl methoxyacetate, ethyl methoxyacetate, butyl methoxyacetate, methyl ethoxyacetate, and ethyl ethoxyacetate)), 3-alkyloxypropionic acid alkyl esters (for example, methyl 3-alkyloxypropionate, and ethyl 3-alkyloxypropionate (for example, methyl 3-methoxypropionate, ethyl 3-methoxypropionate, methyl 3-ethoxypropionate, and ethyl 3-ethoxypropionate)), 2-alkyloxypropionic acid alkyl esters (for example, methyl 2-alkyloxypropionate, ethyl 2-alkyloxypropionate, and propyl 2-alkyloxypropionate (for example, methyl 2-methoxypropionate, ethyl 2-methoxypropionate, propyl 2-methoxypropionate, methyl 2-ethoxypropionate, and ethyl 2-ethoxypropionate)), methyl 2-alkyloxy-2-methylpropionate and ethyl 2-alkyloxy-2-methylpropionate (for example, methyl 2-methoxy-2-methylpropionate and ethyl 2-ethoxy-2-methylpropionate), methyl pyruvate, ethyl pyruvate, propyl pyruvate, methyl acetoacetate, ethyl acetoacetate, methyl 2-oxobutanoate, and ethyl 2-oxobutanoate are suitably mentioned. As the ethers, for example, diethylene glycol dimethyl ether, tetrahydrofuran, ethylene glycol monomethyl ether, ethylene glycol monoethyl ether, methyl cellosolve acetate, ethyl cellosolve acetate, diethylene glycol monomethyl ether, diethylene glycol monoethyl ether, diethylene glycol monobutyl ether, propylene glycol monomethyl ether, propylene glycol monomethyl ether acetate, propylene glycol monoethyl ether acetate, and propylene glycol monopropyl ether acetate are suitably mentioned. As the ketones, for example, methyl ethyl ketone, cyclohexanone, cyclopentanone, 2-heptanone, and 3-heptanone are suitably mentioned. 
As the aromatic hydrocarbons, for example, toluene, xylene, anisole, and limonene are suitably mentioned. As the sulfoxides, for example, dimethyl sulfoxide is suitably mentioned. As the amides, N-methyl-2-pyrrolidone, N-ethyl-2-pyrrolidone, N,N-dimethylacetamide, N,N-dimethylformamide, and the like are suitably mentioned. From the viewpoint of improving properties of a coated surface or the like, it is also preferable to mix two or more types of solvents. In the present invention, the preferable solvent is one type of solvent selected from methyl 3-ethoxypropionate, ethyl 3-ethoxypropionate, ethyl cellosolve acetate, ethyl lactate, diethylene glycol dimethyl ether, butyl acetate, methyl 3-methoxypropionate, 2-heptanone, cyclohexanone, cyclopentanone, γ-butyrolactone, dimethyl sulfoxide, ethyl carbitol acetate, butyl carbitol acetate, N-methyl-2-pyrrolidone, propylene glycol methyl ether, and propylene glycol methyl ether acetate, or a mixed solvent composed of two or more types of these solvents. A combination of dimethyl sulfoxide and γ-butyrolactone is particularly preferable. From the viewpoint of coating property, the content of the solvent is such that the total solid content concentration of the photosensitive resin composition according to the embodiment of the present invention is preferably 5% to 80% by mass, more preferably 5% to 70% by mass, and particularly preferably 10% to 60% by mass. The content of the solvent may be adjusted depending on a desired thickness and a coating method. For the solvent, one type may be contained, or two or more types may be contained. In a case where two or more types of solvents are contained, a total thereof is preferably within the above-mentioned range.
Radically Polymerizable Compound
The photosensitive resin composition according to the embodiment of the present invention preferably contains a radically polymerizable compound (hereinafter, also referred to as a “polymerizable monomer”).
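The solvent content above is specified indirectly, through the total solid content concentration of the composition. A minimal sketch (Python; the masses are hypothetical and the function name is illustrative) computes the solvent mass needed to hit a target concentration:

```python
# Illustrative sketch (not from the source): solvent mass needed so that
# solids / (solids + solvent) equals a target % by mass.

def solvent_mass_for_target(solids_g: float, target_pct: float) -> float:
    """Solve solids / (solids + solvent) = target_pct / 100 for solvent."""
    return solids_g * (100.0 / target_pct - 1.0)

# 40 g of solids at a target total solid content of 40% by mass.
print(solvent_mass_for_target(40.0, 40.0))  # 60.0 (grams of solvent)
```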
By adopting such a constitution, a cured film which is excellent in heat resistance can be formed. As the polymerizable monomer, a compound having a radically polymerizable group can be used. As the radically polymerizable group, a group having an ethylenically unsaturated bond such as a styryl group, a vinyl group, a (meth)acryloyl group, and an allyl group is mentioned. The radically polymerizable group is preferably a (meth)acryloyl group. The polymerizable monomer may have one radically polymerizable group or two or more radically polymerizable groups. The polymerizable monomer preferably has two or more radically polymerizable groups, and more preferably three or more radically polymerizable groups. An upper limit thereof is preferably 15 or lower, more preferably 10 or lower, and even more preferably 8 or lower. A molecular weight of the polymerizable monomer is preferably 2,000 or lower, more preferably 1,500 or lower, and even more preferably 900 or lower. A lower limit of the molecular weight of the polymerizable monomer is preferably 100 or higher. From the viewpoint of developability, the photosensitive resin composition according to the embodiment of the present invention preferably contains at least one bi- or higher-functional polymerizable monomer containing two or more polymerizable groups, and more preferably contains at least one tri- or higher-functional polymerizable monomer. In addition, the polymerizable monomer may be a mixture of a bifunctional polymerizable monomer and a tri- or higher-functional polymerizable monomer. The number of functional groups in the polymerizable monomer means the number of radically polymerizable groups in one molecule. 
Specific examples of the polymerizable monomer include unsaturated carboxylic acids (for example, acrylic acid, methacrylic acid, itaconic acid, crotonic acid, isocrotonic acid, maleic acid), and esters and amides thereof, and are preferably esters of unsaturated carboxylic acids with polyhydric alcohol compounds, and amides of unsaturated carboxylic acids with polyvalent amine compounds. In addition, addition reaction products of unsaturated carboxylic acid esters or amides having a nucleophilic substituent such as a hydroxyl group, an amino group, and a mercapto group, with monofunctional or polyfunctional isocyanates or epoxies, dehydration condensation reaction products thereof with monofunctional or polyfunctional carboxylic acids, and the like are also suitably used. In addition, addition reaction products of unsaturated carboxylic acid esters or amides having an electrophilic substituent such as an isocyanate group and an epoxy group, with monofunctional or polyfunctional alcohols, amines, or thiols, and substitution reaction products of unsaturated carboxylic acid esters or amides having a leaving substituent such as a halogen group and a tosyloxy group with monofunctional or polyfunctional alcohols, amines, or thiols are also suitable. In addition, as another example, it is also possible to use a group of compounds in which the unsaturated carboxylic acid is substituted with an unsaturated phosphonic acid, a vinylbenzene derivative such as styrene, vinyl ether, allyl ether, or the like. As specific examples, reference can be made to the description of paragraphs 0113 to 0122 of JP2016-027357A, the content of which is incorporated herein. In addition, the polymerizable monomer is also preferably a compound having a boiling point of 100° C. or higher under atmospheric pressure. 
As examples thereof, polyethylene glycol di(meth)acrylate, trimethylolethane tri(meth)acrylate, neopentyl glycol di(meth)acrylate, pentaerythritol tri(meth)acrylate, pentaerythritol tetra(meth)acrylate, dipentaerythritol penta(meth)acrylate, dipentaerythritol hexa(meth)acrylate, hexanediol (meth)acrylate, trimethylolpropane tri(acryloyloxypropyl)ether, tri(acryloyloxyethyl)isocyanurate, compounds obtained by adding ethylene oxide or propylene oxide to a polyfunctional alcohol such as glycerin and trimethylolethane and then subjecting the adduct to (meth)acrylation, the urethane (meth)acrylates as described in JP1973-041708B (JP-S48-041708B), JP1975-006034B (JP-S50-006034B), and JP1976-037193A (JP-S51-037193A), the polyester acrylates described in JP1973-064183A (JP-S48-064183A), JP1974-043191B (JP-S49-043191B), and JP1977-030490B (JP-S52-030490B), polyfunctional acrylates or methacrylates such as epoxy acrylates which are reaction products of epoxy resins and (meth)acrylic acid, and mixtures thereof can be mentioned. In addition, the compounds described in paragraphs 0254 to 0257 of JP2008-292970A are also suitable. Moreover, a polyfunctional (meth)acrylate or the like obtained by reacting a compound having a cyclic ether group and an ethylenically unsaturated group, such as glycidyl (meth)acrylate, with a polyfunctional carboxylic acid can also be mentioned. In addition, as other preferable polymerizable monomers, compounds having two or more groups containing a fluorene ring and an ethylenically unsaturated bond which are described in JP2010-160418A, JP2010-129825A, and JP4364216B, and cardo resins can also be used. Furthermore, as other examples, the specific unsaturated compounds described in JP1971-043946B (JP-S46-043946B), JP1989-040337B (JP-H1-040337B), and JP1989-040336B (JP-H1-040336B), the vinylphosphonic acid-based compounds described in JP1990-025493A (JP-H2-025493A), and the like can also be mentioned.
In addition, the compounds containing a perfluoroalkyl group described in JP1986-022048A (JP-S61-022048A) can also be used. Furthermore, photo-polymerizable monomers and oligomers which are described in Journal of Japan Adhesive Association vol. 20, No. 7, pages 300 to 308 (1984) can also be used. In addition to the above, the compounds described in paragraphs 0048 to 0051 of JP2015-034964A can also be preferably used, the content of which is incorporated herein. In addition, the compounds which are described in JP1998-062986A (JP-H10-062986A) as Formulas (1) and (2) together with specific examples thereof and are obtained by adding ethylene oxide or propylene oxide to a polyfunctional alcohol and then being subjected to (meth)acrylation can be used as the polymerizable monomer. Furthermore, the compounds described in paragraphs 0104 to 0131 of JP2015-187211A can also be used as the polymerizable monomer, the content of which is incorporated herein. As the polymerizable monomer, dipentaerythritol triacrylate (as a commercially available product, KAYARAD D-330; manufactured by Nippon Kayaku Co., Ltd.), dipentaerythritol tetraacrylate (as commercially available products, KAYARAD D-320; manufactured by Nippon Kayaku Co., Ltd., and A-TMMT; manufactured by Shin-Nakamura Chemical Co., Ltd.), dipentaerythritol penta(meth)acrylate (as a commercially available product, KAYARAD D-310; manufactured by Nippon Kayaku Co., Ltd.), dipentaerythritol hexa(meth)acrylate (as commercially available products, KAYARAD DPHA; manufactured by Nippon Kayaku Co., Ltd., and A-DPH; manufactured by Shin-Nakamura Chemical Co., Ltd.), and structures in which (meth)acryloyl groups thereof are bonded via ethylene glycol or propylene glycol residues are preferable. Oligomer types thereof can also be used.
As commercially available products of the polymerizable monomer, for example, SR-494 which is a tetrafunctional acrylate having four ethyleneoxy chains, manufactured by Sartomer, SR-209 which is a bifunctional methacrylate having four ethyleneoxy chains, manufactured by Sartomer, DPCA-60 which is a hexafunctional acrylate having six pentyleneoxy chains, manufactured by Nippon Kayaku Co., Ltd., TPA-330 which is a trifunctional acrylate having three isobutylene oxy chains, urethane oligomers UAS-10, UAB-140 (manufactured by Nippon Paper Industries Co., Ltd.), NK Ester M-40G, NK Ester 4G, NK Ester M-9300, NK Ester A-9300, UA-7200 (manufactured by Shin-Nakamura Chemical Co., Ltd.), DPHA-40H (manufactured by Nippon Kayaku Co., Ltd.), UA-306H, UA-306T, UA-306I, AH-600, T-600, AI-600 (manufactured by Kyoeisha Chemical Co., Ltd.), Brenmer PME400 (manufactured by NOF Corporation), and the like are mentioned. As the polymerizable monomer, the urethane acrylates as described in JP1973-041708B (JP-S48-041708B), JP1976-037193A (JP-S51-037193A), JP1990-032293B (JP-H2-032293B), and JP1990-016765B (JP-H2-016765B), and the urethane compounds having an ethylene oxide-based skeleton described in JP1983-049860B (JP-S58-049860B), JP1981-017654B (JP-S56-017654B), JP1987-039417B (JP-S62-039417B), and JP1987-039418B (JP-S62-039418B) are also suitable. Furthermore, as the polymerizable monomer, the compounds having an amino structure or a sulfide structure in a molecule as described in JP1988-277653A (JP-S63-277653A), JP1988-260909A (JP-S63-260909A), and JP1989-105238A (JP-H1-105238A) can also be used. The polymerizable monomer may be a polymerizable monomer having an acid group such as a carboxyl group, a sulfo group, and a phosphoric acid group. 
The polymerizable monomer having an acid group is preferably an ester of an aliphatic polyhydroxy compound and an unsaturated carboxylic acid, and more preferably a polymerizable monomer obtained by reacting an unreacted hydroxyl group of an aliphatic polyhydroxy compound with a non-aromatic carboxylic acid anhydride so as to have an acid group. Particularly preferably, the polymerizable monomer is a compound in which the aliphatic polyhydroxy compound is pentaerythritol and/or dipentaerythritol in a polymerizable monomer having an acid group obtained by reacting an unreacted hydroxyl group of the aliphatic polyhydroxy compound with a non-aromatic carboxylic acid anhydride. As commercially available products thereof, for example, M-510 and M-520 as polybasic acid-modified acrylic oligomers which are manufactured by Toagosei Co., Ltd. are mentioned. For the polymerizable monomer having an acid group, one type may be used alone, or two or more types may be used in admixture. In addition, if necessary, a polymerizable monomer having no acid group and a polymerizable monomer having an acid group may be used in combination. An acid value of the polymerizable monomer having an acid group is preferably 0.1 to 40 mg KOH/g, and particularly preferably 5 to 30 mg KOH/g. In a case where the acid value of the polymerizable monomer is within the above-mentioned range, excellent production and handling properties are exhibited, and furthermore, excellent developability is exhibited. In addition, good polymerization properties are exhibited. From the viewpoint of good polymerization properties and heat resistance, the content of the polymerizable monomer is preferably 1% to 60% by mass with respect to the total solid content of the photosensitive resin composition according to the embodiment of the present invention. A lower limit thereof is more preferably 5% by mass or higher. An upper limit thereof is more preferably 50% by mass or lower, and even more preferably 30% by mass or lower.
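The acid value quoted above (mg KOH/g) is, by the usual convention, the mass of KOH in milligrams needed to neutralize one gram of sample, determined by titration. A minimal sketch of that arithmetic (56.11 g/mol is the molar mass of KOH; variable names are illustrative, not from the source):

```python
MOLAR_MASS_KOH = 56.11  # g/mol

def acid_value(v_koh_ml: float, c_koh_mol_l: float, sample_g: float) -> float:
    """Acid value = mg of KOH consumed per gram of sample."""
    mol_koh = v_koh_ml / 1000.0 * c_koh_mol_l       # moles of KOH titrated
    mg_koh = mol_koh * MOLAR_MASS_KOH * 1000.0      # convert g -> mg
    return mg_koh / sample_g

# Example: a 2.0 g sample consuming 1.0 mL of 0.5 mol/L KOH
av = acid_value(1.0, 0.5, 2.0)
print(round(av, 2))        # 14.03
# falls inside the preferred 0.1-40 mg KOH/g window from the text
print(0.1 <= av <= 40)     # True
```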
For the polymerizable monomer, one type may be used alone, or two or more types may be used in admixture. In addition, the mass ratio of the heterocyclic ring-containing polymer precursor to the polymerizable monomer (heterocyclic ring-containing polymer precursor/polymerizable monomer) is preferably 98/2 to 10/90, more preferably 95/5 to 30/70, even more preferably 90/10 to 50/50, and still more preferably 90/10 to 75/25. In a case where the mass ratio of the heterocyclic ring-containing polymer precursor to the polymerizable monomer is within the above-mentioned range, a cured film which is excellent in polymerization properties and heat resistance can be formed. In the photosensitive resin composition according to the embodiment of the present invention, from the viewpoint of suppressing warping through control of a modulus of elasticity of a cured film, a monofunctional polymerizable monomer can be preferably used. As the monofunctional polymerizable monomer, (meth)acrylic acid derivatives such as n-butyl (meth)acrylate, 2-ethylhexyl (meth)acrylate, 2-hydroxyethyl (meth)acrylate, butoxyethyl (meth)acrylate, carbitol (meth)acrylate, cyclohexyl (meth)acrylate, benzyl (meth)acrylate, phenoxyethyl (meth)acrylate, N-methylol (meth)acrylamide, glycidyl (meth)acrylate, polyethylene glycol mono(meth)acrylate, and polypropylene glycol mono(meth)acrylate; N-vinyl compounds such as N-vinylpyrrolidone and N-vinylcaprolactam; allyl compounds such as allyl glycidyl ether, diallyl phthalate, and triallyl trimellitate; and the like are preferably used. As the monofunctional polymerizable monomer, a compound having a boiling point of 100° C. or higher under atmospheric pressure is also preferable in order to suppress volatilization before exposure.
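The precursor/monomer mass-ratio windows above can be checked mechanically: a ratio such as 95/5 to 30/70 simply means the precursor accounts for between 30% and 95% of the precursor-plus-monomer mass. A small sketch (tier labels mirror the text; names are illustrative):

```python
def ratio_tier(precursor_g: float, monomer_g: float) -> str:
    """Classify the precursor/polymerizable-monomer mass ratio against
    the windows quoted in the text, narrowest (most preferred) first."""
    frac = precursor_g / (precursor_g + monomer_g)  # precursor mass fraction
    if 0.75 <= frac <= 0.90:
        return "still more preferable (90/10 to 75/25)"
    if 0.50 <= frac <= 0.90:
        return "even more preferable (90/10 to 50/50)"
    if 0.30 <= frac <= 0.95:
        return "more preferable (95/5 to 30/70)"
    if 0.10 <= frac <= 0.98:
        return "preferable (98/2 to 10/90)"
    return "outside preferred range"

print(ratio_tier(80, 20))  # still more preferable (90/10 to 75/25)
print(ratio_tier(40, 60))  # more preferable (95/5 to 30/70)
```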
Other Polymerizable Compounds The photosensitive resin composition according to the embodiment of the present invention may further contain other polymerizable compounds, in addition to the heterocyclic ring-containing polymer precursor and the radically polymerizable compound described above. As the other polymerizable compounds, a compound having a hydroxymethyl group, an alkoxymethyl group, or an acyloxymethyl group; an epoxy compound; an oxetane compound; and a benzoxazine compound are mentioned. Compound Having Hydroxymethyl Group, Alkoxymethyl Group, or Acyloxymethyl Group The compound having a hydroxymethyl group, an alkoxymethyl group, or an acyloxymethyl group is preferably a compound represented by Formula (AM1). (In the formula, t represents an integer of 1 to 20, R4 represents a t-valent organic group having 1 to 200 carbon atoms, R5 represents a group represented by —OR6 or —OCO—R7, R6 represents a hydrogen atom or an organic group having 1 to 10 carbon atoms, and R7 represents an organic group having 1 to 10 carbon atoms.) A content of the compound represented by Formula (AM1) is preferably 5 to 40 parts by mass with respect to 100 parts by mass of the heterocyclic ring-containing polymer precursor. The content is more preferably 10 to 35 parts by mass. In addition, in the entire amount of the other polymerizable compounds, it is also preferable that a compound represented by Formula (AM4) is contained in an amount of 10% to 90% by mass and a compound represented by Formula (AM5) is contained in an amount of 10% to 90% by mass. (In the formula, R4 represents a divalent organic group having 1 to 200 carbon atoms, R5 represents a group represented by —OR6 or —OCO—R7, R6 represents a hydrogen atom or an organic group having 1 to 10 carbon atoms, and R7 represents an organic group having 1 to 10 carbon atoms.)
(In the formula, u represents an integer of 3 to 8, R4 represents a u-valent organic group having 1 to 200 carbon atoms, R5 represents a group represented by —OR6 or —OCO—R7, R6 represents a hydrogen atom or an organic group having 1 to 10 carbon atoms, and R7 represents an organic group having 1 to 10 carbon atoms.) By using the compound having a hydroxymethyl group or the like as described above, it is possible to more effectively suppress occurrence of cracks in a case where the photosensitive resin composition according to the embodiment of the present invention is applied onto a substrate having roughness. In addition, it is possible to form a cured film which is excellent in pattern processability and has high heat resistance such that a 5% mass reduction temperature is 350° C. or higher, and more preferably 380° C. or higher. As specific examples of the compound represented by Formula (AM4), 46DMOC, 46DMOEP (all trade names, manufactured by Asahi Organic Chemicals Industry Co., Ltd.), DML-MBPC, DML-MBOC, DML-OCHP, DML-PCHP, DML-PC, DML-PTBP, DML-34X, DML-EP, DML-POP, dimethylol BisOC-P, DML-PFP, DML-PSBP, DML-MTris PC (all trade names, manufactured by Honshu Chemical Industry Co., Ltd.), NIKALAC MX-290 (trade name, manufactured by Sanwa Chemical Co., Ltd.), 2,6-dimethoxymethyl-4-t-butylphenol, 2,6-dimethoxymethyl-p-cresol, 2,6-diacetoxymethyl-p-cresol, and the like are mentioned. In addition, as specific examples of the compound represented by Formula (AM5), TriML-P, TriML-35XL, TML-HQ, TML-BP, TML-pp-BPF, TML-BPA, TMOM-BP, HML-TPPHBA, HML-TPHAP, HMOM-TPPHBA, HMOM-TPHAP (all trade names, manufactured by Honshu Chemical Industry Co., Ltd.), TM-BIP-A (trade name, manufactured by Asahi Organic Materials Industry Co., Ltd.), NIKALAC MX-280, NIKALAC MX-270, and NIKALAC MW-100LM (all trade names, manufactured by Sanwa Chemical Co., Ltd.) are mentioned.
Epoxy Compound (Compound Having Epoxy Group) The epoxy compound is preferably a compound having two or more epoxy groups in one molecule. Since the epoxy group undergoes a crosslinking reaction at 200° C. or lower and a dehydration reaction derived from crosslinking does not occur, film shrinkage hardly occurs. Therefore, containing an epoxy compound is effective for low-temperature curing of the composition and suppression of warping thereof. The epoxy compound preferably contains a polyethylene oxide group. As a result, it is possible to further decrease a modulus of elasticity and to suppress warping. The polyethylene oxide group means a group in which the number of repeating units of ethylene oxide is 2 or higher, with the number of repeating units being preferably 2 to 15. As examples of the epoxy compound, bisphenol A type epoxy resin; bisphenol F type epoxy resin; alkylene glycol type epoxy resin such as propylene glycol diglycidyl ether; polyalkylene glycol type epoxy resin such as polypropylene glycol diglycidyl ether; epoxy group-containing silicone such as polymethyl(glycidyloxypropyl) siloxane, and the like can be mentioned, but not limited thereto. 
Specifically, EPICLON (registered trademark) 850-S, EPICLON (registered trademark) HP-4032, EPICLON (registered trademark) HP-7200, EPICLON (registered trademark) HP-820, EPICLON (registered trademark) HP-4700, EPICLON (registered trademark) EXA-4710, EPICLON (registered trademark) HP-4770, EPICLON (registered trademark) EXA-859CRP, EPICLON (registered trademark) EXA-1514, EPICLON (registered trademark) EXA-4880, EPICLON (registered trademark) EXA-4850-150, EPICLON (registered trademark) EXA-4850-1000, EPICLON (registered trademark) EXA-4816, EPICLON (registered trademark) EXA-4822 (all trade names, manufactured by Dainippon Ink and Chemicals, Inc.), RIKARESIN (registered trademark) BEO-60E (trade name, manufactured by New Japan Chemical Co., Ltd.), EP-4003S, EP-4000S (all trade names, manufactured by ADEKA CORPORATION), and the like are mentioned. Among these, an epoxy resin containing a polyethylene oxide group is preferable from the viewpoint of suppression of warping and excellent heat resistance. For example, EPICLON (registered trademark) EXA-4880, EPICLON (registered trademark) EXA-4822, and RIKARESIN (registered trademark) BEO-60E are preferable due to containing a polyethylene oxide group. A content of the epoxy compound is preferably 5 to 50 parts by mass, more preferably 10 to 50 parts by mass, and even more preferably 10 to 40 parts by mass, with respect to 100 parts by mass of the heterocyclic ring-containing polymer precursor. In a case where the content of the epoxy compound is 5 parts by mass or higher, warping of the obtained cured film can be further suppressed. In a case where the content is 50 parts by mass or lower, pattern embedment caused by reflow during curing can be further suppressed. 
Oxetane Compound (Compound Having an Oxetanyl Group) As the oxetane compound, a compound having two or more oxetane rings in one molecule, 3-ethyl-3-hydroxymethyloxetane, 1,4-bis{[(3-ethyl-3-oxetanyl)methoxy]methyl}benzene, 3-ethyl-3-(2-ethylhexylmethyl)oxetane, 1,4-benzenedicarboxylic acid-bis[(3-ethyl-3-oxetanyl)methyl]ester, and the like can be mentioned. As specific examples thereof, the ARON OXETANE series (for example, OXT-121, OXT-221, OXT-191, OXT-223) manufactured by Toagosei Co., Ltd. can be suitably used. These may be used alone, or two or more types thereof may be used in admixture. A content of the oxetane compound is preferably 5 to 50 parts by mass, more preferably 10 to 50 parts by mass, and even more preferably 10 to 40 parts by mass, with respect to 100 parts by mass of the heterocyclic ring-containing polymer precursor. Benzoxazine Compound (Compound Having a Benzoxazine Ring) Since the benzoxazine compound cures through a crosslinking reaction derived from a ring-opening addition reaction, it does not generate outgassing during curing and shows decreased thermal contraction, so that occurrence of warping is suppressed, which is preferable. As preferred examples of the benzoxazine compound, B-a type benzoxazine, B-m type benzoxazine (all trade names, manufactured by Shikoku Chemicals Corporation), a benzoxazine adduct of polyhydroxystyrene resin, and a phenol novolak type dihydrobenzoxazine compound are mentioned. These may be used alone, or two or more types thereof may be used in admixture. A content of the benzoxazine compound is preferably 5 to 50 parts by mass, more preferably 10 to 50 parts by mass, and even more preferably 10 to 40 parts by mass, with respect to 100 parts by mass of the heterocyclic ring-containing polymer precursor. Migration Suppressing Agent The photosensitive resin composition preferably further contains a migration suppressing agent.
By containing the migration suppressing agent, it is possible to effectively prevent metal ions derived from a metal layer (metal wiring) from migrating into a photosensitive resin composition layer. The migration suppressing agent is not particularly limited, and compounds having a heterocycle (a pyrrole ring, a furan ring, a thiophene ring, an imidazole ring, an oxazole ring, a thiazole ring, a pyrazole ring, an isoxazole ring, an isothiazole ring, a tetrazole ring, a pyridine ring, a pyridazine ring, a pyrimidine ring, a pyrazine ring, a piperidine ring, a piperazine ring, a morpholine ring, a 2H-pyran ring, a 6H-pyran ring, and a triazine ring), thiourea compounds and compounds having a mercapto group, hindered phenol-based compounds, salicylic acid derivative-based compounds, and hydrazide derivative-based compounds are mentioned. In particular, triazole-based compounds such as 1,2,4-triazole and benzotriazole, and tetrazole-based compounds such as 1H-tetrazole and benzotetrazole can be preferably used. In addition, an ion trapping agent that captures an anion such as a halogen ion can also be used. As other migration suppressing agents, the rust inhibitors described in paragraph 0094 of JP2013-015701A, the compounds described in paragraphs 0073 to 0076 of JP2009-283711A, the compounds described in paragraph 0052 of JP2011-059656A, the compounds described in paragraphs 0114, 0116, and 0118 of JP2012-194520A, and the like can be used. The following compounds can be mentioned as specific examples of the migration suppressing agent. In a case where the photosensitive resin composition has the migration suppressing agent, a content of the migration suppressing agent is preferably 0.01% to 5.0% by mass, more preferably 0.05% to 2.0% by mass, and even more preferably 0.1% to 1.0% by mass, with respect to a total solid content of the photosensitive resin composition.
For the migration suppressing agent, only one type may be used, or two or more types may be used. In a case where two or more types of migration suppressing agents are used, a total thereof is preferably within the above-mentioned range. Polymerization Inhibitor The photosensitive resin composition of the embodiment of the present invention preferably contains a polymerization inhibitor. As the polymerization inhibitor, for example, hydroquinone, p-methoxyphenol, di-tert-butyl-p-cresol, pyrogallol, p-tert-butylcatechol, 1,4-benzoquinone, diphenyl-p-benzoquinone, 4,4′-thiobis(3-methyl-6-tert-butylphenol), 2,2′-methylenebis(4-methyl-6-tert-butylphenol), N-nitroso-N-phenylhydroxylamine aluminum salt, phenothiazine, N-nitrosodiphenylamine, N-phenylnaphthylamine, ethylenediaminetetraacetic acid, 1,2-cyclohexanediaminetetraacetic acid, glycol ether diaminetetraacetic acid, 2,6-di-tert-butyl-4-methylphenol, 5-nitroso-8-hydroxyquinoline, 1-nitroso-2-naphthol, 2-nitroso-1-naphthol, 2-nitroso-5-(N-ethyl-N-sulfopropylamino)phenol, N-nitroso-N-(1-naphthyl)hydroxylamine ammonium salt, bis(4-hydroxy-3,5-tert-butyl)phenylmethane, and the like are suitably used. In addition, the polymerization inhibitors described in paragraph 0060 of JP2015-127817A and the compounds described in paragraphs 0031 to 0046 of WO2015/125469A can also be used. In addition, the following compounds can be used (Me is a methyl group). In a case where the photosensitive resin composition according to the embodiment of the present invention has a polymerization inhibitor, the content of the polymerization inhibitor is preferably 0.01% to 5% by mass with respect to the total solid content of the photosensitive resin composition according to the embodiment of the present invention. For the polymerization inhibitor, only one type may be used, or two or more types may be used.
In a case where two or more types of polymerization inhibitors are used, a total thereof is preferably within the above-mentioned range. Metal Adhesiveness Improving Agent The photosensitive resin composition according to the embodiment of the present invention preferably contains a metal adhesiveness improving agent for improving adhesiveness to a metal material used for electrodes, wirings, and the like. As the metal adhesiveness improving agent, a silane coupling agent and the like are mentioned. As examples of the silane coupling agent, the compounds described in paragraphs 0062 to 0073 of JP2014-191002A, the compounds described in paragraphs 0063 to 0071 of WO2011/080992A1, the compounds described in paragraphs 0060 and 0061 of JP2014-191252A, the compounds described in paragraphs 0045 to 0052 of JP2014-041264A, and the compounds described in paragraph 0055 of WO2014/097594A are mentioned. In addition, it is also preferable to use two or more types of the different silane coupling agents as described in paragraphs 0050 to 0058 of JP2011-128358A. In addition, as the silane coupling agent, the following compounds are also preferably used. In the following formulas, Et represents an ethyl group. In addition, as the metal adhesiveness improving agent, the compounds described in paragraphs 0046 to 0049 of JP2014-186186A, and the sulfide-based compounds described in paragraphs 0032 to 0043 of JP2013-072935A can also be used. The content of the metal adhesiveness improving agent is preferably 0.1 to 30 parts by mass, and more preferably 0.5 to 15 parts by mass with respect to 100 parts by mass of the heterocyclic ring-containing polymer precursor. In a case where the content is 0.1 parts by mass or higher, good adhesiveness between a cured film and a metal layer after a curing step is exhibited. In a case where the content is 30 parts by mass or lower, the cured film after the curing step exhibits good heat resistance and mechanical properties. 
For the metal adhesiveness improving agent, only one type may be used, or two or more types may be used. In a case where two or more types are used, a total thereof is preferably within the above-mentioned range. Base Generator The photosensitive resin composition according to the embodiment of the present invention may contain a base generator. The base generator may be a thermal-base generator or a photo-base generator, and preferably contains at least the photo-base generator. Thermal-Base Generator The type of the thermal-base generator is not particularly limited, and the thermal-base generator preferably contains at least one type selected from an acidic compound which generates a base in a case of being heated to 40° C. or higher, or an ammonium salt which has an anion having pKa1 of 0 to 4 and an ammonium cation. Herein, pKa1 is a logarithmic expression (−log10(Ka)) of the dissociation constant (Ka) of the first proton of the polyvalent acid. By blending such a compound, the cyclization reaction of the heterocyclic ring-containing polymer precursor can be carried out at a low temperature, and a composition with more excellent stability can be obtained. In addition, since the thermal-base generator does not generate a base in a case of not being heated, the cyclization of the heterocyclic ring-containing polymer precursor during storage can be suppressed even in the presence of the heterocyclic ring-containing polymer precursor, which leads to excellent storage stability. The thermal-base generator of the present invention includes at least one type selected from an acidic compound (A1) which generates a base in a case of being heated to 40° C. or higher, or an ammonium salt (A2) which has an ammonium cation and an anion having pKa1 of 0 to 4. The acidic compound (A1) and the ammonium salt (A2) generate a base in a case of being heated.
Thus, the base generated from these compounds makes it possible to promote a cyclization reaction of the heterocyclic ring-containing polymer precursor, and makes it possible to cause cyclization of the heterocyclic ring-containing polymer precursor to be carried out at a low temperature. In addition, since the cyclization of the heterocyclic ring-containing polymer precursor, which is cyclized and cured by a base, hardly progresses unless heated even in a case where these compounds and the precursor coexist, a heterocyclic ring-containing polymer precursor excellent in stability can be prepared. The acidic compound of the present specification means a compound whose pH, measured by a pH meter at 20° C., is 7 or lower in a case where 1 g of the compound is placed in a container, 50 mL of a mixed solution of deionized water and tetrahydrofuran (mass ratio: water/tetrahydrofuran = 1/4) is added to the compound, and the solution is stirred for an hour at room temperature. In the present invention, a base generation temperature of the acidic compound (A1) and the ammonium salt (A2) is preferably 40° C. or higher, and more preferably 120° C. to 200° C. An upper limit of the base generation temperature is preferably 190° C. or lower, more preferably 180° C. or lower, and even more preferably 165° C. or lower. A lower limit of the base generation temperature is preferably 130° C. or higher, and more preferably 135° C. or higher. In a case where the base generation temperature of the acidic compound (A1) and the ammonium salt (A2) is 120° C. or higher, since a base is hardly generated during storage, a heterocyclic ring-containing polymer precursor having excellent stability can be prepared. In a case where the base generation temperature of the acidic compound (A1) and the ammonium salt (A2) is 200° C. or lower, a cyclization temperature of the heterocyclic ring-containing polymer precursor can be decreased.
For example, the base generation temperature may be measured using differential scanning calorimetry by heating a compound to 250° C. at a rate of 5° C./min in a pressure-resistant capsule, reading a peak temperature of the exothermic peak having the lowest temperature, and taking the peak temperature as the base generation temperature. In the present invention, a base generated by the thermal-base generator is preferably a secondary amine or a tertiary amine, and more preferably a tertiary amine. Since the tertiary amine is highly basic, the cyclization temperature of the heterocyclic ring-containing polymer precursor can be further reduced. In addition, a boiling point of the base generated by the thermal-base generator is preferably 80° C. or higher, more preferably 100° C. or higher, and even more preferably 140° C. or higher. In addition, a molecular weight of the generated base is preferably 80 to 2,000. A lower limit thereof is more preferably 100 or higher. An upper limit thereof is more preferably 500 or lower. A value of the molecular weight is a theoretical value obtained from a structural formula. In the present invention, the acidic compound (A1) preferably contains one or more types selected from an ammonium salt and a compound represented by Formula (101) or (102) described later. In the present invention, the ammonium salt (A2) is preferably an acidic compound. The ammonium salt (A2) may be a compound containing an acidic compound which generates a base in a case of being heated to 40° C. or higher (preferably 120° C. to 200° C.), or may be a compound other than the acidic compound which generates a base in a case of being heated to 40° C. or higher (preferably 120° C. to 200° C.). Ammonium Salt In the present invention, the ammonium salt means a salt of an ammonium cation represented by Formula (101) or Formula (102) with an anion.
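The DSC reading described above (heat at 5° C./min, take the peak temperature of the lowest-temperature exothermic peak) reduces to a small peak search over a temperature/heat-flow trace. A sketch, assuming exothermic heat flow is recorded as positive values (function names and the sample data are illustrative, not from the source):

```python
def base_generation_temp(temps, heat_flow, threshold=0.0):
    """Return the temperature of the lowest-temperature exothermic peak:
    a local maximum of heat flow above the threshold. Returns None if
    no peak is found."""
    peaks = [
        temps[i]
        for i in range(1, len(temps) - 1)
        if heat_flow[i] > threshold
        and heat_flow[i] > heat_flow[i - 1]
        and heat_flow[i] > heat_flow[i + 1]
    ]
    return min(peaks) if peaks else None

# Illustrative trace with exothermic events near 150 and 210 degrees C;
# the base generation temperature is the lower of the two peak temperatures.
temps = [100, 125, 150, 175, 200, 210, 220, 250]
flow = [0.0, 0.2, 1.5, 0.3, 0.8, 2.0, 0.5, 0.1]
print(base_generation_temp(temps, flow))  # 150
```

A real DSC trace would of course be denser and noisier; a production version would smooth the signal before peak picking, but the selection rule (lowest-temperature exothermic peak) is the same.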
The anion may be bonded via a covalent bond to any portion of the ammonium cation, or may be present outside a molecule of the ammonium cation. The anion is preferably present outside a molecule of the ammonium cation. The expression of the anion being present outside the molecule of the ammonium cation refers to a case where the ammonium cation and the anion are not bonded via a covalent bond. Hereinafter, an anion outside a molecule of a cation moiety is also referred to as a counter anion.

In the formulas, R1 to R6 each independently represent a hydrogen atom or a hydrocarbon group, and R7 represents a hydrocarbon group. R1 and R2, R3 and R4, R5 and R6, and R5 and R7 may be bonded to each other to form a ring. The ammonium cation is preferably represented by any one of Formulas (Y1-1) to (Y1-5). In Formulas (Y1-1) to (Y1-5), R101 represents an n-valent organic group, and R1 and R7 have the same meanings as R1 and R7 in Formula (101) or Formula (102). In Formulas (Y1-1) to (Y1-4), Ar101 and Ar102 each independently represent an aryl group, n represents an integer of 1 or higher, and m represents an integer of 0 to 5.

In the present embodiment, the ammonium salt preferably has an anion having a pKa1 of 0 to 4 and an ammonium cation. An upper limit of the pKa1 of the anion is more preferably 3.5 or lower, and even more preferably 3.2 or lower. A lower limit thereof is preferably 0.5 or higher, and more preferably 1.0 or higher. In a case where the pKa1 of the anion is within the above-mentioned range, the heterocyclic ring-containing polymer precursor can be cyclized at a lower temperature, and stability of the composition can also be improved. In a case where the pKa1 is 4 or lower, the thermal-base generator exhibits good stability and generation of a base is suppressed in the absence of heating, so that the composition exhibits good stability.
In a case where the pKa1 is 0 or higher, the generated base is hardly neutralized, and the cyclization efficiency of the heterocyclic ring-containing polymer precursor or the like is good.

The anion is preferably one selected from a carboxylate anion, a phenol anion, a phosphate anion, and a sulfate anion, and a carboxylate anion is more preferable for the reason that both salt stability and thermal decomposability are achieved. That is, the ammonium salt is more preferably a salt of an ammonium cation with a carboxylate anion. The carboxylate anion is preferably an anion of a divalent or higher carboxylic acid having two or more carboxyl groups, and more preferably an anion of a divalent carboxylic acid. According to the present embodiment, it is possible to use a thermal-base generator which can further improve stability, curability, and developability of the composition. In particular, by using an anion of a divalent carboxylic acid, stability, curability, and developability of the composition can be further improved.

In the present embodiment, the carboxylate anion is preferably an anion of a carboxylic acid having a pKa1 of 4 or lower. The pKa1 is more preferably 3.5 or lower, and even more preferably 3.2 or lower. According to this embodiment, stability of the composition can be further improved. Here, the pKa1 represents the logarithm of the reciprocal of the dissociation constant of the first proton of an acid, and reference can be made to the values described in Determination of Organic Structures by Physical Methods (written by Brown, H. C., McDaniel, D. H., Hafliger, O., Nachod, F. C.; edited by Braude, E. A., Nachod, F. C.; Academic Press, New York, 1955) or Data for Biochemical Research (written by Dawson, R. M. C. et al.; Oxford, Clarendon Press, 1959). For compounds which are not described in these documents, values calculated from structural formulas using the software ACD/pKa (manufactured by ACD/Labs) are used.
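The pKa1 definition above is the relationship pKa1 = −log10(Ka1), where Ka1 is the first dissociation constant. A short arithmetic sketch, with a hypothetical Ka value not taken from the specification, illustrates how an anion is checked against the preferred range:

```python
import math

def pka(ka):
    """pKa is the base-10 logarithm of the reciprocal of the dissociation constant."""
    return -math.log10(ka)

# A hypothetical first dissociation constant Ka1 = 1e-3 gives pKa1 = 3.0,
# which falls inside the preferred anion range of 0 to 4 (upper limit
# more preferably 3.5 or lower, even more preferably 3.2 or lower).
pka1 = pka(1e-3)
print(pka1)                # 3.0
print(0 <= pka1 <= 4)      # True
print(pka1 <= 3.2)         # True (also within the even-more-preferred limit)
```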
The carboxylate anion is preferably represented by Formula (X1). In Formula (X1), EWG represents an electron withdrawing group. In the present embodiment, the electron withdrawing group means a group in which the Hammett substituent constant σm shows a positive value. Here, σm is described in detail in the review by TSUNO Yuho, Journal of the Society of Synthetic Organic Chemistry, Japan, Vol. 23, No. 8 (1965), pp. 631 to 642. The electron withdrawing group in the present embodiment is not limited to the substituents described in the document. As examples of a substituent having a positive value of σm, a CF3 group (σm=0.43), a CF3CO group (σm=0.63), an HC≡C group (σm=0.21), a CH2═CH group (σm=0.06), an Ac group (σm=0.38), a MeOCO group (σm=0.37), a MeCOCH═CH group (σm=0.21), a PhCO group (σm=0.34), and an H2NCOCH2 group (σm=0.06) are mentioned. Me represents a methyl group, Ac represents an acetyl group, and Ph represents a phenyl group (hereinafter the same applies).

EWG is preferably a group represented by Formulas (EWG-1) to (EWG-6). In Formulas (EWG-1) to (EWG-6), Rx1 to Rx3 each independently represent a hydrogen atom, an alkyl group, an alkenyl group, an aryl group, a hydroxyl group, or a carboxyl group, and Ar represents an aromatic group.

In the present embodiment, the carboxylate anion is preferably represented by Formula (XA). In Formula (XA), L10 represents a single bond, or a divalent linking group selected from an alkylene group, an alkenylene group, an aromatic group, —NRX—, and a combination thereof, and RX represents a hydrogen atom, an alkyl group, an alkenyl group, or an aryl group. As specific examples of the carboxylate anion, a maleate anion, a phthalate anion, an N-phenyliminodiacetate anion, and an oxalate anion are mentioned. For details of the thermal-base generator, reference can be made to the description in paragraphs 0021 to 0077 of JP2016-027357A, the content of which is incorporated herein.
For the thermal-base generator, the following compounds are exemplified.

In a case where the thermal-base generator is used, the content of the thermal-base generator in the composition is preferably 0.1% to 50% by mass with respect to the total solid content of the composition. A lower limit thereof is more preferably 0.5% by mass or higher, and still more preferably 1% by mass or higher. An upper limit thereof is more preferably 30% by mass or lower, and even more preferably 20% by mass or lower. For the thermal-base generator, one type or two or more types may be used. In a case where two or more types are used, a total amount is preferably within the above-mentioned range.

Photo-Base Generator

The photosensitive resin composition used in the present invention may contain a photo-base generator. The photo-base generator generates a base upon exposure and does not show activity under normal conditions of normal temperature and normal pressure. The photo-base generator is not particularly limited as long as it generates a base (basic substance) in a case where irradiation with electromagnetic waves and heating are performed as external stimuli. Since the base generated by the exposure acts as a catalyst for curing the heterocyclic ring-containing polymer precursor by heating, the photo-base generator can be suitably used in a case where negative tone development treatment is performed.

In the present invention, a known photo-base generator can be used. For example, as described in M. Shirai, and M. Tsunooka, Prog. Polym. Sci., 21, 1 (1996); Masahiro Tsunooka, Polymer Processing, 46, 2 (1997); C. Kutal, Coord. Chem. Rev., 211, 353 (2001); Y. Kaneko, A. Sarker, and D. Neckers, Chem. Mater., 11, 170 (1999); H. Tachi, M. Shirai, and M. Tsunooka, J. Photopolym. Sci. Technol., 13, 153 (2000); M. Winkle, and K. Graziano, J. Photopolym. Sci. Technol., 3, 419 (1990); M. Tsunooka, H. Tachi, and S. Yoshitaka, J. Photopolym. Sci. Technol., 9, 13 (1996); K. Suyama, H. Araki, M. Shirai, J.
Photopolym. Sci. Technol., 19, 81 (2006), examples thereof include transition metal compound complexes; compounds having a structure such as an ammonium salt; ionic compounds in which the base component forms a salt so as to be neutralized, such as compounds in which an amidine moiety is made latent by forming a salt with a carboxylic acid; and non-ionic compounds in which the base component is made latent through a urethane bond or an oxime bond of an acyl compound, such as carbamate derivatives and oxime ester derivatives.

The basic substance generated from the photo-base generator is not particularly limited, and examples thereof include compounds having an amino group, particularly monoamines, polyamines such as diamines, and amidines. The generated basic substance is preferably a compound having a more highly basic amino group. This is because the catalytic action on the dehydration condensation reaction and the like in the imidization of the heterocyclic ring-containing polymer precursor is strong, so that the addition of a smaller amount makes it possible to exhibit the catalytic effect on the dehydration condensation reaction and the like at a lower temperature. That is, since the catalytic effect of the generated basic substance is large, the apparent sensitivity as a negative type photosensitive resin composition is improved. From the viewpoint of the above-described catalytic effect, amidines and aliphatic amines are preferable. The photo-base generator used in the present invention is preferably a compound which contains an aromatic ring and in which the generated basic substance has an amino group.
As the photo-base generator according to the present invention, for example, a photo-base generator having a cinnamic acid amide structure as disclosed in JP2009-080452A and WO2009/123122A, a photo-base generator having a carbamate structure as disclosed in JP2006-189591A and JP2008-247747A, and a photo-base generator having an oxime structure or a carbamoyl oxime structure as disclosed in JP2007-249013A and JP2008-003581A are mentioned, but the photo-base generator is not limited thereto. In addition, the structure of a known photo-base generator may be used. In addition, as the photo-base generator, the compounds described in paragraphs 0185 to 0188, 0199 to 0200, and 0202 of JP2012-093746A, the compounds described in paragraphs 0022 to 0069 of JP2013-194205A, the compounds described in paragraphs 0026 to 0074 of JP2013-204019A, and the compounds described in paragraph 0052 of WO2010/064631A are mentioned as examples.

As commercially available products of photo-base generators, WPBG-266, WPBG-300, WPGB-345, WPGB-140, WPBG-165, WPBG-027, PBG-018, WPGB-015, WPBG-041, WPGB-172, WPGB-174, WPBG-166, WPGB-158, WPGB-025, WPGB-168, WPGB-167, and WPBG-082 (manufactured by Wako Pure Chemical Industries, Co., Ltd.) can also be used. Additionally, the following compounds are exemplified as the photo-base generator.

In a case where the photo-base generator is used, the content of the photo-base generator in the composition is preferably 0.1% to 50% by mass with respect to the total solid content of the composition. A lower limit thereof is more preferably 0.5% by mass or higher, and still more preferably 1% by mass or higher. An upper limit thereof is more preferably 30% by mass or lower, and still more preferably 20% by mass or lower. For the photo-base generator, one type or two or more types may be used. In a case where two or more types are used, a total amount is preferably within the above-mentioned range.
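Several components above share the same nested content limits: a broad 0.1% to 50% by mass range relative to the total solid content, with tighter preferred bounds inside it. Purely as an illustration, with hypothetical masses and a function name not taken from the specification, such a check could be sketched as:

```python
def content_pct(component_mass_g, total_solids_g):
    """Content of a component expressed as % by mass of the total solid content."""
    return 100.0 * component_mass_g / total_solids_g

# Hypothetical batch: 0.8 g of a base generator in 40 g of total solids -> 2.0%
pct = content_pct(0.8, 40.0)
print(pct)                 # 2.0
print(0.1 <= pct <= 50.0)  # True: within the broad preferred range
print(1.0 <= pct <= 20.0)  # True: also within the narrowest preferred range
```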
Other Additives

Various additives, for example, a thermal-acid generator, a sensitizing dye, a chain transfer agent, a surfactant, a higher fatty acid derivative, an inorganic particle, a curing agent, a curing catalyst, a filler, an antioxidant, an ultraviolet absorbent, and an aggregation inhibitor can be blended, if necessary, with the photosensitive resin composition according to the embodiment of the present invention to the extent that the effect of the present invention is not impaired. In a case where these additives are blended, a total blending amount thereof is preferably 3% by mass or lower of the solid content of the composition.

Thermal-Acid Generator

The photosensitive resin composition according to the embodiment of the present invention may contain a thermal-acid generator. The thermal-acid generator generates an acid upon heating and promotes cyclization of the heterocyclic ring-containing polymer precursor, so that the mechanical properties of the cured film are further improved. For the thermal-acid generator, the compounds described in paragraph 0059 of JP2013-167742A are mentioned. A content of the thermal-acid generator is preferably 0.01 parts by mass or higher, and more preferably 0.1 parts by mass or higher, with respect to 100 parts by mass of the heterocyclic ring-containing polymer precursor. In a case where the thermal-acid generator is contained in an amount of 0.01 parts by mass or higher, the crosslinking reaction and cyclization of the heterocyclic ring-containing polymer precursor are promoted, so that the mechanical properties and chemical resistance of the cured film can be further improved. In addition, the content of the thermal-acid generator is preferably 20 parts by mass or lower, more preferably 15 parts by mass or lower, and particularly preferably 10 parts by mass or lower, from the viewpoint of electrical insulating properties of the cured film. For the thermal-acid generator, only one type may be used, or two or more types may be used.
In a case where two or more types are used, a total amount is preferably within the above-mentioned range.

Sensitizing Dye

The photosensitive resin composition according to the embodiment of the present invention may contain a sensitizing dye. The sensitizing dye absorbs a specific actinic radiation and enters an electronically excited state. The sensitizing dye in the electronically excited state comes into contact with a thermal-base generator, a thermal-radical polymerization initiator, a photo-radical polymerization initiator, or the like, to cause actions such as electron transfer, energy transfer, and heat generation. As a result, the thermal-base generator, the thermal-radical polymerization initiator, or the photo-radical polymerization initiator undergoes a chemical change and decomposes, so that radicals, acids, or bases are generated. For details of the sensitizing dye, reference can be made to the description in paragraphs 0161 to 0163 of JP2016-027357A, the content of which is incorporated herein. In a case where the photosensitive resin composition according to the embodiment of the present invention contains a sensitizing dye, the content of the sensitizing dye is preferably 0.01% to 20% by mass, more preferably 0.1% to 15% by mass, and still more preferably 0.5% to 10% by mass, with respect to the total solid content of the photosensitive resin composition according to the embodiment of the present invention. For the sensitizing dye, one type may be used alone, or two or more types may be used in combination.

Chain Transfer Agent

The photosensitive resin composition according to the embodiment of the present invention may contain a chain transfer agent. The chain transfer agent is defined, for example, in Polymer Dictionary, 3rd Edition, pp. 683 to 684 (edited by The Society of Polymer Science, 2005). As the chain transfer agent, for example, a group of compounds having SH, PH, SiH, or GeH in the molecule is used.
These can donate a hydrogen atom to a radical of low activity to generate a radical, or can be oxidized and then deprotonated to generate a radical. In particular, thiol compounds (for example, 2-mercaptobenzimidazoles, 2-mercaptobenzothiazoles, 2-mercaptobenzoxazoles, 3-mercaptotriazoles, and 5-mercaptotetrazoles) can be preferably used. In a case where the photosensitive resin composition according to the embodiment of the present invention contains the chain transfer agent, the content of the chain transfer agent is preferably 0.01 to 20 parts by mass, more preferably 1 to 10 parts by mass, and even more preferably 1 to 5 parts by mass, with respect to 100 parts by mass of the total solid content of the photosensitive resin composition according to the embodiment of the present invention. For the chain transfer agent, only one type may be used, or two or more types may be used. In a case where two or more types of chain transfer agents are used, a total thereof is preferably within the above-mentioned range.

Surfactant

From the viewpoint of further improving the coating property, various surfactants may be added to the photosensitive resin composition according to the embodiment of the present invention. As the surfactant, various surfactants such as a fluorine-based surfactant, a nonionic surfactant, a cationic surfactant, an anionic surfactant, and a silicone-based surfactant can be used. In addition, the following surfactants are preferably used. In a case where the photosensitive resin composition according to the embodiment of the present invention contains a surfactant, the content of the surfactant is preferably 0.001% to 2.0% by mass, and more preferably 0.005% to 1.0% by mass, with respect to the total solid content of the photosensitive resin composition according to the embodiment of the present invention. For the surfactant, only one type may be used, or two or more types may be used.
In a case where two or more types of surfactants are used, a total thereof is preferably within the above-mentioned range.

Higher Fatty Acid Derivative

In the photosensitive resin composition according to the embodiment of the present invention, in order to prevent polymerization inhibition due to oxygen, a higher fatty acid derivative such as behenic acid or behenic acid amide may be added so that it is localized on the surface of the composition in the course of drying after coating. In a case where the photosensitive resin composition according to the embodiment of the present invention contains a higher fatty acid derivative, the content of the higher fatty acid derivative is preferably 0.1% to 10% by mass with respect to the total solid content of the photosensitive resin composition according to the embodiment of the present invention. For the higher fatty acid derivative, only one type may be used, or two or more types may be used. In a case where two or more higher fatty acid derivatives are used, a total thereof is preferably within the above-mentioned range.

Restriction on Other Substances to be Contained

A water content of the photosensitive resin composition according to the embodiment of the present invention is preferably less than 5% by mass, more preferably less than 1% by mass, and particularly preferably less than 0.6% by mass, from the viewpoint of the properties of a coated surface. From the viewpoint of insulating properties, the metal content of the photosensitive resin composition according to the embodiment of the present invention is preferably less than 5 parts per million (ppm) by mass, more preferably less than 1 ppm by mass, and particularly preferably less than 0.5 ppm by mass. As the metal, sodium, potassium, magnesium, calcium, iron, chromium, nickel, and the like are mentioned. In a case where a plurality of metals are contained, a total of these metals is preferably within the above-mentioned range.
In addition, as methods of reducing metal impurities which are unintentionally contained in the photosensitive resin composition according to the embodiment of the present invention, a method of selecting raw materials having a low metal content as the raw materials constituting the photosensitive resin composition according to the embodiment of the present invention, a method of filtering the raw materials constituting the photosensitive resin composition according to the embodiment of the present invention, and a method of performing distillation under a condition in which the inside of the device is lined with polytetrafluoroethylene or the like to suppress contamination as much as possible can be mentioned.

In the photosensitive resin composition according to the embodiment of the present invention, a content of halogen atoms is preferably less than 500 ppm by mass, more preferably less than 300 ppm by mass, and particularly preferably less than 200 ppm by mass, from the viewpoint of wiring corrosiveness. Among these, in a case where the halogen is present in a halogen ion state, the content thereof is preferably less than 5 ppm by mass, more preferably less than 1 ppm by mass, and even more preferably less than 0.5 ppm by mass. As the halogen atom, a chlorine atom and a bromine atom are mentioned. It is preferable that each of the chlorine atom and the bromine atom, or a total of the chlorine ion and the bromine ion, is within the above-mentioned range.

As a storage container of the photosensitive resin composition according to the embodiment of the present invention, a conventionally known storage container can be used. In addition, as the storage container, for the purpose of suppressing incorporation of impurities into the raw materials and the composition, a multilayer bottle in which the inner wall of the container has a six-layer structure composed of six types of resin, and a bottle having a seven-layer structure composed of six types of resin, are preferably used.
As such a container, for example, the container described in JP2015-123351A can be mentioned.

Preparation of Composition

The photosensitive resin composition according to the embodiment of the present invention can be prepared by mixing the above-mentioned components. The mixing method is not particularly limited, and mixing can be carried out by methods known in the related art. In addition, for the purpose of removing foreign substances such as dust and fine particles in the composition, it is preferable to carry out filtration using a filter. The filter pore size is preferably 1 μm or smaller, more preferably 0.5 μm or smaller, and even more preferably 0.1 μm or smaller. A material of the filter is preferably polytetrafluoroethylene, polyethylene, or nylon. As the filter, a filter which has been washed with an organic solvent in advance may be used. In the filtration step using a filter, a plurality of types of filters may be connected in series or in parallel and used. In a case where a plurality of types of filters are used, filters having different pore sizes and/or different materials may be used in combination. In addition, various materials may be filtered a plurality of times. In a case of being filtered a plurality of times, circulation filtration may be used. In addition, filtration may be carried out under pressure. In a case where filtration is carried out under pressure, the pressure is preferably 0.05 MPa to 0.3 MPa. In addition to filtration using a filter, impurity removal treatment using an adsorbing material may be carried out. The filtration using a filter and the impurity removal treatment using an adsorbing material may be combined. As the adsorbing material, a known adsorbing material can be used; for example, an inorganic adsorbing material such as silica gel or zeolite, and an organic adsorbing material such as activated carbon are mentioned.
Cured Film, Semiconductor Device, Method for Producing Cured Film, Method for Producing Laminate, and Method for Producing Semiconductor Device

Next, a cured film, a semiconductor device, a method for producing a cured film, a method for producing a laminate, and a method for producing a semiconductor device according to the embodiment of the present invention will be described. The cured film according to the embodiment of the present invention is formed by curing the photosensitive resin composition according to the embodiment of the present invention. A film thickness of the cured film according to the embodiment of the present invention can be, for example, 0.5 μm or more, or 1 μm or more. As an upper limit value, the film thickness of the cured film according to the embodiment of the present invention can be, for example, 100 μm or less, or 30 μm or less.

Two or more layers of the cured film according to the embodiment of the present invention may be laminated to form a laminate. A laminate having two or more layers of the cured film according to the embodiment of the present invention preferably has a metal layer between the cured films. Such a metal layer is preferably used as a metal wiring such as a re-distribution layer.

As fields to which the cured film according to the embodiment of the present invention can be applied, an insulating film of a semiconductor device, an interlayer insulating film for a re-distribution layer, and the like are mentioned. In particular, due to its good resolution properties, the cured film of the present invention can be preferably used for an interlayer insulating film for a re-distribution layer in a three-dimensional mounting device. In addition, the cured film according to the embodiment of the present invention can also be used for photoresists for electronics, galvanic (electrolytic) resists, etching resists, solder top resists, and the like.
In addition, the cured film according to the embodiment of the present invention can also be used for the production of board surfaces such as an offset board surface or a screen board surface, for the etching of molded parts, for the production of protective lacquers and dielectric layers in electronics, in particular microelectronics, and the like.

The method for producing a cured film according to the embodiment of the present invention includes using the photosensitive resin composition according to the embodiment of the present invention. Preferably, the method for producing a cured film includes a photosensitive resin composition layer forming step of applying the photosensitive resin composition according to the embodiment of the present invention to a substrate to form a layered shape, an exposure step of exposing the photosensitive resin composition layer, and a development treatment step of subjecting the exposed photosensitive resin composition layer (resin layer) to a development treatment. The photosensitive resin composition according to the embodiment of the present invention is preferably used in the case of performing negative tone development.

The method for producing a laminate according to the embodiment of the present invention includes the method for producing a cured film according to the embodiment of the present invention. The method for producing a laminate according to the embodiment of the present invention preferably includes forming a cured film in accordance with the method for producing a cured film according to the embodiment of the present invention, and then performing the photosensitive resin composition layer forming step, the exposure step, and the development treatment step again, in this order. In particular, it is preferable that the photosensitive resin composition layer forming step, the exposure step, and the development treatment step are further carried out, in this order, 2 to 5 times (that is, 3 to 6 times in total).
By laminating the cured film in this manner, a laminate can be obtained. In the present invention, in particular, it is preferable to provide a metal layer on a portion which has been removed by development, after the cured film is provided and developed. The details thereof will be described below.

Photosensitive Resin Composition Layer Forming Step

The method for producing a laminate according to the embodiment of the present invention includes a photosensitive resin composition layer forming step of applying the photosensitive resin composition to a substrate to form a layered shape. The type of substrate can be appropriately determined depending on the application; examples thereof include semiconductor production substrates such as silicon, silicon nitride, polysilicon, silicon oxide, and amorphous silicon, as well as quartz, glass, an optical film, a ceramic material, a vapor-deposited film, a magnetic film, metal substrates such as Ni, Cu, Cr, or Fe, paper, spin on glass (SOG), a thin film transistor (TFT) array substrate, and an electrode plate of a plasma display panel (PDP), but the substrate is not particularly limited thereto. In the present invention, in particular, a semiconductor production substrate is preferable, and a silicon substrate is more preferable. In a case where the photosensitive resin composition layer is formed on the surface of a resin layer or on the surface of a metal layer, the resin layer or the metal layer serves as the substrate.

As a means to apply the photosensitive resin composition to the substrate, coating is preferable. Specifically, as means for application, a dip coating method, an air knife coating method, a curtain coating method, a wire bar coating method, a gravure coating method, an extrusion coating method, a spray coating method, a spin coating method, a slit coating method, and an inkjet method are exemplified.
From the viewpoint of uniformity of the thickness of the photosensitive resin composition layer, the spin coating method, the slit coating method, the spray coating method, and the inkjet method are more preferable. A resin layer having a desired thickness can be obtained by appropriately adjusting the concentration of the solid content and the application conditions according to the method. In addition, the coating method can be appropriately selected depending on the shape of the substrate. In a case where a circular substrate such as a wafer is used, the spin coating method, the spray coating method, the inkjet method, and the like are preferable, and in a case where a rectangular substrate is used, the slit coating method, the spray coating method, the inkjet method, and the like are preferable. For example, in the spin coating method, the composition can be applied at a rotational speed of 500 to 2000 rpm for about 10 seconds to 1 minute.

Drying Step

The method for producing a laminate according to the embodiment of the present invention may include a step of drying to remove the solvent after forming the photosensitive resin composition layer. A preferred drying temperature is 50° C. to 150° C., more preferably 70° C. to 130° C., and even more preferably 90° C. to 110° C. A drying time is, for example, 30 seconds to 20 minutes, preferably 1 minute to 10 minutes, and more preferably 3 minutes to 7 minutes.

Exposure Step

The method for producing the laminate according to the embodiment of the present invention includes an exposure step of exposing the photosensitive resin composition layer. The amount of exposure is not particularly limited as long as the photosensitive resin composition can be cured; for example, irradiation with 100 to 10000 mJ/cm2 is preferable, and irradiation with 200 to 8000 mJ/cm2 is more preferable, in terms of exposure energy converted at a wavelength of 365 nm. The exposure wavelength can be appropriately determined in the range of 190 to 1000 nm, and is preferably 240 to 550 nm.
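The exposure amounts above are energy densities (mJ/cm2), so for a source of known irradiance the required exposure time follows from dose = irradiance × time. The sketch below is illustrative only; the 50 mW/cm2 irradiance and the function name are assumptions introduced here, not values from the specification.

```python
def exposure_time_s(target_dose_mj_cm2, irradiance_mw_cm2):
    """Seconds needed to reach a target dose: 1 mW/cm^2 delivers 1 mJ/cm^2 per second."""
    return target_dose_mj_cm2 / irradiance_mw_cm2

# Hypothetical 365 nm source at 50 mW/cm^2 delivering a 500 mJ/cm^2 dose:
print(exposure_time_s(500, 50))   # 10.0 seconds
# The broad preferred window of 100 to 10000 mJ/cm^2 then spans 2 to 200 seconds:
print(exposure_time_s(100, 50), exposure_time_s(10000, 50))  # 2.0 200.0
```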
Development Treatment Step

The method for producing the laminate according to the embodiment of the present invention includes a development treatment step of performing negative tone development treatment on the exposed photosensitive resin composition layer. By performing negative tone development, an unexposed portion (non-exposed portion) is removed. The development method is not particularly limited as long as it can form a desired pattern, and, for example, development methods such as paddle development, spray development, immersion development, and ultrasonic development can be employed.

Development is performed using a developer. The developer can be used without particular limitation as long as the unexposed portion (non-exposed portion) is removed. The developer preferably contains an organic solvent. As the organic solvent, for example, esters such as ethyl acetate, n-butyl acetate, amyl formate, isoamyl acetate, isobutyl acetate, butyl propionate, isopropyl butyrate, ethyl butyrate, butyl butyrate, methyl lactate, ethyl lactate, γ-butyrolactone, ε-caprolactone, δ-valerolactone, alkyl alkyloxyacetates (example: methyl alkyloxyacetate, ethyl alkyloxyacetate, butyl alkyloxyacetate (for example, methyl methoxyacetate, ethyl methoxyacetate, butyl methoxyacetate, methyl ethoxyacetate, ethyl ethoxyacetate, and the like)), 3-alkyloxypropionic acid alkyl esters (example: methyl 3-alkyloxypropionate, ethyl 3-alkyloxypropionate, and the like (for example, methyl 3-methoxypropionate, ethyl 3-methoxypropionate, methyl 3-ethoxypropionate, ethyl 3-ethoxypropionate, and the like)), 2-alkyloxypropionic acid alkyl esters (example: methyl 2-alkyloxypropionate, ethyl 2-alkyloxypropionate, propyl 2-alkyloxypropionate, and the like (for example, methyl 2-methoxypropionate, ethyl 2-methoxypropionate, propyl 2-methoxypropionate, methyl 2-ethoxypropionate, and ethyl 2-ethoxypropionate)), methyl 2-alkyloxy-2-methylpropionate and ethyl 2-alkyloxy-2-methylpropionate (for example, methyl
2-methoxy-2-methylpropionate, ethyl 2-ethoxy-2-methylpropionate, and the like), methyl pyruvate, ethyl pyruvate, propyl pyruvate, methyl acetoacetate, ethyl acetoacetate, methyl 2-oxobutanoate, ethyl 2-oxobutanoate, and the like, and ethers, for example, such as diethylene glycol dimethyl ether, tetrahydrofuran, ethylene glycol monomethyl ether, ethylene glycol monoethyl ether, methyl cellosolve acetate, ethyl cellosolve acetate, diethylene glycol monomethyl ether, diethylene glycol monoethyl ether, diethylene glycol monobutyl ether, propylene glycol monomethyl ether, propylene glycol monomethyl ether acetate, propylene glycol monoethyl ether acetate, propylene glycol monopropyl ether acetate, and the like, and ketones, for example, such as methyl ethyl ketone, cyclohexanone, cyclopentanone, 2-heptanone, 3-heptanone, N-methyl-2-pyrrolidone, and the like, and aromatic hydrocarbons, for example, such as toluene, xylene, anisole, limonene, and the like, and sulfoxides such as dimethyl sulfoxide are suitably mentioned. In the present invention, cyclopentanone and γ-butyrolactone are particularly preferable, and cyclopentanone is more preferable. The development time is preferably 10 seconds to 5 minutes. The temperature at the time of development is not particularly limited, and the development can usually be performed at 20° C. to 40° C. After treating with the developer, a rinsing may be further performed. The rinsing is preferably performed with a solvent different from the developer. For example, the rinsing can be performed using the solvent contained in the photosensitive resin composition. The rinsing time is preferably 5 seconds to 1 minute. Heating Step The method for producing the laminate according to the embodiment of the present invention preferably includes a heating step. In the heating step, a cyclization reaction of the heterocyclic ring-containing polymer precursor proceeds. 
In addition, in a case where the photosensitive resin composition according to the embodiment of the present invention contains a radically polymerizable compound, curing of the radically polymerizable compound that has not yet reacted also proceeds. The heating temperature (maximum heating temperature) is preferably 50° C. to 450° C., more preferably 140° C. to 400° C., and even more preferably 160° C. to 350° C.

The heating is preferably performed at a temperature elevation rate of 1 to 12° C./min from the temperature at the start of heating to the maximum heating temperature, more preferably 2 to 10° C./min, and even more preferably 3 to 10° C./min. By setting the temperature elevation rate to 2° C./min or higher, excessive volatilization of the amine can be prevented while securing productivity, and by setting the temperature elevation rate to 12° C./min or lower, the residual stress of the cured film can be relieved.

The temperature at the start of heating is preferably 20° C. to 150° C., more preferably 20° C. to 130° C., and even more preferably 25° C. to 120° C. The temperature at the start of heating refers to the temperature at which the step of heating to the maximum heating temperature is started. For example, in a case where the photosensitive resin composition is applied on a substrate and then dried, the temperature at the start of heating is the temperature after drying, and, for example, it is preferable to gradually raise the temperature from a temperature lower than the boiling point of the solvent contained in the photosensitive resin composition by 30° C. to 200° C.

The heating time (heating time at the maximum heating temperature) is preferably 10 to 360 minutes, more preferably 20 to 300 minutes, and particularly preferably 30 to 240 minutes.

Particularly, in the case of forming a multilayer laminate, the heating is preferably performed at 180° C. to 320° C., and more preferably 180° C. to 260° C., from the viewpoint of adhesiveness between the cured films.
Although the reason is not clear, it is considered that, at this temperature, the ethynyl groups of the heterocyclic ring-containing polymer precursor between layers mutually undergo a crosslinking reaction.

The heating may be performed stepwise. As an example, a pretreatment step may be performed in which the temperature is raised from 25° C. to 180° C. at 3° C./min, held at 180° C. for 60 minutes, raised from 180° C. to 200° C. at 2° C./min, and held at 200° C. for 120 minutes. The heating temperature in the pretreatment step is preferably 100° C. to 200° C., more preferably 110° C. to 190° C., and even more preferably 120° C. to 185° C.

In the pretreatment step, it is also preferable to perform the treatment while irradiating with ultraviolet rays as described in U.S. Pat. No. 9,159,547B. Such a pretreatment step makes it possible to improve the properties of the film. The pretreatment step may be performed for a short time of about 10 seconds to 2 hours, and more preferably 15 seconds to 30 minutes. The pretreatment may be performed in two or more steps; for example, pretreatment step 1 may be performed in the range of 100° C. to 150° C., and then pretreatment step 2 may be performed in the range of 150° C. to 200° C.

Furthermore, cooling may be performed after the heating, and the cooling rate in this case is preferably 1 to 5° C./min.

It is preferable that the heating step is performed in an atmosphere of low oxygen concentration by flowing an inert gas such as nitrogen, helium, or argon, from the viewpoint of preventing the decomposition of the heterocyclic ring-containing polymer precursor. The oxygen concentration is preferably 50 ppm (volume ratio) or lower, and more preferably 20 ppm (volume ratio) or lower.
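The total duration of a stepwise heating profile like the one above follows directly from the stated parameters: each ramp contributes (ΔT / rate) minutes and each hold contributes its dwell time. A minimal sketch of that bookkeeping, using the example profile from the text:

```python
# Total time of a stepwise heating profile: each ramp contributes
# (T_end - T_start) / rate minutes, each hold contributes its dwell time.
# The profile below is the example from the text: 25->180 C at 3 C/min,
# hold 60 min, 180->200 C at 2 C/min, hold 120 min.

def ramp_minutes(t_start_c: float, t_end_c: float, rate_c_per_min: float) -> float:
    return (t_end_c - t_start_c) / rate_c_per_min

def profile_minutes(segments) -> float:
    """segments: list of ('ramp', t0, t1, rate) or ('hold', minutes) tuples."""
    total = 0.0
    for seg in segments:
        if seg[0] == "ramp":
            _, t0, t1, rate = seg
            total += ramp_minutes(t0, t1, rate)
        else:
            total += seg[1]
    return total

example = [("ramp", 25, 180, 3), ("hold", 60), ("ramp", 180, 200, 2), ("hold", 120)]
total = profile_minutes(example)  # ~241.7 minutes overall
```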
Metal Layer Forming Step

It is preferable that the method for producing the laminate according to the embodiment of the present invention includes a metal layer forming step of forming a metal layer on the surface of the photosensitive resin composition layer after the development treatment. The metal type of the metal layer is not particularly limited, and existing metal types can be used; copper, aluminum, nickel, vanadium, titanium, chromium, cobalt, gold, and tungsten are exemplified, copper and aluminum are more preferable, and copper is even more preferable.

The method of forming the metal layer is not particularly limited, and an existing method can be applied. For example, the methods disclosed in JP2007-157879A, JP2001-521288A, JP2004-214501A, and JP2004-101850A can be used. For example, photolithography, lift-off, electrolytic plating, electroless plating, etching, printing, and methods combining these may be considered. More specifically, a patterning method combining sputtering, photolithography, and etching, and a patterning method combining photolithography and electrolytic plating may be mentioned.

The thickness of the metal layer is preferably 0.1 to 50 μm, and more preferably 1 to 10 μm, at the thickest part.

Laminating Step

The production method according to the embodiment of the present invention preferably further includes a laminating step. The laminating step is a series of steps including performing the photosensitive resin composition layer forming step, the exposure step, and the development treatment step in the above order. It goes without saying that the laminating step may further include the above-mentioned drying step, heating step, and the like. In a case where the laminating step is performed again after a laminating step, a surface activation treatment step may be further performed after the exposure step or the metal layer forming step. Plasma treatment is exemplified as the surface activation treatment.
The laminating step is preferably performed 2 to 5 times, and more preferably 3 to 5 times. For example, a configuration having 3 to 7 resin layers, such as a resin layer/a metal layer/a resin layer/a metal layer/a resin layer/a metal layer, is preferable, and a configuration having 3 to 5 resin layers is more preferable. That is, particularly in the present invention, it is preferable that the photosensitive resin composition layer forming step, the exposure step, and the development treatment step are further performed in the above-mentioned order so as to cover the metal layer, after the metal layer is provided. By alternately performing the laminating step of laminating the photosensitive resin composition layer (resin layer) and the metal layer forming step, the photosensitive resin composition layer (resin layer) and the metal layer can be alternately laminated.

In the present invention, a semiconductor device having the cured film or the laminate according to the embodiment of the present invention is also disclosed. As a specific example of a semiconductor device in which the photosensitive resin composition according to the embodiment of the present invention is used for forming an interlayer insulating film for a re-distribution layer, reference can be made to the description in paragraphs 0213 to 0218 and FIG. 1 of JP2016-027357A, the content of which is incorporated herein.

EXAMPLES

Hereinafter, the present invention will be described more specifically with reference to examples. Materials, amounts used, proportions, treatment details, treatment procedures, and the like shown in the following examples can be appropriately changed without departing from the gist of the present invention. Accordingly, the scope of the present invention is not limited to the following specific examples. “Parts” and “%” are on a mass basis unless otherwise stated.
Synthesis of Heterocyclic Ring-Containing Polymer Precursor Composition (Photosensitive Resin Composition)

Synthesis Example 1

Synthesis of polyimide precursor A-1 from 4,4′-oxydiphthalic acid dianhydride, 2-hydroxyethyl methacrylate, and the hydroxyl group-containing diamine (a) shown below

21.2 g of 4,4′-oxydiphthalic acid dianhydride, 18.0 g of 2-hydroxyethyl methacrylate, 23.9 g of pyridine, and 250 mL of diglyme (diethylene glycol dimethyl ether) were mixed. The mixture was stirred for 4 hours at a temperature of 60° C. to produce a diester of 4,4′-oxydiphthalic acid dianhydride and 2-hydroxyethyl methacrylate. The reaction mixture was then cooled to −10° C., and 17.0 g of SOCl2 was added over 60 minutes while keeping the temperature at −10° C. After diluting with 50 mL of N-methylpyrrolidone, a solution in which 38.0 g of the hydroxyl group-containing diamine (a) shown below was dissolved in 100 mL of N-methylpyrrolidone was added dropwise to the reaction mixture over 30 minutes at −5° C. Then, the mixture was stirred for 2 hours, and 20 mL of ethyl alcohol was added. The polyimide precursor was then sedimented in 6 liters of water, and the water-polyimide precursor mixture was stirred for 15 minutes. The solid of the polyimide precursor was filtered and dissolved in 380 g of tetrahydrofuran. The resulting solution was placed in 6 liters of water to sediment the polyimide precursor, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polyimide precursor was again filtered and dried at 45° C. for 3 days under reduced pressure. The polyimide precursor had a weight-average molecular weight of 31,000, a number-average molecular weight of 10,200, and an amine value of 0.0052 mmol/g.
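As a rough sanity check on the diester step above, the charge ratio of dianhydride to 2-hydroxyethyl methacrylate is close to 1:2 on a molar basis, consistent with two hydroxyethyl esters forming per dianhydride. The formula weights below (310.2 g/mol for 4,4′-oxydiphthalic acid dianhydride, 130.1 g/mol for 2-hydroxyethyl methacrylate) are standard values introduced here for illustration, not values taken from this description.

```python
# Molar-ratio sketch for the diesterification in Synthesis Example 1.
# Charged masses are from the text; molecular weights are standard values (assumption).
MW_ODPA = 310.2   # 4,4'-oxydiphthalic acid dianhydride, g/mol
MW_HEMA = 130.1   # 2-hydroxyethyl methacrylate, g/mol

mol_odpa = 21.2 / MW_ODPA   # ~0.068 mol of dianhydride
mol_hema = 18.0 / MW_HEMA   # ~0.138 mol of 2-hydroxyethyl methacrylate

ratio = mol_hema / mol_odpa  # ~2.0, i.e. roughly two ester arms per dianhydride
```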
Synthesis Example 2

Synthesis of polyimide precursor A-2 from pyromellitic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diamino-2,2′-bis(trifluoromethyl)biphenyl

29.8 g of pyromellitic acid dianhydride, 36.4 g of 2-hydroxyethyl methacrylate, 22.07 g of pyridine, and 100 mL of tetrahydrofuran were mixed and stirred at a temperature of 60° C. for 4 hours. The reaction mixture was then cooled to −10° C., a solution in which 34.35 g of dicyclohexylcarbodiimide was dissolved in 80 mL of γ-butyrolactone was added dropwise to the reaction mixture over 60 minutes at −10° C., and the mixture was stirred for 30 minutes. Subsequently, a solution in which 40.2 g of 4,4′-diamino-2,2′-bis(trifluoromethyl)biphenyl was dissolved in 200 mL of γ-butyrolactone was added dropwise to the reaction mixture over 30 minutes at −5° C. The mixture was stirred for 1 hour, and then 20 mL of ethyl alcohol and 200 mL of γ-butyrolactone were added. The sediment formed in the reaction mixture was removed by filtration to obtain a reaction solution. 14 L of water was added to the resulting reaction solution to sediment a polyimide precursor, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polyimide precursor was filtered and dried at 45° C. under reduced pressure for 2 days. The polyimide precursor had a weight-average molecular weight of 24,900, a number-average molecular weight of 8,300, and an amine value of 0.0071 mmol/g.

Synthesis Example 3

Synthesis of polyimide precursor A-3 from 4,4′-oxydiphthalic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diaminodiphenyl ether

42.4 g of 4,4′-oxydiphthalic acid dianhydride, 36.4 g of 2-hydroxyethyl methacrylate, 22.07 g of pyridine, and 100 mL of tetrahydrofuran were mixed and stirred at a temperature of 60° C. for 4 hours.
The reaction mixture was then cooled to −10° C., a solution in which 34.35 g of diisopropylcarbodiimide was dissolved in 80 mL of γ-butyrolactone was added dropwise to the reaction mixture over 60 minutes at −10° C., and the mixture was stirred for 30 minutes. Subsequently, a solution in which 25.1 g of 4,4′-diaminodiphenyl ether was dissolved in 200 mL of γ-butyrolactone was added dropwise to the reaction mixture over 30 minutes at −5° C., the mixture was stirred for 1 hour, and then 20 mL of ethyl alcohol and 200 mL of γ-butyrolactone were added. The sediment formed in the reaction mixture was removed by filtration to obtain a reaction solution. 14 L of water was added to the resulting reaction solution to sediment a polyimide precursor, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polyimide precursor was filtered and dried at 45° C. under reduced pressure for 2 days. The polyimide precursor had a weight-average molecular weight of 26,400, a number-average molecular weight of 8,800, and an amine value of 0.0064 mmol/g.

Synthesis Example 4

Synthesis of polyimide precursor A-4 from 4,4′-oxydiphthalic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diamino-2,2′-bis(trifluoromethyl)biphenyl

21.2 g of 4,4′-oxydiphthalic acid dianhydride, 18.0 g of 2-hydroxyethyl methacrylate, 23.9 g of pyridine, and 250 mL of diglyme were mixed and stirred at a temperature of 60° C. for 4 hours. The reaction mixture was then cooled to −10° C., and 17.0 g of SOCl2 was added over 60 minutes while keeping the temperature at −10° C. After diluting with 50 mL of N-methylpyrrolidone, a solution in which 20.1 g of 4,4′-diamino-2,2′-bis(trifluoromethyl)biphenyl was dissolved in 100 mL of N-methylpyrrolidone was added dropwise to the reaction mixture over 30 minutes at −5° C., the mixture was stirred for 1 hour, and then 20 mL of ethyl alcohol was added.
6 L of water was added to the obtained reaction solution to sediment the polyimide precursor, and the solid was filtered and dissolved in 380 g of tetrahydrofuran. 6 L of water was added to the resulting solution to sediment the polyimide precursor, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polyimide precursor was again filtered and dried at 45° C. for 2 days under reduced pressure. The polyimide precursor had a weight-average molecular weight of 25,400, a number-average molecular weight of 8,600, and an amine value of 0.0019 mmol/g.

Synthesis Example 5

Synthesis of polybenzoxazole precursor composition A-5 from 2,2′-bis(3-amino-4-hydroxyphenyl)hexafluoropropane and 4,4′-oxydibenzoyl chloride

28.0 g of 2,2′-bis(3-amino-4-hydroxyphenyl)hexafluoropropane was stirred and dissolved in 200 mL of N-methylpyrrolidone. Subsequently, 25.0 g of 4,4′-oxydibenzoyl chloride was added dropwise over 30 minutes while maintaining the temperature at 0° C., and stirring was continued for 60 minutes. 6 L of water was added to the resulting reaction solution to sediment the polybenzoxazole precursor, and the water-polybenzoxazole precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polybenzoxazole precursor was filtered and dried at 45° C. for 2 days under reduced pressure. The obtained powdery polybenzoxazole precursor had a weight-average molecular weight of 28,500, a number-average molecular weight of 9,800, and an amine value of 0.0252 mmol/g.

Synthesis Example 6

Synthesis of polyimide precursor A-6 from 4,4′-oxydiphthalic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diaminodiphenyl ether

21.2 g of 4,4′-oxydiphthalic acid dianhydride, 18.0 g of 2-hydroxyethyl methacrylate, 23.9 g of pyridine, and 250 mL of diglyme were mixed, stirred at a temperature of 60° C.
for 4 hours to produce a diester of 4,4′-oxydiphthalic acid dianhydride and 2-hydroxyethyl methacrylate. The reaction mixture was then cooled to −10° C., and 16.5 g of SOCl2 was added over 60 minutes while keeping the temperature at −10° C. After diluting with 50 mL of N-methylpyrrolidone, a solution in which 12.6 g of 4,4′-diaminodiphenyl ether was dissolved in 100 mL of N-methylpyrrolidone was added dropwise to the reaction mixture over 30 minutes at −5° C. The mixture was stirred for 2 hours, and then 20 mL of ethyl alcohol was added. The polyimide precursor was then sedimented in 6 liters of water, and the water-polyimide precursor mixture was stirred for 15 minutes. The solid of the polyimide precursor was filtered and dissolved in 380 g of tetrahydrofuran. The resulting solution was placed in 6 liters of water to sediment the polyimide precursor, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polyimide precursor was again filtered and dried at 45° C. for 3 days under reduced pressure. The polyimide precursor had a weight-average molecular weight of 25,200, a number-average molecular weight of 8,000, and an amine value of 0.0050 mmol/g.

Synthesis Example 7

Synthesis of polyimide precursor A-7 from 3,3′,4,4′-biphenyltetracarboxylic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diaminodiphenyl ether

20.1 g of 3,3′,4,4′-biphenyltetracarboxylic acid dianhydride, 18.0 g of 2-hydroxyethyl methacrylate, 23.9 g of pyridine, and 250 mL of diglyme were mixed. The mixture was stirred at a temperature of 60° C. for 4 hours to produce a diester of 3,3′,4,4′-biphenyltetracarboxylic acid dianhydride and 2-hydroxyethyl methacrylate. The reaction mixture was then cooled to −10° C., and 16.5 g of SOCl2 was added over 60 minutes while keeping the temperature at −10° C.
A solution in which 12.6 g of 4,4′-diaminodiphenyl ether was dissolved in 50 mL of N-methylpyrrolidone was added dropwise to the reaction mixture over 30 minutes at −5° C., the mixture was stirred for 2 hours, and then 20 mL of ethyl alcohol was added. The polyimide precursor was then sedimented in 6 liters of water, and the water-polyimide precursor mixture was stirred for 15 minutes. The solid of the polyimide precursor was filtered and dissolved in 380 g of tetrahydrofuran. The resulting solution was placed in 6 liters of water to sediment the polyimide precursor, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polyimide precursor was again filtered and dried at 45° C. for 3 days under reduced pressure. The polyimide precursor had a weight-average molecular weight of 22,100, a number-average molecular weight of 7,800, and an amine value of 0.0219 mmol/g.

Synthesis Example 8

Synthesis of polyimide precursor A-8 from 4,4′-oxydiphthalic acid dianhydride, 2-hydroxyethyl methacrylate, and 1,4-diaminobenzene (paraphenylenediamine)

42.4 g of 4,4′-oxydiphthalic acid dianhydride, 36.4 g of 2-hydroxyethyl methacrylate, 22.07 g of pyridine, and 100 mL of tetrahydrofuran were mixed and stirred at a temperature of 60° C. for 4 hours. The reaction mixture was then cooled to −10° C., a solution in which 34.35 g of diisopropylcarbodiimide was dissolved in 80 mL of γ-butyrolactone was added dropwise to the reaction mixture over 60 minutes at −10° C., and the mixture was stirred for 30 minutes. Subsequently, a solution in which 13.6 g of 1,4-diaminobenzene was dissolved in 400 mL of γ-butyrolactone was added dropwise to the reaction mixture over 30 minutes at −5° C., the mixture was stirred for 1 hour, and then 20 mL of ethyl alcohol and 200 mL of γ-butyrolactone were added. The sediment formed in the reaction mixture was removed by filtration to obtain a reaction solution.
14 L of water was added to the resulting reaction solution to sediment the polyimide precursor, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polyimide precursor was filtered and dried at 45° C. under reduced pressure for 2 days. The polyimide precursor had a weight-average molecular weight of 21,400, a number-average molecular weight of 7,800, and an amine value of 0.0023 mmol/g.

Synthesis Example 9

Synthesis of polyimide precursor A-9 from 4,4′-oxydiphthalic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diaminodiphenyl ether

21.2 g of 4,4′-oxydiphthalic acid dianhydride, 18.0 g of 2-hydroxyethyl methacrylate, 23.9 g of pyridine, and 250 mL of diglyme were mixed and stirred at a temperature of 60° C. for 4 hours to produce a diester of 4,4′-oxydiphthalic acid dianhydride and 2-hydroxyethyl methacrylate. The reaction mixture was then cooled to −10° C., and 16.5 g of SOCl2 was added over 60 minutes while keeping the temperature at −10° C. After diluting with 50 mL of N-methylpyrrolidone, a solution in which 12.6 g of 4,4′-diaminodiphenyl ether was dissolved in 100 mL of N-methylpyrrolidone was added dropwise to the reaction mixture over 30 minutes at −5° C. The mixture was stirred for 2 hours, and then 0.2 g of 2-(dimethylamino)ethanol and 20 mL of ethyl alcohol were added. The polyimide precursor was then sedimented in 6 liters of water, and the water-polyimide precursor mixture was stirred for 15 minutes. The solid of the polyimide precursor was filtered and dissolved in 380 g of tetrahydrofuran. The resulting solution was placed in 6 liters of water to sediment the polyimide precursor, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polyimide precursor was again filtered and dried at 45° C. for 3 days under reduced pressure.
The polyimide precursor had a weight-average molecular weight of 21,200, a number-average molecular weight of 7,600, and an amine value of 0.0118 mmol/g.

Synthesis Example 10

Synthesis of polyimide precursor A-10 from pyromellitic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diamino-2,2′-bis(trifluoromethyl)biphenyl

29.8 g of pyromellitic acid dianhydride, 36.4 g of 2-hydroxyethyl methacrylate, 22.07 g of pyridine, and 100 mL of tetrahydrofuran were mixed and stirred at a temperature of 60° C. for 4 hours. The reaction mixture was then cooled to −10° C., a solution in which 34.35 g of dicyclohexylcarbodiimide was dissolved in 80 mL of γ-butyrolactone was added dropwise to the reaction mixture over 60 minutes at −10° C., and the mixture was stirred for 30 minutes. Subsequently, a solution in which 40.2 g of 4,4′-diamino-2,2′-bis(trifluoromethyl)biphenyl was dissolved in 200 mL of γ-butyrolactone was added dropwise to the reaction mixture over 30 minutes at 0° C. The mixture was stirred for 1 hour, and then 20 mL of ethyl alcohol and 200 mL of γ-butyrolactone were added. The sediment formed in the reaction mixture was removed by filtration to obtain a reaction solution. 14 L of water was added to the obtained reaction solution to sediment the polyimide precursor, which was filtered and dried at 45° C. for 2 days under reduced pressure. The polyimide precursor had a weight-average molecular weight of 26,900, a number-average molecular weight of 8,500, and an amine value of 0.0505 mmol/g.

Synthesis Example 11

Synthesis of polyimide precursor A-11 from 4,4′-oxydiphthalic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diaminodiphenyl ether

21.2 g of 4,4′-oxydiphthalic acid dianhydride, 18.0 g of 2-hydroxyethyl methacrylate, 23.9 g of pyridine, and 250 mL of diglyme were mixed and stirred at a temperature of 60° C. for 4 hours to produce a diester of 4,4′-oxydiphthalic acid dianhydride and 2-hydroxyethyl methacrylate.
The reaction mixture was then cooled to −10° C., and 16.5 g of SOCl2 was added over 60 minutes while keeping the temperature at −10° C. After diluting with 50 mL of N-methylpyrrolidone, a solution in which 12.6 g of 4,4′-diaminodiphenyl ether was dissolved in 100 mL of N-methylpyrrolidone was added dropwise to the reaction mixture over 120 minutes at −10° C. The mixture was stirred for 2 hours, and then 20 mL of ethyl alcohol was added. The polyimide precursor was then sedimented in 6 liters of water, and the water-polyimide precursor mixture was stirred for 15 minutes. The solid of the polyimide precursor was filtered and dissolved in 380 g of tetrahydrofuran. The resulting solution was placed in 6 liters of water to sediment the polyimide precursor, and the water-polyimide precursor mixture was stirred for 15 minutes. The solid of the polyimide precursor was again filtered and dried at 45° C. for 3 days under reduced pressure. The polyimide precursor had a weight-average molecular weight of 25,200, a number-average molecular weight of 8,000, and an amine value of 0.0005 mmol/g.

Synthesis Example 12

Synthesis of polyimide precursor A-12 from 4,4′-oxydiphthalic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diaminodiphenyl ether

21.2 g of 4,4′-oxydiphthalic acid dianhydride, 18.0 g of 2-hydroxyethyl methacrylate, 23.9 g of pyridine, and 250 mL of diglyme were mixed and stirred at a temperature of 60° C. for 4 hours to produce a diester of 4,4′-oxydiphthalic acid dianhydride and 2-hydroxyethyl methacrylate. The reaction mixture was then cooled to −10° C., and 16.5 g of SOCl2 was added over 60 minutes while keeping the temperature at −10° C. After diluting with 50 mL of N-methylpyrrolidone, a solution in which 12.6 g of 4,4′-diaminodiphenyl ether was dissolved in 100 mL of N-methylpyrrolidone was added dropwise to the reaction mixture over 30 minutes at −5° C. The mixture was stirred for 2 hours, and then 20 mL of ethyl alcohol was added.
The polyimide precursor was then sedimented in 6 liters of water, filtered, and dried at 45° C. for 2 days under reduced pressure. The polyimide precursor had a weight-average molecular weight of 22,300, a number-average molecular weight of 7,500, and an amine value of 0.0421 mmol/g.

Synthesis Example 13

Synthesis of polyimide precursor A-13 from 4,4′-oxydiphthalic acid dianhydride, 2-hydroxyethyl methacrylate, and 4,4′-diaminodiphenyl ether

21.2 g of 4,4′-oxydiphthalic acid dianhydride, 18.0 g of 2-hydroxyethyl methacrylate, 23.9 g of pyridine, and 250 mL of diglyme were mixed and stirred at a temperature of 60° C. for 4 hours to produce a diester of 4,4′-oxydiphthalic acid dianhydride and 2-hydroxyethyl methacrylate. The reaction mixture was then cooled to −10° C., and 16.5 g of SOCl2 was added over 60 minutes while keeping the temperature at −10° C. After diluting with 50 mL of N-methylpyrrolidone, a solution in which 12.6 g of 4,4′-diaminodiphenyl ether was dissolved in 100 mL of N-methylpyrrolidone was added dropwise to the reaction mixture over 30 minutes at −5° C. The mixture was stirred for 2 hours, and then 0.2 g of N,N-dimethylethylenediamine and 20 mL of ethyl alcohol were added. The polyimide precursor was then sedimented in 6 liters of water, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 60 minutes. The solid of the polyimide precursor was filtered and dissolved in 380 g of tetrahydrofuran. The resulting solution was placed in 6 liters of water to sediment the polyimide precursor, and the water-polyimide precursor mixture was vigorously stirred at a speed of 500 rpm for 15 minutes. The solid of the polyimide precursor was again filtered and dried at 45° C. for 3 days under reduced pressure. The polyimide precursor had a weight-average molecular weight of 23,200, a number-average molecular weight of 8,500, and an amine value of 0.0082 mmol/g.
Measurement of Weight-Average Molecular Weight and Number-Average Molecular Weight

The weight-average molecular weight (Mw) and the number-average molecular weight (Mn) of each heterocyclic ring-containing polymer precursor are polystyrene-equivalent values determined by gel permeation chromatography (GPC), and were measured by the following method. HLC-8220 (manufactured by Tosoh Co., Ltd.) was used as the measuring device, and a guard column HZ-L, TSKgel Super HZM-M, TSKgel Super HZ4000, TSKgel Super HZ3000, and TSKgel Super HZ2000 (manufactured by Tosoh Co., Ltd.) were used as the columns. The measurement was performed at 40° C. at a flow rate of 0.35 mL/min using tetrahydrofuran (THF) as the eluent. Detection was performed using an ultraviolet (UV) detector at 254 nm. As the measurement sample, a sample in which the heterocyclic ring-containing polymer precursor was diluted to 0.1% by mass with THF was used.

Method of Measuring Amine Value of Heterocyclic Ring-Containing Polymer Precursor

The amine value of the heterocyclic ring-containing polymer precursor was measured according to the following method. 0.3 g of the heterocyclic ring-containing polymer precursor (solid) obtained in the above synthesis example was dissolved in 50 mL of diglyme and 10 mL of acetic acid, and titration was performed with a 0.001 N perchloric acid solution to calculate the amine value. The amount of amine (mmol/g) per 1 g of the heterocyclic ring-containing polymer precursor is shown in Table 1 as the amine value.
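The titration above reduces to one conversion: a 0.001 N titrant delivers 0.001 mmol of acid per mL, so the amine value in mmol/g is 0.001 × V(mL) divided by the 0.3 g sample mass. A minimal sketch of that conversion; the endpoint volume in the usage example is a hypothetical value chosen to reproduce the amine value reported for precursor A-1, not a measured datum from this description.

```python
# Amine value (mmol of amine per g of precursor) from perchloric acid titration.
# A 0.001 N titrant delivers 0.001 mmol of acid per mL; the sample mass is 0.3 g
# as in the text. The endpoint volume below is hypothetical.

def amine_value_mmol_per_g(endpoint_ml: float,
                           normality: float = 0.001,
                           sample_g: float = 0.3) -> float:
    return normality * endpoint_ml / sample_g

# A 1.56 mL endpoint (hypothetical) corresponds to 0.0052 mmol/g,
# the amine value reported for precursor A-1.
av = amine_value_mmol_per_g(1.56)
```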
In Table 1, the term “without component (A)” refers to the proportion of the heterocyclic ring-containing polymer precursor not containing, at a terminal, a group represented by the above formula (A) in the total amount of the heterocyclic ring-containing polymer precursor, and “with component (A)” refers to the proportion of the heterocyclic ring-containing polymer precursor containing, at a terminal, a group represented by the above formula (A) in the total amount of the heterocyclic ring-containing polymer precursor.

TABLE 1

Heterocyclic ring-containing    Amine     Without          With
polymer precursor               value     component (A)    component (A)
A-1                             0.0052    97.8%            2.2%
A-2                             0.0071    97.2%            2.8%
A-3                             0.0064    97.4%            2.6%
A-4                             0.0019    99.3%            0.8%
A-5                             0.0252    90.0%            10.0%
A-6                             0.0050    98.0%            2.0%
A-7                             0.0219    91.3%            8.7%
A-8                             0.0023    99.1%            0.9%
A-9                             0.0118    95.3%            4.7%
A-10                            0.0505    79.9%            20.1%
A-11                            0.0005    99.8%            0.2%
A-12                            0.0421    83.2%            16.8%
A-13                            0.0082    96.7%            3.3%

Preparation of Photosensitive Resin Composition

Each component described in the following Table 2 or Table 3 was mixed, and the photosensitive resin composition was prepared as a uniform solution. The obtained photosensitive resin composition was pressure-filtered at a pressure of 0.3 MPa through a filter having a pore width of 0.8 μm.

Method of Measuring Amine Value of Solid Content of Photosensitive Resin Composition

The amine value of the solid content in the photosensitive resin composition was measured according to the following method. 0.3 g of the photosensitive resin composition was dissolved in 50 mL of diglyme and 10 mL of acetic acid, and titration was performed with a 0.001 N perchloric acid solution to calculate the amine value. The amount of amine (mmol/g) per 1 g of the solid content of the photosensitive resin composition is shown as the amine value.

Cyclization Time

The photosensitive resin composition obtained above was applied on a silicon wafer by spin coating (1,200 rpm, 30 seconds). The silicon wafer to which the photosensitive resin composition was applied was dried on a hot plate at 100° C.
for 5 minutes to form a uniform film having a thickness of 10 μm on the silicon wafer. The film on the silicon wafer was then exposed using an aligner (Karl-Suss MA150). The exposure was performed with a high-pressure mercury lamp at 500 mJ/cm2 in terms of exposure energy at a wavelength of 365 nm. After exposure, the photosensitive resin composition was scraped off, heated to 180° C. at a rate of 5° C./min in a nitrogen atmosphere, and subjected to thermogravimetric analysis (TGA measurement) while held at 180° C., and the cyclization time was evaluated. Mass is lost as the heterocyclic ring-containing polymer precursor undergoes the cyclization reaction, so the time until mass loss ceased was evaluated based on the following criteria. The shorter the time, the faster the cyclization rate, which is the preferable result.
A: More than 0 minutes and 30 minutes or less
B: More than 30 minutes and 60 minutes or less
C: More than 60 minutes and 120 minutes or less
D: More than 120 minutes

Storage Stability

10 g of the photosensitive resin composition was sealed in a container (container material: light-shielding glass; volume: 100 mL) and allowed to stand in an environment of 25° C. and a relative humidity of 65%. Stability was evaluated by the time until a solid precipitated from the photosensitive resin composition; the longer the time to precipitation, the higher the stability of the photosensitive resin composition, which is the more preferable result. To check for precipitation of a solid, three samples stored in containers were prepared for each photosensitive resin composition, and one container was opened at each of the timings exceeding 30, 60, and 120 days.
The entire content of the photosensitive resin composition was pressure-filtered through a mesh having a pore size of 0.8 μm, the presence or absence of foreign material on the mesh was visually observed, and the presence or absence of precipitates was graded as follows.
A: Solid precipitation was not observed even after 120 days.
B: Solids precipitated after more than 60 days but within 120 days.
C: Solids precipitated after more than 30 days but within 60 days.
D: Solids precipitated within 30 days.

TABLE 2
(A) heterocyclic ring-containing polymer precursor; (B) radically polymerizable compound; (C) photo-radical polymerization initiator; (D) thermal-base generator; (E) polymerization inhibitor. Amounts in parts by mass.

            (A)           (B)               (C)               (D)              (E)
Example 1   A-1  32.9     B-1  5.0          C-1  1.2          —                E-1  0.08
Example 2   A-1  33.0     B-2  5.0          C-1  1.2          —                —
Example 3   A-1  32.4     B-1/B-3  3.0+2.0  C-5  1.2          D-1  0.5         E-1  0.08
Example 4   A-2  31.6     B-1  7.0          C-1  1.2          —                E-2  0.08
Example 5   A-2  32.5     B-2  5.0          C-4  1.2          D-3  0.5         E-1/E-2  0.05+0.03
Example 6   A-2  33.0     B-3  5.0          C-1/C-5  0.6+0.6  —                —
Example 7   A-3  32.4     B-1  5.0          C-3  1.2          D-3  0.5         E-1  0.08
Example 8   A-3  32.9     B-2  5.0          C-1  1.2          —                E-1  0.08
Example 9   A-3  33.2     B-4  5.0          C-1  1.2          D-1  0.5         —
Example 10  A-4  32.9     B-1  5.0          C-2  1.2          —                E-1  0.08
Example 11  A-4  33.2     B-2  5.0          C-1  1.2          D-2  0.5         E-2  0.08
Example 12  A-4  30.9     B-3  7.0          C-1  1.2          —                E-1  0.08
Example 13  A-5  22.9     B-1  15.0         C-3  1.2          —                E-1  0.08
Example 14  A-5  17.4     B-2  20.0         C-1  1.2          D-2  0.5         E-1  0.08
Example 15  A-5  18.6     B-3  20.0         C-1  1.2          —                E-1  0.08
Example 16  A-6  32.4     B-2  5.0          C-1  1.2          D-1  0.5         E-1  0.08

(F) migration suppressing agent; (G) metal adhesiveness improving agent; (H) solvent; amine value of composition in mmol/g.

            (F)                (G)        (H)              Amine value  Cyclization  Storage
                                                           (mmol/g)     time         stability
Example 1   F-1  0.12          G-1  0.7   H-1/H-2  48+12   0.0032       B            A
Example 2   F-2  0.12          G-2  0.7   H-3  60          0.0042       B            A
Example 3   F-1  0.12          G-1  0.7   H-1/H-2  48+12   0.0051       A            B
Example 4   F-2  0.12          —          H-1/H-2  48+12   0.0052       B            A
Example 5   —                  G-3  0.7   H-1/H-2  48+12   0.0068       A            A
Example 6   F-2  0.12          G-2  0.7   H-1/H-4  48+12   0.0055       B            A
Example 7   F-2  0.12          G-2  0.7   H-1/H-2  48+12   0.0042       A            A
Example 8   F-1  0.12          G-2  0.7   H-1/H-2  48+12   0.0044       B            A
Example 9   F-2  0.12          —          H-4/H-2  48+12   0.0049       A            B
Example 10  F-1  0.12          G-1  0.7   H-1/H-2  48+12   0.0006       C            A
Example 11  —                  —          H-3  60          0.0008       C            B
Example 12  F-1/F-2  0.06+0.06 G-1  0.7   H-1/H-2  48+12   0.0007       C            A
Example 13  F-2  0.12          G-2  0.7   H-1/H-2  48+12   0.0121       B            C
Example 14  F-2  0.12          G-2  0.7   H-1/H-2  48+12   0.0148       A            C
Example 15  F-1  0.12          —          H-3/H-2  48+12   0.0189       B            C
Example 16  F-2  0.12          G-2  0.7   H-1/H-2  48+12   0.0031       A            A

TABLE 3
(A) heterocyclic ring-containing polymer precursor; (B) radically polymerizable compound; (C) photo-radical polymerization initiator; (D) thermal-base generator; (E) polymerization inhibitor. Amounts in parts by mass.

                       (A)                (B)               (C)               (D)                (E)
Example 17             A-6  32.5          B-2  5.0          C-1  1.2          D-3  0.5           —
Example 18             A-6  32.9          B-3  5.0          C-2  1.2          —                  E-1  0.08
Example 19             A-7  32.4          B-2  5.0          C-1  1.2          D-3  0.5           E-1  0.08
Example 20             A-7  32.9          B-3  5.0          C-1/C-5  1.1+0.1  —                  E-1  0.08
Example 21             A-8  32.9          B-2  5.0          C-1  1.2          —                  E-1  0.08
Example 22             A-8  32.4          B-3  5.0          C-4  1.2          D-1/D-3  0.3+0.2   E-1  0.08
Example 23             A-9  32.4          B-2  5.0          C-1  1.2          D-1  0.5           E-1  0.08
Example 24             A-9  32.4          B-3  5.0          C-5  1.2          D-3  0.5           E-1  0.08
Example 25             A-2/A-10  26.3+6.6 B-1  5.0          C-1  1.2          —                  E-1  0.08
Example 26             A-11/A-12 23.0+9.9 B-2  5.0          C-1  1.2          —                  E-1  0.08
Example 27             A-13  32.4         B-3  5.0          C-5  1.2          D-3  0.5           E-1  0.08
Example 28             A-13  32.4         B-1/B-3  2.0+3.0  C-5  1.2          D-3  0.5           E-1  0.08
Comparative example 1  A-10  32.9         B-2  5.0          C-1  1.2          —                  E-1  0.08
Comparative example 2  A-11  32.9         B-2  5.0          C-1  1.2          —                  E-1  0.08
Comparative example 3  A-12  32.4         B-2  5.0          C-1  1.2          D-1  0.5           E-1  0.08

(F) migration suppressing agent; (G) metal adhesiveness improving agent; (H) solvent; amine value of composition in mmol/g.

                       (F)        (G)               (H)              Amine value  Cyclization  Storage
                                                                     (mmol/g)     time         stability
Example 17             F-2  0.12  G-1  0.7          H-1/H-2  48+12   0.0051       A            A
Example 18             F-1  0.12  G-1  0.7          H-3  60          0.0032       B            A
Example 19             F-1  0.12  G-3  0.7          H-1/H-2  48+12   0.0182       A            C
Example 20             F-1  0.12  G-1  0.7          H-1/H-2  48+12   0.0178       B            C
Example 21             F-1  0.12  G-2  0.7          H-1/H-2  48+12   0.0006       C            A
Example 22             F-2  0.12  G-3  0.7          H-1/H-2  48+12   0.0008       C            B
Example 23             F-2  0.12  G-2  0.7          H-1/H-2  48+12   0.0082       A            A
Example 24             F-2  0.12  G-3  0.7          H-1/H-2  48+12   0.0092       A            A
Example 25             F-1  0.12  G-2  0.7          H-1/H-2  48+12   0.0089       A            A
Example 26             F-2  0.12  G-3  0.7          H-1/H-2  48+12   0.0068       A            A
Example 27             F-2  0.12  G-3  0.7          H-1/H-2  48+12   0.0052       A            A
Example 28             F-2  0.12  G-1/G-3  0.2+0.5  H-1/H-2  48+12   0.0052       A            A
Comparative example 1  F-2  0.12  G-1  0.7          H-3  60          0.0285       B            D
Comparative example 2  F-1  0.12  G-2  0.7          H-1/H-2  48+12   0.0001       D            A
Comparative example 3  F-1  0.12  G-2  0.7          H-1/H-2  48+12   0.0255       A            D

(A) Heterocyclic Ring-Containing Polymer Precursor
A-1 to A-12: Heterocyclic ring-containing polymer precursors produced in Synthesis Examples 1 to 12

(B) Radically Polymerizable Compound
B-1: NK Ester M-40G (manufactured by Shin-Nakamura Chemical Co., Ltd.)
B-2: SR-209 (manufactured by Sartomer Co., Ltd.)
B-3: NK Ester A-9300 (manufactured by Shin-Nakamura Chemical Co., Ltd.)
B-4: A-TMMT (manufactured by Shin-Nakamura Chemical Co., Ltd.)

(C) Photo-Radical Polymerization Initiator
C-1: IRGACURE OXE 01 (manufactured by BASF)
C-2: IRGACURE OXE 02 (manufactured by BASF)
C-3: IRGACURE OXE 04 (manufactured by BASF)
C-4: IRGACURE-784 (manufactured by BASF)
C-5: NCI-831 (manufactured by ADEKA Corporation)

(D) Thermal-Base Generator
D-1: The following compound
D-2: The following compound
D-3: The following compound

(E) Polymerization Inhibitor
E-1: 1,4-Benzoquinone
E-2: 1,4-Methoxyphenol

(F) Migration Suppressing Agent
F-1: 1,2,4-Triazole
F-2: 1H-Tetrazole

(G) Metal Adhesiveness Improving Agent
G-1: The following compound
G-2: The following compound
G-3: The following compound

(H) Solvent
H-1: γ-Butyrolactone
H-2: Dimethyl sulfoxide
H-3: N-Methyl-2-pyrrolidone
H-4: Ethyl lactate

Additionally, regarding the solvent in Table 2 or Table 3, in a case where the type column reads "H-1/H-2" and the parts-by-mass column reads "48+12", this means that 48 parts by mass of H-1 and 12 parts by mass of H-2 are included.

As is clear from the above results, the photosensitive resin compositions according to the embodiment of the present invention had a fast cyclization reaction rate and excellent storage stability over time.

Example 100

The photosensitive resin composition of Example 1 was pressure-filtered through a filter having a pore width of 0.8 μm, and the photosensitive resin composition was then coated onto a silicon wafer by a spin coating method.
The silicon wafer coated with the photosensitive resin composition layer was dried at 100° C. for 5 minutes on a hot plate to form a uniform photosensitive resin composition layer with a thickness of 15 μm on the silicon wafer. The photosensitive resin composition layer on the silicon wafer was exposed with an exposure energy of 500 mJ/cm2 using a stepper (Nikon NSR 2005 i9C), and the exposed photosensitive resin composition layer (resin layer) was developed with cyclopentanone for 60 seconds to form a hole having a diameter of 10 μm. The temperature was then raised at a temperature elevation rate of 10° C./min under a nitrogen atmosphere, and after reaching 250° C., heating was continued for 3 hours. After cooling to room temperature, a copper thin layer (metal layer) having a thickness of 2 μm was formed on part of the surface of the photosensitive resin composition layer by vapor deposition so as to cover the above-mentioned hole portion. Furthermore, the procedure from filtering the photosensitive resin composition through heating the patterned film for 3 hours was performed in the same manner as described above on the surface of the metal layer and the photosensitive resin composition layer, using the same type of photosensitive resin composition. This procedure was repeated to produce a laminate including a resin layer/a metal layer/a resin layer. This interlayer insulating film for a re-distribution layer had excellent insulating properties. In addition, this interlayer insulating film for a re-distribution layer was used to produce a semiconductor device. As a result, it was confirmed that the semiconductor device operated without problems.
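The cyclization-time and storage-stability letter grades reported in Tables 2 and 3 above follow mechanically from the stated thresholds; a minimal sketch (function names are ours):

```python
def cyclization_grade(minutes):
    """Map TGA cyclization time (min) to the A-D grade; shorter is better."""
    if minutes <= 30:
        return "A"
    if minutes <= 60:
        return "B"
    if minutes <= 120:
        return "C"
    return "D"

def storage_grade(days_to_precipitation):
    """Map days until solids precipitate (25 C, 65% RH) to the A-D grade."""
    if days_to_precipitation > 120:
        return "A"
    if days_to_precipitation > 60:
        return "B"
    if days_to_precipitation > 30:
        return "C"
    return "D"
```

Note that storage stability was checked only at the 30-, 60-, and 120-day openings, so the grade in practice is decided by which checkpoint first shows precipitate.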
11860539

DETAILED DESCRIPTION OF THE PRESENT INVENTION

In accordance with this invention, it is now possible to provide a photosensitive polymer composition of aqueous emulsion type in which the above-mentioned problems encountered in conventional techniques are addressed, and which can produce a photo-hardened product having good water resistance and abrasion resistance while having excellent photosensitivity and image-resolving power. Accordingly, the present invention provides a photosensitive polymer composition in the form of an aqueous emulsion comprising a non-hydrolyzed or non-saponified vinyl acetate polymer (polyvinyl acetate) added with a styrylpyridinium and/or styrylquinolinium group. The term "addition" means a chemical addition. The term "vinyl acetate polymer" herein means polyvinyl acetate, also noted as PVAc, and a copolymer of vinyl acetate with another monomer. In the case where the vinyl acetate polymer is a copolymer, examples of the monomer copolymerizable with vinyl acetate which can be used are ethylene, acrylates such as methyl acrylate and methyl methacrylate, acrylamides such as acrylamide, methacrylamide, N-methylolacrylamide and N,N-dimethylacrylamide, unsaturated carboxylic acids and salts thereof such as acrylic acid, methacrylic acid, crotonic acid, fumaric acid, itaconic acid, maleic acid and salts thereof, and cationic monomers such as dimethylaminoethyl methacrylates, vinylimidazole, vinylpyridine and vinylsuccinimide. To form a composition that can be developed with water and that gives a hardened product of excellent water resistance after photo-hardening, the polyvinyl acetate is required to be 0% hydrolyzed or 0% saponified. The use of 0% hydrolyzed polyvinyl acetate is the new art: the prior art instead uses PVAc hydrolyzed or saponified to 80-90%, i.e., polyvinyl alcohol (PVA), as the polymer chain to which a styrylpyridinium or styrylquinolinium group is grafted.
The styrylpyridinium or styrylquinolinium group-added polymer used in the present invention is prepared by adding a styrylpyridinium and/or styrylquinolinium group to the non-hydrolyzed or non-saponified vinyl acetate polymer (PVAc) by an acid-catalyzed reaction. Styrylpyridinium-grafted PVA (polyvinyl alcohol, i.e., PVAc saponified or hydrolyzed to 80-90%) as disclosed in the prior art, and its preparation, are known from, for example, U.S. Pat. Nos. 4,339,524, 4,564,580, and 4,272,620. The prior art states that the grafting ratio of the styrylpyridinium or styrylquinolinium group to PVA (polyvinyl alcohol) is 0.5-20 mole % based on a unit mole of PVA. In actual practice, however, when the grafting ratio to PVA (degree of polymerization between 1700 and 2400) is lower than 1 mole %, a water-soluble photo-dimerizable PVA polymer having the desired photo-crosslinking property cannot be obtained, because photo-hardening is inadequate due to an insufficient amount of grafted styryl groups. Also, when the styryl groups are added to PVA (degree of polymerization between 1700 and 2400) at over 1.8 mole %, the grafted PVA solution increases its viscosity greatly, to the point of gelation at room temperature. According to the present invention, when 0.05-0.20 mole % of the photodimerizable group, based on a unit mole of PVAc, is added to non-saponified PVAc, photo-crosslinking or photo-dimerization is more than adequate to create a very water-resistant hardened material.
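The mole % loadings quoted in the examples can be reproduced from the batch masses. A sketch, assuming the loading is counted against the emulsion's solids (50% for the PVAc used in the examples), a vinyl acetate repeat-unit mass of 86.09 g/mol, and roughly 335 g/mol for SBQ methyl sulfate (our estimate, not stated in the source):

```python
def styryl_mole_percent(styryl_g, emulsion_g, solids_frac=0.50,
                        styryl_mw=335.4, unit_mw=86.09):
    """Mole % of styryl salt per unit mole of PVAc repeat units.

    styryl_mw: approximate formula mass of SBQ methyl sulfate (assumed);
    unit_mw: vinyl acetate repeat unit; solids_frac: emulsion solids.
    """
    vinyl_acetate_units = emulsion_g * solids_frac / unit_mw
    return 100.0 * (styryl_g / styryl_mw) / vinyl_acetate_units

# Example 1 below charges 0.45 g of SBQ into 450 g of 50%-solids PVAc
# emulsion, reproducing the quoted ~0.051 mole %.
print(round(styryl_mole_percent(0.45, 450.0), 3))
```

The same arithmetic on Example 5's 2.25 g charge returns about 0.26 mole %, matching the 0.256 mole % quoted there.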
In accordance with the present invention, examples of the compound to be added to the non-hydrolyzed PVAc (polyvinyl acetate) emulsion include, but are not limited to: 1-methyl-2-(p-formylstyryl)pyridinium, 1-methyl-4-(p-formylstyryl)pyridinium, 1-ethyl-2-(p-formylstyryl)pyridinium, 1-ethyl-4-(p-formylstyryl)pyridinium, 1-allyl-4-(p-formylstyryl)pyridinium, 1-(2-hydroxyethyl)-2-(p-formylstyryl)pyridinium, 1-(2-hydroxyethyl)-4-(p-formylstyryl)pyridinium, 1-carbamoylmethyl-2-(p-formylstyryl)pyridinium, 1-carbamoylmethyl-4-(p-formylstyryl)pyridinium, 1-methyl-2-(m-formylstyryl)pyridinium, 1-benzyl-2-(p-formylstyryl)pyridinium, 1-benzyl-4-(p-formylstyryl)pyridinium, 1-methyl-4-(p-formylstyryl)-5-ethylpyridinium, 1-methyl-2-(p-formylstyryl)quinolinium, 1-ethyl-2-(p-formylstyryl)quinolinium and 1-ethyl-4-(p-formylstyryl)quinolinium. The invention can be further modified by formulating with other materials such as free-radical polymerization materials (one or more unsaturated monomers or oligomers that contain vinyl unsaturation, with initiators capable of forming free radicals when exposed to light), polyvinyl alcohol (PVAc hydrolyzed to 70-90%) grafted with photosensitive groups, non-light-sensitive components such as non-photosensitive polyvinyl alcohol and polyvinyl acetate emulsion, diazo resins, other compatible polymer systems, surfactants, biocides, and pigments. As can be clearly understood from the description of the prior art above, the striking discovery of this invention is that the styrylpyridinium and/or styrylquinolinium group is added to non-hydrolyzed PVAc, whereas all prior-art photosensitive compositions of this type are made with polyvinyl alcohol, made by hydrolyzing PVAc to 80-90%, grafted with a styrylpyridinium or styrylquinolinium compound.
The polyvinyl acetate used in this invention is 0% saponified or 0% hydrolyzed polyvinyl acetate, preferably a polyvinyl acetate homopolymer, to which the photodimerizable group of the styryl base is added in the presence of an acid catalyst. The reaction is usually carried out at a pH of around 2.0 using hydrochloric, sulfuric, phosphoric or an organic sulphonic acid, typically at about 50° C. for a period of 4-8 hours. However, the time and temperature can be varied a great deal if required, and the reaction can also be carried out at room temperature. When the reaction is complete, the reaction mixture is quenched to a pH of around 7 with an alkaline solution such as potassium hydroxide, sodium hydroxide, or ammonia water. The invention is very adaptable in that its photoreaction speed and UV light absorption properties can be changed by changing the mole % added to PVAc and by changing the ratio between the styrylpyridinium and styrylquinolinium groups, so that the invention can be used in various industrial applications with many types of exposure equipment, such as direct-to-plate with UV laser, UV-LED, and other light sources. Photosensitive polymer compositions of the present invention have a very high photoreactivity, allowing thick layers (100-700 μm) of the photosensitive polymer composition to be hardened throughout the entire thickness of the coating, even with very short exposure times to light of a suitable wavelength. They display a faster photo speed than diazo systems, polyvinyl alcohol grafted with the photodimerizable group of the styryl base, and the other systems described previously. The mechanism of photocross-linking in this patent is photodimerization, and hence none of the photosensitive polymer compositions characterized by the patent exhibits the susceptibility to oxygen, moisture and temperature of some other systems.
As previously stated, the most notable and significant part of this invention is that PVAc (polyvinyl acetate) is the base polymer to which the photodimerizable group of the styryl base is added, unlike the previously noted use of polyvinyl alcohol, which is PVAc hydrolyzed to 80-90%. It is also significant that the amount of the photodimerizable group of the styryl base can be as little as 0.05-0.5 mole % based on a unit mole of PVAc. Even with this very small amount of photosensitive group added to PVAc (vinyl acetate polymer), it produces a highly water-resistant material with strong adhesion to many types of substrate, such as polyester, stainless steel, nickel, and various other surfaces, at 15-30 mj/cm2 UV exposure, whereas polyvinyl alcohol grafted with the same styryl groups at 1-1.5 mole % based on a unit mole of PVA would require over 200 mj/cm2 of UV exposure to form a not-very-water-resistant hardened film with very weak adhesion to polyester, stainless steel and other types of materials. The reaction mechanism by which PVAc and the photodimerizable group of the styryl base produce such a photosensitive polymer composition, with great water resistance and extremely fast photoreaction speed from such a minuscule amount of reacted photosensitive groups, is not clearly understood. It is suspected that the photosensitive groups react with the PVAc emulsion in such a way that its water resistance increases upon exposure to UV light, making image development with water possible, and the resulting film is extremely water resistant.
The photosensitive polymer compositions described in the present invention are suitable for a very wide range of applications and can be used in photoreactive processes where a resist, stencil or relief image is required, for example as an etching photoresist for various etching applications, as photoresists for plating processes (preparation of printed circuit boards), as photolithographic compositions, and as stencils for screen printing, as noted previously. It should also be noted that there exists a synergistic relationship, in terms of photospeed and water resistance of the photo-hardened film, between the styrylpyridinium and styrylquinolinium photosensitive groups when they are added to PVAc at the same time. When PVAc reacted with only one photosensitive group is compared to PVAc emulsion reacted with two photosensitive groups, the latter photopolymer is faster in exposure speed by 5-10 mj/cm2, and more water resistant, than the one with only one photosensitive group.

In summary, the present invention is a significant improvement over the prior art technologies, including the polyvinyl alcohol based photosensitive polymer compositions with the photodimerizable group of the styryl base and diazo based systems, in the following aspects:
(1) The use of non-hydrolyzed PVAc (polyvinyl acetate) instead of PVA (polyvinyl alcohol), which is hydrolyzed or saponified PVAc, giving higher solids, faster photoreaction, and water resistance
(2) Very fast photoreaction speed, 4 or 5 times faster than PVA based photosensitive compositions, with a very low amount of photosensitive groups, 0.05-0.5 mole % based on a unit mole of PVAc, whereas PVA grafted with the same styryl group at 1.0 mole % based on a unit mole of PVA is not adequate to produce a water-resistant photo-hardened film
(3) High solid content at 50-59%, compared to 14-18% for PVA based photosensitive compositions
(4) Highly water resistant but water processable
(5) Very stable composition, chemically and physically, maintaining photosensitive functionality and viscosity without any sign of phase separation for over 12 months
(6) Good base photosensitive composition with which to formulate many types of photosensitive compositions, to develop various desired functionalities such as very fine image resolution and chemical and physical resistance, which may be required in many industrial applications
(7) Synergistic effect in the use of the styrylpyridinium and styrylquinolinium groups, promoting faster photoreaction and water resistance

EXAMPLES

The following specific examples, which contain the best mode known to the inventor, further illustrate the invention. All parts are by weight unless otherwise stated. These examples are merely illustrative of the invention and are not intended to limit its scope.

Polyvinyl Acetate (PVAc)

Vinysol 2501 (manufactured by Daido Chemical Corporation; 50% solids, viscosity = 3000 cps, pH = 5, particle size = 1.2 μm, degree of saponification 0%) is noted as PVAc in the following examples.

Polyvinyl Alcohol

GH-24 (manufactured by Nippon Synthetic Chemical Industry; degree of polymerization 2400, degree of saponification 88%) is noted as PVA.

N-Methyl-4-(p-formylstyryl)pyridinium methyl sulfate, made per the published procedures, is noted as SBQ in the following examples.

4-[2-(4-Formylphenyl)ethenyl]-1-methylquinolinium methyl sulfate, made per the published procedures, is noted as 4QP in the following examples.

PVAc added with 4QP is noted as PVAc-4QP.
PVAc added with SBQ is noted as PVAc-SBQ.
PVAc added with 4QP and SBQ is noted as PVAc-4QP/SBQ.

Example 1

0.45 g each of SBQ (0.051 mole % based on a unit mole of PVAc) and 4QP (0.050 mole % based on a unit mole of PVAc) was added to 450 g of PVAc under agitation at 1100 rpm (revolutions per minute). The mixture was mixed for 5 min to ensure complete dissolution of the SBQ and 4QP. The mixture was heated with agitation to 50° C.
When the mixture reached 50° C., 40% phosphoric acid was added to the mixture to adjust the pH to 2. The mixture was then mixed at 50° C. for 8 hours, heating was stopped, and the mixture was left for 12 hours under agitation without heating. The mixture was quenched with 10% ammonia water to a pH of 7 to complete the addition reaction.

Example 2

25 g of SBQ (0.26 mole %) was added to 5 kg of PVAc under agitation at 500 rpm. The mixture was mixed for 30 min to dissolve the SBQ. The emulsion was heated to 50° C. Then 40% phosphoric acid was added to adjust the pH to 2, the emulsion was kept at 50° C. for 8 hours with agitation, and the mixture was left for 12 hours under agitation without heating. The mixture was then quenched with 10% potassium hydroxide solution to a pH of 7 to complete the addition reaction.

Example 3

0.90 g of 4QP (0.10 mole %) was added to 450 g of PVAc under agitation at 1100 rpm. The mixture was mixed for 5 min to ensure complete dissolution of the 4QP. The mixture was heated with agitation to 50° C. When the mixture reached 50° C., 40% phosphoric acid was added to the mixture to adjust the pH to 2. The mixture was then mixed at 50° C. for 8 hours, heating was stopped, and the mixture was left for 12 hours under agitation without heating. The mixture was quenched with 10% ammonia water to a pH of 7 to complete the addition reaction.

Example 4

0.45 g each of SBQ (0.051 mole %) and 4QP (0.050 mole %) was added to 450 g of PVAc under agitation at 1100 rpm. The mixture was mixed for 5 min to ensure complete dissolution of the SBQ and 4QP. Then 40% phosphoric acid was added to the mixture at room temperature to adjust the pH to 2. Mixing continued for 1 hour, and the mixture was left for 3 days without agitation. The mixture was then quenched with 10% ammonia water to a pH of 7 at room temperature under agitation to complete the addition reaction.

Example 5

2.25 g (0.256 mole %) of SBQ was added to 450 g of PVAc under agitation at 1100 rpm.
The mixture was mixed for 5 min to ensure complete dissolution of the SBQ. Then 40% phosphoric acid was added to the mixture at room temperature to adjust the pH to 2. Mixing continued for 1 hour, and the mixture was left for 3 days without agitation. The mixture was then quenched with 10% potassium hydroxide to a pH of 7 under agitation to complete the addition reaction.

Example 6

0.45 g (0.05 mole %) of 4QP was added to 450 g of PVAc under agitation at 1100 rpm. The mixture was mixed for 5 min to ensure complete dissolution of the 4QP. Then 40% phosphoric acid was added to the mixture at room temperature to adjust the pH to 2. Mixing continued for 1 hour, and the mixture was left for 3 days without agitation. The mixture was then quenched with 10% ammonia water to a pH of 7 under agitation to complete the addition reaction.

Example 7

0.9 g (0.10 mole %) of 4QP was added to 450 g of PVAc under agitation at 1100 rpm. The mixture was mixed for 5 min to ensure complete dissolution of the 4QP. The mixture was heated with agitation to 50° C. When the mixture reached 50° C., 40% phosphoric acid was added to the mixture to adjust the pH to 2. The mixture was then mixed at 50° C. for 8 hours, heating was stopped, and the mixture was left for 12 hours under agitation without heating. The mixture was quenched with 10% ammonia water to a pH of 7 to complete the addition reaction.

Example 8

0.9 g (0.10 mole %) of SBQ was added to 450 g of PVAc under agitation at 1100 rpm. The mixture was mixed for 5 min to ensure complete dissolution of the SBQ. The mixture was heated with agitation to 50° C. When the mixture reached 50° C., 40% phosphoric acid was added to the mixture to adjust the pH to 2. The mixture was then mixed at 50° C. for 8 hours, heating was stopped, and the mixture was left for 12 hours under agitation without heating. The mixture was quenched with 10% ammonia water to a pH of 7 to complete the addition reaction.
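Example 2 is effectively a scale-up of Example 5: both charge SBQ at the same weight ratio, which is why both are quoted at about 0.26 mole %. The ratio check:

```python
# Example 5: 2.25 g SBQ into 450 g of PVAc emulsion.
# Example 2: 25 g SBQ into 5 kg (5000 g) of PVAc emulsion.
small_batch = 2.25 / 450.0
large_batch = 25.0 / 5000.0
assert small_batch == large_batch  # both 0.005 g SBQ per g of emulsion
```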
Example 9R (Reference)

A PVA-SBQ solution was prepared per the prior-art procedures by acid-catalyzed grafting of 1.2 g of SBQ (1.37 mole % based on a unit mole of PVA) onto PVA (100 g of a 13% solution).

All of the photopolymer materials from the examples were coated as-is, after completion of the reaction, on 150/48 (150 threads per 2.54 cm, 48 μm thread diameter) yellow polyester mesh stretched over an aluminum frame (a typical screen used for screen printing) by an emulsion coater, once on the print side (front) and once on the squeegee side (back), resulting in a coating weight of about 4 g. The screens were dried and exposed by 3 kW metal halide lamp exposure equipment at 1 m from the screen. An exposure calculator photopositive was used, so that one exposure trial gives 5 levels of exposure light energy. The exposure calculator photopositive has 50 μm-250 μm lines and spaces. The exposure equipment has a light-measuring sensor and measures the light energy emitted by the metal halide lamp in the equipment (1 light unit = 21 mj/cm2, measured by a Hamamatsu UV power meter C12144 with UV sensor H12684-385, which measures UV light energy between 320 nm and 400 nm). Exposed screens were soaked in a water bath for 2 minutes and washed out with a fanned spray head with tap water at room temperature at 10 cm for 1 minute 30 seconds. Water resistance was determined by hitting the hardened image immediately after image development, while the hardened material was still wet, with a concentrated stream of high-pressure water at approximately 21.1 kg-force/cm2 for 5 seconds at a distance of 30 cm. The extent of damage was visually checked. The exposed, developed, and dried screens were studied under a video microscope at 40X and 1000X.
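The "150/48" mesh spec fixes the screen geometry. Standard plain-weave mesh arithmetic (our addition, not stated in the source) gives the thread pitch, mesh opening, and fractional open area:

```python
def mesh_geometry(threads_per_25_4mm, thread_diameter_um):
    """Pitch, opening, and fractional open area of a plain-weave mesh."""
    pitch = 25400.0 / threads_per_25_4mm   # 2.54 cm = 25400 um
    opening = pitch - thread_diameter_um
    open_area = (opening / pitch) ** 2
    return pitch, opening, open_area

# 150/48 mesh: ~169 um pitch and ~121 um opening, comfortably larger
# than the 50 um lines and spaces resolved in Table 1.
pitch, opening, open_area = mesh_geometry(150, 48.0)
```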
TABLE 1
Example /       Exposure speed  Water        Hardened image           Hardened image
total mole %    (mj/cm2)        resistance   resolution               sharpness
1 / 0.11        15-21           very good    50 μm                    well defined
2 / 0.26        21-31           very good    50 μm                    well defined
3 / 0.10        15-21           very good    50 μm                    well defined
4 / 0.10        21-31           very good    50 μm                    well defined
5 / 0.26        21-31           very good    50 μm                    well defined
6 / 0.05        21-31           very good    50 μm                    well defined
7 / 0.10        21-31           very good    50 μm                    well defined
8 / 0.10        21-31           very good    100 μm                   well defined
9R / 1.37       over 200        very poor    could not be determined  very poor
                                             because the hardened
                                             image was very poor

As can be seen in Table 1, the combination of 4QP (0.050 mole %) and SBQ (0.051 mole %) in Example 1 produced a composition with faster exposure, by 6-10 mj/cm2, than 0.1 mole % 4QP (Example 3) or 0.1 mole % SBQ (Example 8). The combined addition appeared to have a synergistic effect on exposure speed. Another very significant observation is that PVA grafted with SBQ at 1.37 mole % was very inferior to the invention in terms of photo-hardening speed, water resistance, adhesion to the mesh, and hardened image sharpness, even though the PVA was grafted at 1.37 mole % and the invention at only 0.05-0.25 mole %. Furthermore, the image created with PVA-SBQ was so poorly defined that the proper exposure energy required to make hardened images was very difficult to determine. The image resolution achieved was 100 μm at best, whereas all of the examples of this invention were able to resolve 50 μm lines and spaces. The invention may be used as-is for some applications, but further compounding with other materials such as PVA-SBQ, PVA, non-photosensitive PVAc homo- or copolymer, photopolymerizable ethylenically unsaturated compounds with photopolymerization initiators, and other compatible resins, as well as surfactants and pigment dispersions, may be required to gain specific attributes.
Compounding Examples

Polyvinyl Acetate (PVAc)

Vinysol 2501 (manufactured by Daido Chemical Corporation; 50% solids, 3000 cps, pH = 5, particle size = 1.2 μm, degree of saponification 0%) is noted as PVAc in the following examples.
PVAc added with 4QP is noted as PVAc-4QP.
PVAc added with SBQ is noted as PVAc-SBQ.
PVAc added with 4QP and SBQ is noted as PVAc-4QP/SBQ.
PVA-SBQ: described in Example 9R (Reference).
Omnirad 819, 184, and 651 photopolymerization initiators (manufactured by IGM Resins B.V.) are noted as 819, 184, and 651.
Pentaerythritol triacrylate, a photopolymerizable monomer (Aldrich), is noted as PETA.
2-Isopropylthioxanthone (TCI) is noted as ITX.
Diazo SY-2 (manufactured by Showa Kako Corporation) is noted as SY-2.

All of the photosensitive compositions were mixed for 8 minutes at 1100 rpm (revolutions per minute). The mixes were left untouched overnight to remove all air bubbles before coating. The mixed and deaerated compositions described in Examples 11 through 16 were coated on 300/34 (300 threads per 2.54 cm, 34 μm thread diameter) yellow polyester mesh stretched over an aluminum frame by an emulsion coater, once on the print side (front) and once on the squeegee side (back), resulting in a coating weight of about 2 g, dried, and exposed by 3 kW metal halide lamp exposure equipment at 1 m from the screen for 3 light units (63 mj/cm2), 4 light units (84 mj/cm2), and 5 light units (105 mj/cm2), with an exposure calculator photopositive carrying a 4-level light-transmission-density filter film placed on the front side of the screen, so that one exposure trial gives 5 different exposure levels, as described in Table 2 below.

TABLE 2
                    3 light units   4 light units   5 light units
                    = 57 mj/cm2     = 74 mj/cm2     = 91 mj/cm2
No filter (100% T)       57              74              91
94% T filter             54              70              86
80% T filter             46              59              73
60% T filter             34              44              55
50% T filter             29              37              46

Exposure was done under vacuum to ensure tight contact between the photopositive and the front side of the screen, to reduce light scattering, which reduces image resolution.
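The filtered energies in Table 2 are consistent with multiplying each base energy by the filter transmission and rounding half-up; the rounding convention is our inference, not stated in the source:

```python
import math

def filtered_energy(base_mj, transmission):
    """Exposure energy behind a transmission filter, rounded half-up."""
    return math.floor(base_mj * transmission + 0.5)

base = {3: 57, 4: 74, 5: 91}  # light units -> mj/cm2, per Table 2
table = {t: [filtered_energy(base[u], t) for u in (3, 4, 5)]
         for t in (1.00, 0.94, 0.80, 0.60, 0.50)}
```

Every reconstructed row matches the tabulated values, including the half-up cases (57 × 0.50 = 28.5 → 29 and 91 × 0.50 = 45.5 → 46).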
The exposure calculator photopositive has 50 μm-250 μm lines and spaces. The exposure equipment has a light-measuring sensor and measures the light energy emitted by the metal halide lamp in the equipment (1 light unit = 21 mJ/cm², measured by a Hamamatsu UV power meter C12144 with UV sensor H12684-385 that measures UV light energy between 320 nm and 400 nm). Exposed screens were all washed out with a fanned spray head and with tap water at room temperature at 10 cm from the front side of the screen for 1 min 30 seconds. Water resistance was determined by hitting the hardened image at all five exposure steps immediately after image development, while the hardened material was still wet, with a concentrated stream of high-pressure water at approximately 21.1 kg-force/cm² for 5 seconds at a distance of 30 cm from the front of the screen. The extent of damage was visually checked. The exposed, developed (washed out), and dried screens were studied under a video microscope at 40X and 1000X. Unless otherwise stated, all amounts are in wt. %.

Example 10 (noted as RC1):
PETA 95.81
819 1.33
184 1.33
651 1.33
ITX 0.2

Example 11:
PVAc-SBQ from Example 5 15
PVA-SBQ 39
PVAc 46

Example 12:
PVAc-4QP/SBQ from Example 1 30
PVA-SBQ 35
PVAc 35

Example 13:
PVAc-4QP from Example 3 30
PVA-SBQ 56
PVAc 14

Example 14:
PVAc-4QP/SBQ from Example 1 30
PVA-SBQ 60
PVAc 10

Example 15:
PVAc-SBQ from Example 5 30
PVA-SBQ 45
PVAc 25

Example 16:
PVAc-4QP from Example 7 30
PVA-SBQ 50
PVAc 20

Example 17:
PVAc-SBQ from Example 2 35
PVA-SBQ 45
PVAc 20
RC1 10

Reference Example 17:
PVA-SBQ 20
PVAc 80

Reference Example 18:
PVA-SBQ 35
PVA 20
PVAc 35
Diazo 0.3

Reference Example 19:
PVA-SBQ 60
PVA 10
PVAc 30
Diazo 0.3
RC1 10

TABLE 3

Example   Exposure Speed (mJ/cm²)   Water Resistance   Image Resolution (μm)   Image Sharpness
11        39-41                     Good               20                      Very Sharp
12        44-48                     Good               20                      Very Sharp
13        37-41                     Good               20                      Very Sharp
14        55-59                     Good               20                      Very Sharp
15        37-41                     Good               20                      Very Sharp
16        55-59                     Good               20                      Very Sharp
17        90-100                    Good               50                      Very Sharp
18        220-240                   Poor               50                      Good
19        240-260                   Poor               50                      Good

As the experimental results from the above examples clearly show, the invention contributes to the speed of photoreaction, water resistance, and hardened-image sharpness. Furthermore, the compositions of the invention were able to resolve 20 μm lines and spaces, which is very difficult with conventional compositions with PVA-SBQ and/or with diazo. Currently, the smallest image resolution with conventional photosensitive materials is limited to 50-100 μm lines and spaces at best. The foregoing description, examples and data are illustrative of the invention described herein, and they should not be construed to unduly limit the scope of the invention or the claims, since many embodiments and variations can be made while remaining within the spirit and scope of the invention. The invention resides in the claims hereinafter appended. | 24,900
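Since the amounts above are in wt. %, the three base components of each of Examples 11-16 should total 100. A quick check (values transcribed from the example formulations; treating the RC1 and diazo entries of the later formulations as additions beyond the 100-part base is an assumption):

```python
# Check that each three-component formulation of Examples 11-16 totals 100 wt. %.
# Component amounts are transcribed from the examples above.

FORMULATIONS = {
    11: {"PVAc-SBQ (from Ex. 5)": 15, "PVA-SBQ": 39, "PVAc": 46},
    12: {"PVAc-4QP/SBQ (from Ex. 1)": 30, "PVA-SBQ": 35, "PVAc": 35},
    13: {"PVAc-4QP (from Ex. 3)": 30, "PVA-SBQ": 56, "PVAc": 14},
    14: {"PVAc-4QP/SBQ (from Ex. 1)": 30, "PVA-SBQ": 60, "PVAc": 10},
    15: {"PVAc-SBQ (from Ex. 5)": 30, "PVA-SBQ": 45, "PVAc": 25},
    16: {"PVAc-4QP (from Ex. 7)": 30, "PVA-SBQ": 50, "PVAc": 20},
}

def total_wt_percent(parts: dict) -> float:
    """Sum of the component amounts, in wt. %."""
    return sum(parts.values())

for example, parts in FORMULATIONS.items():
    assert total_wt_percent(parts) == 100, f"Example {example} does not total 100"
print("Examples 11-16 each total 100 wt. %")
```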
11860540 | DESCRIPTION OF EMBODIMENTS As used herein, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. “Optional” or “optionally” means that the subsequently described event or circumstances may or may not occur, and that description includes instances where the event or circumstance occurs and instances where it does not. The notation (Cn-Cm) means a group containing from n to m carbon atoms per group. As used herein, the term “iodized” compound indicates a compound containing iodine or a compound substituted with iodine. The terms “group” and “moiety” are interchangeable. In chemical formulae, the broken line denotes a valence bond, Me stands for methyl, and Ac for acetyl. The abbreviations and acronyms have the following meaning.
EB: electron beam
EUV: extreme ultraviolet
Mw: weight average molecular weight
Mn: number average molecular weight
Mw/Mn: molecular weight dispersity
GPC: gel permeation chromatography
PEB: post-exposure bake
PAG: photoacid generator
LER: line edge roughness
LWR: line width roughness
CDU: critical dimension uniformity
Positive Resist Composition
One embodiment of the invention is a positive resist composition comprising a base polymer comprising recurring units (a) having the structure of an ammonium salt of a sulfonamide having an iodized aromatic ring and recurring units of at least one type selected from recurring units (b1) having a carboxyl group in which the hydrogen atom is substituted by an acid labile group and recurring units (b2) having a phenolic hydroxyl group in which the hydrogen atom is substituted by an acid labile group.
Base Polymer
Preferably, the recurring units (a) have the formula (a). In formula (a), m is an integer of 1 to 5, n is an integer of 0 to 3, 1≤m+n≤5, p is 1 or 2, and q is 1 or 2. In formula (a), RAis hydrogen or methyl. X1Ais a single bond, ester bond or amide bond.
X1Bis a single bond or a C1-C20(p+1)-valent hydrocarbon group which may contain an ether bond, carbonyl moiety, ester bond, amide bond, sultone moiety, lactam moiety, carbonate bond, halogen, hydroxyl moiety or carboxyl moiety. The C1-C20(p+1)-valent hydrocarbon group represented by X1Bis a group obtained by removing (p+1) number of hydrogen atoms from a C1-C20aliphatic hydrocarbon or C6-C20aromatic hydrocarbon, and may be straight, branched or cyclic. Examples thereof include alkanediyl groups such as methanediyl, ethane-1,1-diyl, ethane-1,2-diyl, propane-1,2-diyl, propane-1,3-diyl, propane-2,2-diyl, butane-1,2-diyl, butane-1,3-diyl, butane-1,4-diyl, butane-2,2-diyl, butane-2,3-diyl, 2-methylpropane-1,3-diyl, pentane-1,5-diyl, hexane-1,6-diyl, heptane-1,7-diyl, octane-1,8-diyl, nonane-1,9-diyl, decane-1,10-diyl, undecane-1,11-diyl, and dodecane-1,12-diyl; C3-C10cyclic saturated hydrocarbylene groups such as cyclopentanediyl, cyclohexanediyl, norbornanediyl and adamantanediyl; arylene groups such as phenylene and naphthylene; and groups obtained by combining the foregoing groups; as well as trivalent groups obtained by further removing one hydrogen atom from the foregoing. In formula (a), R1, R2and R3are each independently hydrogen, a C1-C12alkyl group, C2-C12alkenyl group, C6-C12aryl group, or C7-C12aralkyl group. R1and R2, or R1and X1Bmay bond together to form a ring with the nitrogen atom to which they are attached; the ring may contain oxygen, sulfur, nitrogen or a double bond, with the ring being preferably of 3 to 12 carbon atoms. Of the groups represented by R1, R2and R3, the C1-C12alkyl group may be straight, branched or cyclic and examples thereof include methyl, ethyl, n-propyl, isopropyl, n-butyl, isobutyl, sec-butyl, tert-butyl, n-pentyl, n-hexyl, n-heptyl, n-octyl, n-nonyl, n-decyl, and n-dodecyl. Examples of the C2-C12alkenyl group include vinyl, 1-propenyl, 2-propenyl, butenyl and hexenyl.
Examples of the C6-C12aryl group include phenyl, tolyl, xylyl, 1-naphthyl and 2-naphthyl. Typical of the C7-C12aralkyl group is benzyl. In formula (a), R4is a hydroxyl group, optionally halogenated C1-C6saturated hydrocarbyl group, optionally halogenated C1-C6saturated hydrocarbyloxy group, optionally halogenated C2-C7saturated hydrocarbylcarbonyloxy group, optionally halogenated C2-C7saturated hydrocarbyloxycarbonyl group, optionally halogenated C1-C4saturated hydrocarbylsulfonyloxy group, fluorine, chlorine, bromine, amino, nitro, cyano, —N(R4A)—C(═O)—R4B, or —N(R4A)—C(═O)—O—R4B. R4Ais hydrogen or a C1-C6saturated hydrocarbyl group. R4Bis a C1-C6saturated hydrocarbyl group, C2-C8unsaturated aliphatic hydrocarbyl group, C6-C14aryl group or C7-C15aralkyl group. The C1-C6saturated hydrocarbyl group represented by R4, R4Aand R4Bmay be straight, branched or cyclic, and examples thereof include C1-C6alkyl groups such as methyl, ethyl, n-propyl, isopropyl, n-butyl, isobutyl, sec-butyl, tert-butyl, n-pentyl, n-hexyl; and C3-C6cycloalkyl groups such as cyclopropyl, cyclobutyl, cyclopentyl, and cyclohexyl. The saturated hydrocarbyl moiety in the C1-C6saturated hydrocarbyloxy group, C2-C7saturated hydrocarbylcarbonyloxy group, or C2-C7saturated hydrocarbyloxycarbonyl group, represented by R4, is as exemplified above for the saturated hydrocarbyl group. The saturated hydrocarbyl moiety in the C1-C4saturated hydrocarbylsulfonyloxy group represented by R4is as exemplified above for the saturated hydrocarbyl group, but of 1 to 4 carbon atoms. Of the groups represented by R4B, the C2-C8unsaturated aliphatic hydrocarbyl group may be straight, branched or cyclic, and examples thereof include C2-C8alkenyl groups such as vinyl, 1-propenyl, 2-propenyl, butenyl and hexenyl; and C3-C8unsaturated cycloaliphatic hydrocarbyl groups such as cyclohexenyl. Examples of the C6-C14aryl group include phenyl, naphthyl, and fluorenyl.
Examples of the C7-C15aralkyl group include benzyl, phenethyl, naphthylmethyl, naphthylethyl, fluorenylmethyl and fluorenylethyl. Among others, R4is preferably selected from fluorine, chlorine, bromine, hydroxyl, amino, C1-C3saturated hydrocarbyl, C1-C3saturated hydrocarbyloxy, C2-C4saturated hydrocarbylcarbonyloxy, —N(R4A)—C(═O)—R4B, and —N(R4A)—C(═O)—O—R4B. In formula (a), R5is a C1-C10(q+1)-valent hydrocarbon group. The (q+1)-valent hydrocarbon group is a group obtained by removing (q+1) number of hydrogen atoms from a C1-C10aliphatic hydrocarbon or C6-C10aromatic hydrocarbon and may be straight, branched or cyclic. Examples thereof include C1-C10alkanediyl groups such as methanediyl, ethane-1,1-diyl, ethane-1,2-diyl, propane-1,2-diyl, propane-1,3-diyl, propane-2,2-diyl, butane-1,2-diyl, butane-1,3-diyl, butane-1,4-diyl, butane-2,2-diyl, butane-2,3-diyl, 2-methylpropane-1,3-diyl, pentane-1,5-diyl, hexane-1,6-diyl, heptane-1,7-diyl, octane-1,8-diyl, nonane-1,9-diyl, and decane-1,10-diyl; C3-C10cyclic saturated hydrocarbylene groups such as cyclopentanediyl, cyclohexanediyl, norbornanediyl, and adamantanediyl; C6-C10arylene groups such as phenylene and naphthylene; combinations thereof; and trivalent forms of the foregoing groups with one hydrogen atom being further removed. In formula (a), R6is a C1-C6fluorinated saturated hydrocarbyl group or C6-C10fluorinated aryl group. The C1-C6fluorinated saturated hydrocarbyl group may be straight, branched or cyclic and examples thereof are those exemplified above for the C1-C6saturated hydrocarbyl group in which some or all hydrogen atoms are substituted by fluorine. Examples of the C6-C10fluorinated aryl group include phenyl, naphthyl and other aryl groups in which some or all hydrogen atoms are substituted by fluorine, and groups obtained by combining the foregoing. In formula (a), L1is a single bond, ether bond, carbonyl group, ester bond, amide bond, carbonate bond, or C1-C20hydrocarbylene group.
The hydrocarbylene group may be saturated or unsaturated, and straight, branched or cyclic, and may contain an ether bond, carbonyl moiety, ester bond, amide bond, sultone ring, lactam ring, carbonate bond, halogen, hydroxyl moiety or carboxyl moiety. Examples of the cation in the monomer from which recurring units (a) are derived are shown below, but not limited thereto. Herein RAis as defined above. Examples of the anion in the monomer from which recurring units (a) are derived are shown below, but not limited thereto. The recurring unit (a) functions as a quencher due to the structure of an ammonium salt of a sulfonamide having an iodized aromatic ring. In this sense, the base polymer may be referred to as a quencher-bound polymer. The quencher-bound polymer has the advantages of a remarkable acid diffusion-suppressing effect and improved resolution. In addition, since the recurring unit (a) contains an iodine atom or atoms having high absorption, it generates secondary electrons to promote decomposition of the acid generator during exposure, leading to a high sensitivity. As a result, a high sensitivity, high resolution, and low LWR or improved CDU are achieved at the same time. Iodine is less soluble in alkaline developer because of a large atomic weight. When iodine is attached to the polymer backbone, a resist film in the exposed region is reduced in alkaline solubility, leading to losses of resolution and sensitivity and causing defect formation. When the recurring unit (a) is in an alkaline developer, the iodized sulfonamide in recurring unit (a) forms a salt with an alkaline compound in the developer, separating from the polymer backbone. This ensures sufficient alkaline dissolution and minimizes defect formation. The monomer from which recurring units (a) are derived is a polymerizable ammonium salt monomer. 
The ammonium salt monomer is obtainable from neutralization reaction of a monomer or amine compound of the structure corresponding to the cation moiety in the recurring unit from which one nitrogen-bonded hydrogen atom has been eliminated with a sulfonamide. The recurring unit (a) is formed from polymerization reaction using the ammonium salt monomer. Alternatively, the recurring unit (a) is formed by carrying out polymerization reaction of the monomer or amine compound to synthesize a polymer, adding a sulfonamide to the reaction solution or a solution of the purified polymer, and carrying out neutralization reaction. The preferred recurring units (b1) and (b2) are recurring units having the formulae (b1) and (b2), respectively. In formulae (b1) and (b2), RAis each independently hydrogen or methyl. Y1is a single bond, phenylene, naphthylene, or a C1-C12linking group containing an ester bond and/or lactone ring. Y2is a single bond or ester bond. Y3is a single bond, ether bond or ester bond. R11and R12each are an acid labile group. R13is a C1-C6saturated hydrocarbyl group, C1-C6saturated hydrocarbyloxy group, C2-C6saturated hydrocarbylcarbonyl group, C2-C6saturated hydrocarbylcarbonyloxy group, C2-C6saturated hydrocarbyloxycarbonyl group, halogen, nitro group, or cyano group. R14is a single bond or a C1-C6saturated hydrocarbylene group in which some carbon may be replaced by an ether bond or ester bond. The subscript “a” is 1 or 2, “b” is an integer of 0 to 4, and 1≤a+b≤5. Examples of the monomer from which recurring units (b1) are derived are shown below, but not limited thereto. Herein RAand R11are as defined above. Examples of the monomer from which recurring units (b2) are derived are shown below, but not limited thereto. Herein RAand R12are as defined above. The acid labile groups represented by R11and R12may be selected from a variety of such groups, for example, groups of the following formulae (AL-1) to (AL-3). In formula (AL-1), c is an integer of 0 to 6. 
RL1is a C4-C20, preferably C4-C15tertiary hydrocarbyl group, a trihydrocarbylsilyl group in which each hydrocarbyl moiety is a C1-C6saturated hydrocarbyl moiety, a C4-C20saturated hydrocarbyl group containing a carbonyl moiety, ether bond or ester bond, or a group of formula (AL-3). The tertiary hydrocarbyl group refers to a group obtained by removing hydrogen on a tertiary carbon atom in a hydrocarbon. The tertiary hydrocarbyl group RL1may be saturated or unsaturated and branched or cyclic. Examples thereof include tert-butyl, tert-pentyl, 1,1-diethylpropyl, 1-ethylcyclopentyl, 1-butylcyclopentyl, 1-ethylcyclohexyl, 1-butylcyclohexyl, 1-ethyl-2-cyclopentenyl, 1-ethyl-2-cyclohexenyl, and 2-methyl-2-adamantyl. Examples of the trihydrocarbylsilyl group include trimethylsilyl, triethylsilyl, and dimethyl-tert-butylsilyl. The saturated hydrocarbyl group containing a carbonyl moiety, ether bond or ester bond may be straight, branched or cyclic, preferably cyclic, and examples thereof include 3-oxocyclohexyl, 4-methyl-2-oxooxan-4-yl, 5-methyl-2-oxooxolan-5-yl, 2-tetrahydropyranyl and 2-tetrahydrofuranyl. Examples of the acid labile group having formula (AL-1) include tert-butoxycarbonyl, tert-butoxycarbonylmethyl, tert-pentyloxycarbonyl, tert-pentyloxycarbonylmethyl, 1,1-diethylpropyloxycarbonyl, 1,1-diethylpropyloxycarbonylmethyl, 1-ethylcyclopentyloxycarbonyl, 1-ethylcyclopentyloxycarbonylmethyl, 1-ethyl-2-cyclopentenyloxycarbonyl, 1-ethyl-2-cyclopentenyloxycarbonylmethyl, 1-ethoxyethoxycarbonylmethyl, 2-tetrahydropyranyloxycarbonylmethyl, and 2-tetrahydrofuranyloxycarbonylmethyl. Other examples of the acid labile group having formula (AL-1) include groups having the formulae (AL-1)-1 to (AL-1)-10. In formulae (AL-1)-1 to (AL-1)-10, c is as defined above. RL8is each independently a C1-C10saturated hydrocarbyl group or C6-C20aryl group. RL9is hydrogen or a C1-C10saturated hydrocarbyl group. RL10is a C2-C10saturated hydrocarbyl group or C6-C20aryl group.
The saturated hydrocarbyl group may be straight, branched or cyclic. In formula (AL-2), RL2and RL3are each independently hydrogen or a C1-C18, preferably C1-C10saturated hydrocarbyl group. The saturated hydrocarbyl group may be straight, branched or cyclic and examples thereof include methyl, ethyl, propyl, isopropyl, n-butyl, sec-butyl, tert-butyl, cyclopentyl, cyclohexyl, 2-ethylhexyl and n-octyl. In formula (AL-2), RL4is a C1-C18, preferably C1-C10hydrocarbyl group which may contain a heteroatom. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic and typical examples thereof include C1-C18saturated hydrocarbyl groups, in which some hydrogen may be substituted by hydroxyl, alkoxy, oxo, amino or alkylamino. Examples of the substituted saturated hydrocarbyl group are shown below. A pair of RL2and RL3, RL2and RL4, or RL3and RL4may bond together to form a ring with the carbon atom or carbon and oxygen atoms to which they are attached. RL2and RL3, RL2and RL4, or RL3and RL4to form a ring are each independently a C1-C18, preferably C1-C10alkanediyl group. The ring thus formed is preferably of 3 to 10, more preferably 4 to 10 carbon atoms. Of the acid labile groups having formula (AL-2), suitable straight or branched groups include those having formulae (AL-2)-1 to (AL-2)-69, but are not limited thereto. Of the acid labile groups having formula (AL-2), suitable cyclic groups include tetrahydrofuran-2-yl, 2-methyltetrahydrofuran-2-yl, tetrahydropyran-2-yl, and 2-methyltetrahydropyran-2-yl. Also included are acid labile groups having the following formulae (AL-2a) and (AL-2b). The base polymer may be crosslinked within the molecule or between molecules with these acid labile groups. In formulae (AL-2a) and (AL-2b), RL11and RL12are each independently hydrogen or a C1-C8saturated hydrocarbyl group which may be straight, branched or cyclic.
Also, RL11and RL12may bond together to form a ring with the carbon atom to which they are attached, and in this case, RL11and RL12are each independently a C1-C8alkanediyl group. RL13is each independently a C1-C10saturated hydrocarbylene group which may be straight, branched or cyclic. The subscripts d and e are each independently an integer of 0 to 10, preferably 0 to 5, and f is an integer of 1 to 7, preferably 1 to 3. In formulae (AL-2a) and (AL-2b), LAis a (f+1)-valent C1-C50aliphatic saturated hydrocarbon group, (f+1)-valent C3-C50alicyclic saturated hydrocarbon group, (f+1)-valent C6-C50aromatic hydrocarbon group or (f+1)-valent C3-C50heterocyclic group. In these groups, some carbon may be replaced by a heteroatom-containing moiety, or some carbon-bonded hydrogen may be substituted by a hydroxyl, carboxyl, acyl moiety or fluorine. LAis preferably a C1-C20saturated hydrocarbon group such as saturated hydrocarbylene, trivalent saturated hydrocarbon or tetravalent saturated hydrocarbon group, or C6-C30arylene group. The saturated hydrocarbon group may be straight, branched or cyclic. LBis —C(═O)—O—, —NH—C(═O)—O— or —NH—C(═O)—NH—. Examples of the crosslinking acetal groups having formulae (AL-2a) and (AL-2b) include groups having the formulae (AL-2)-70 to (AL-2)-77. In formula (AL-3), RL5, RL6and RL7are each independently a C1-C20hydrocarbyl group which may contain a heteroatom such as oxygen, sulfur, nitrogen or fluorine. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic. Examples thereof include C1-C20alkyl groups, C3-C20cyclic saturated hydrocarbyl groups, C2-C20alkenyl groups, C3-C20cyclic unsaturated aliphatic hydrocarbyl groups, and C6-C10aryl groups. A pair of RL5and RL6, RL5and RL7, or RL6and RL7may bond together to form a C3-C20aliphatic ring with the carbon atom to which they are attached.
Examples of the group having formula (AL-3) include tert-butyl, 1,1-diethylpropyl, 1-ethylnorbornyl, 1-methylcyclopentyl, 1-ethylcyclopentyl, 1-isopropylcyclopentyl, 1-methylcyclohexyl, 2-(2-methyl)adamantyl, 2-(2-ethyl)adamantyl, and tert-pentyl. Examples of the group having formula (AL-3) also include groups having the formulae (AL-3)-1 to (AL-3)-19. In formulae (AL-3)-1 to (AL-3)-19, RL14is each independently a C1-C8saturated hydrocarbyl group or C6-C20aryl group. RL15and RL17are each independently hydrogen or a C1-C20saturated hydrocarbyl group. RL16is a C6-C20aryl group. The saturated hydrocarbyl group may be straight, branched or cyclic. Typical of the aryl group is phenyl. RFis fluorine or trifluoromethyl, and g is an integer of 1 to 5. Other examples of the group having formula (AL-3) include groups having the formulae (AL-3)-20 and (AL-3)-21. The base polymer may be crosslinked within the molecule or between molecules with these acid labile groups. In formulae (AL-3)-20 and (AL-3)-21, RL14is as defined above. RL18is a C1-C20(h+1)-valent saturated hydrocarbylene group or C6-C20(h+1)-valent arylene group, which may contain a heteroatom such as oxygen, sulfur or nitrogen, wherein h is an integer of 1 to 3. The saturated hydrocarbylene group may be straight, branched or cyclic. Examples of the monomer from which recurring units containing an acid labile group of formula (AL-3) are derived include (meth)acrylates having an exo-form structure represented by the formula (AL-3)-22. In formula (AL-3)-22, RAis as defined above. RLc1is a C1-C8saturated hydrocarbyl group or an optionally substituted C6-C20aryl group; the saturated hydrocarbyl group may be straight, branched or cyclic. RLc2to RLc11are each independently hydrogen or a C1-C15hydrocarbyl group which may contain a heteroatom; oxygen is a typical heteroatom. Suitable hydrocarbyl groups include C1-C15alkyl groups and C6-C15aryl groups. 
Alternatively, a pair of RLc2and RLc3, RLc4and RLc6, RLc4and RLc7, RLc5and RLc7, RLc3and RLc11, RLc6and RLc10, RLc8and RLc9, or RLc9and RLc10, taken together, may form a ring with the carbon atom to which they are attached, and in this event, the ring-forming group is a C1-C15hydrocarbylene group which may contain a heteroatom. Also, a pair of RLc2and RLc11, RLc8and RLc11, or RLc4and RLc6which are attached to vicinal carbon atoms may bond together directly to form a double bond. The formula also represents an enantiomer. Examples of the monomer from which recurring units having formula (AL-3)-22 are derived are described in U.S. Pat. No. 6,448,420 (JP-A 2000-327633). Illustrative non-limiting examples of suitable monomers are given below. RAis as defined above. Examples of the monomer from which the recurring units having an acid labile group of formula (AL-3) are derived include (meth)acrylates having a furandiyl, tetrahydrofurandiyl or oxanorbornanediyl group as represented by the following formula (AL-3)-23. In formula (AL-3)-23, RAis as defined above. RLc12and RLc13are each independently a C1-C10hydrocarbyl group, or RLc12and RLc13, taken together, may form an aliphatic ring with the carbon atom to which they are attached. RLc14is furandiyl, tetrahydrofurandiyl or oxanorbornanediyl. RLc15is hydrogen or a C1-C10hydrocarbyl group which may contain a heteroatom. The hydrocarbyl group may be straight, branched or cyclic, and is typically a C1-C10saturated hydrocarbyl group. Examples of the monomer from which the recurring units having formula (AL-3)-23 are derived are shown below, but not limited thereto. Herein RAis as defined above. In the base polymer, recurring units (c) having an adhesive group may be incorporated. The adhesive group is selected from hydroxyl, carboxyl, lactone ring, carbonate, thiocarbonate, carbonyl, cyclic acetal, ether bond, ester bond, sulfonic acid ester bond, cyano, amide, —O—C(═O)—S— and —O—C(═O)—NH—.
Examples of the monomer from which recurring units (c) are derived are given below, but not limited thereto. Herein RAis as defined above. In a further embodiment, recurring units (d) of at least one type selected from recurring units having the following formulae (d1), (d2) and (d3) may be incorporated in the base polymer. These units are simply referred to as recurring units (d1), (d2) and (d3), which may be used alone or in combination of two or more types. In formulae (d1) to (d3), RAis each independently hydrogen or methyl. Z1is a single bond, or a C1-C6aliphatic hydrocarbylene group, phenylene group, naphthylene group, or a C7-C18group obtained by combining the foregoing, or —O—Z11—, —C(═O)—O—Z11— or —C(═O)—NH—Z11—, wherein Z11is a C1-C6aliphatic hydrocarbylene group, phenylene group, naphthylene group, or a C7-C18group obtained by combining the foregoing, which may contain a carbonyl moiety, ester bond, ether bond or hydroxyl moiety. Z2is a single bond or ester bond. Z3is a single bond, —Z31—C(═O)—O—, —Z31—O—, or —Z31—O—C(═O)—, wherein Z31is a C1-C12hydrocarbylene group, phenylene group or a C7-C18group obtained by combining the foregoing, which may contain a carbonyl moiety, ester bond, ether bond, bromine or iodine. Z4is methylene, 2,2,2-trifluoro-1,1-ethanediyl or carbonyl. Z5is a single bond, methylene, ethylene, phenylene, fluorinated phenylene, trifluoromethyl-substituted phenylene group, —O—Z51—, —C(═O)—O—Z51— or —C(═O)—NH—Z51—, wherein Z51is a C1-C6aliphatic hydrocarbylene group, phenylene group, fluorinated phenylene group, or trifluoromethyl-substituted phenylene group, which may contain a carbonyl moiety, ester bond, ether bond or hydroxyl moiety. In formulae (d1) to (d3), R21to R28are each independently a C1-C20hydrocarbyl group which may contain a heteroatom. The hydrocarbyl group may be straight, branched or cyclic and examples thereof are as exemplified above for the hydrocarbyl group represented by R101to R105in formulae (1-1) and (1-2).
Also, a pair of R23and R24, or R26and R27may bond together to form a ring with the sulfur atom to which they are attached. Examples of the ring are as will be exemplified later for the ring that R101and R102in formula (1-1), taken together, form with the sulfur atom to which they are attached. In formula (d1), M−is a non-nucleophilic counter ion. Examples of the non-nucleophilic counter ion include halide ions such as chloride and bromide ions; fluoroalkylsulfonate ions such as triflate, 1,1,1-trifluoroethanesulfonate, and nonafluorobutanesulfonate; arylsulfonate ions such as tosylate, benzenesulfonate, 4-fluorobenzenesulfonate, and 1,2,3,4,5-pentafluorobenzenesulfonate; alkylsulfonate ions such as mesylate and butanesulfonate; imide ions such as bis(trifluoromethylsulfonyl)imide, bis(perfluoroethylsulfonyl)imide and bis(perfluorobutylsulfonyl)imide; methide ions such as tris(trifluoromethylsulfonyl)methide and tris(perfluoroethylsulfonyl)methide. Also included are sulfonate ions having fluorine substituted at α-position as represented by the formula (d1-1) and sulfonate ions having fluorine substituted at α-position and trifluoromethyl at β-position as represented by the formula (d1-2). In formula (d1-1), R31is hydrogen, or a C1-C20hydrocarbyl group which may contain an ether bond, ester bond, carbonyl moiety, lactone ring, or fluorine atom. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic and examples thereof are as will be exemplified later for the hydrocarbyl group R111in formula (1A′). In formula (d1-2), R32is hydrogen, or a C1-C30hydrocarbyl group or C2-C30hydrocarbylcarbonyl group which may contain an ether bond, ester bond, carbonyl moiety or lactone ring. The hydrocarbyl group and hydrocarbyl moiety in the hydrocarbylcarbonyl group may be saturated or unsaturated and straight, branched or cyclic and examples thereof are as will be exemplified later for the hydrocarbyl group R111in formula (1A′). 
Examples of the cation in the monomer from which recurring unit (d1) is derived are shown below, but not limited thereto. RAis as defined above. Examples of the cation in the monomer from which recurring unit (d2) or (d3) is derived are as will be exemplified later for the cation in the sulfonium salt having formula (1-1). Examples of the anion in the monomer from which recurring unit (d2) is derived are shown below, but not limited thereto. RAis as defined above. Examples of the anion in the monomer from which recurring unit (d3) is derived are shown below, but not limited thereto. RAis as defined above. Recurring units (d1) to (d3) have the function of acid generator. The attachment of an acid generator to the polymer main chain is effective in restraining acid diffusion, thereby preventing a reduction of resolution due to blur by acid diffusion. Also LER, LWR and CDU are improved since the acid generator is uniformly distributed. When a base polymer comprising recurring units (d), i.e., polymer-bound acid generator is used, an acid generator of addition type (to be described later) may be omitted. The base polymer may further include recurring units (e) which contain iodine but no amino group. Examples of the monomer from which recurring units (e) are derived are shown below, but not limited thereto. RAis as defined above. Besides the recurring units described above, further recurring units (f) may be incorporated in the base polymer, which are derived from such monomers as styrene, vinylnaphthalene, indene, acenaphthylene, coumarin, and coumarone.
In the base polymer comprising recurring units (a), (b1), (b2), (c), (d1), (d2), (d3), (e), and (f), the fractions of these units are: preferably 0<a<1.0, 0≤b1≤0.9, 0≤b2≤0.9, 0<b1+b2≤0.9, 0≤c≤0.9, 0≤d1≤0.5, 0≤d2≤0.5, 0≤d3≤0.5, 0≤d1+d2+d3≤0.5, 0≤e≤0.5, and 0≤f≤0.5; more preferably 0.001≤a≤0.8, 0≤b1≤0.8, 0<b2≤0.8, 0≤b1+b2≤0.8, 0≤c≤0.8, 0≤d1≤0.4, 0≤d2≤0.4, 0≤d3≤0.4, 0≤d1+d2+d3≤0.4, 0≤e≤0.4, and 0≤f≤0.4; and even more preferably 0.01≤a≤0.7, 0<b1≤0.7, 0≤b2≤0.7, 0≤b1+b2≤0.7, 0≤c≤0.7, 0≤d1≤0.3, 0≤d2≤0.3, 0≤d3≤0.3, 0≤d1+d2+d3≤0.3, 0≤e≤0.3, and 0≤f≤0.3. Notably, a+b1+b2+c+d1+d2+d3+e+f=1.0. The base polymer may be synthesized by any desired methods, for example, by dissolving suitable monomers selected from the monomers corresponding to the foregoing recurring units in an organic solvent, adding a radical polymerization initiator thereto, and heating for polymerization. Examples of the organic solvent which can be used for polymerization include toluene, benzene, tetrahydrofuran (THF), diethyl ether, and dioxane. Examples of the polymerization initiator used herein include 2,2′-azobisisobutyronitrile (AIBN), 2,2′-azobis(2,4-dimethylvaleronitrile), dimethyl 2,2′-azobis(2-methylpropionate), benzoyl peroxide, and lauroyl peroxide. Preferably the reaction temperature is 50 to 80° C., and the reaction time is 2 to 100 hours, more preferably 5 to 20 hours. In the case of a monomer having a hydroxyl group, the hydroxyl group may be replaced by an acetal group susceptible to deprotection with acid, typically ethoxyethoxy, prior to polymerization, and the polymerization be followed by deprotection with weak acid and water. Alternatively, the hydroxyl group may be replaced by an acetyl, formyl, pivaloyl or similar group prior to polymerization, and the polymerization be followed by alkaline hydrolysis. When hydroxystyrene or hydroxyvinylnaphthalene is copolymerized, an alternative method is possible.
Specifically, acetoxystyrene or acetoxyvinylnaphthalene is used instead of hydroxystyrene or hydroxyvinylnaphthalene, and after polymerization, the acetoxy group is deprotected by alkaline hydrolysis, thereby converting the polymer product to hydroxystyrene or hydroxyvinylnaphthalene. For alkaline hydrolysis, a base such as aqueous ammonia or triethylamine may be used. Preferably the reaction temperature is −20° C. to 100° C., more preferably 0° C. to 60° C., and the reaction time is 0.2 to 100 hours, more preferably 0.5 to 20 hours. The base polymer should preferably have a weight average molecular weight (Mw) in the range of 1,000 to 500,000, and more preferably 2,000 to 30,000, as measured by GPC versus polystyrene standards using tetrahydrofuran (THF) solvent. With too low a Mw, the resist composition may become less heat resistant. A polymer with too high a Mw may lose alkaline solubility and give rise to a footing phenomenon after pattern formation. If a base polymer has a wide molecular weight distribution or dispersity (Mw/Mn), which indicates the presence of lower and higher molecular weight polymer fractions, there is a possibility that foreign matter is left on the pattern or the pattern profile is degraded. The influences of Mw and Mw/Mn become stronger as the pattern rule becomes finer. Therefore, the base polymer should preferably have a narrow dispersity (Mw/Mn) of 1.0 to 2.0, especially 1.0 to 1.5, in order to provide a resist composition suitable for micropatterning to a small feature size. The base polymer may be a blend of two or more polymers which differ in compositional ratio, Mw or Mw/Mn. It may also be a blend of a polymer containing recurring units (a) and a polymer not containing recurring units (a).
Acid Generator
The positive resist composition may contain an acid generator capable of generating a strong acid, also referred to as acid generator of addition type.
As used herein, the “strong acid” is a compound having a sufficient acidity to induce deprotection reaction of acid labile groups on the base polymer. The acid generator is typically a compound (PAG) capable of generating an acid upon exposure to actinic ray or radiation. Although the PAG used herein may be any compound capable of generating an acid upon exposure to high-energy radiation, those compounds capable of generating sulfonic acid, imidic acid (imide acid) or methide acid are preferred. Suitable PAGs include sulfonium salts, iodonium salts, sulfonyldiazomethane, N-sulfonyloxyimide, and oxime-O-sulfonate acid generators. Suitable PAGs are as exemplified in U.S. Pat. No. 7,537,880 (JP-A 2008-111103, paragraphs [0122]-[0142]). Also sulfonium salts having the formula (1-1) and iodonium salts having the formula (1-2) are useful PAGs. In formulae (1-1) and (1-2), R101to R105are each independently a halogen atom or a C1-C20hydrocarbyl group which may contain a heteroatom. Suitable halogen atoms include fluorine, chlorine, bromine and iodine. The C1-C20hydrocarbyl group represented by R101to R105may be saturated or unsaturated and straight, branched or cyclic. 
Examples thereof include C1-C20alkyl groups such as methyl, ethyl, n-propyl, isopropyl, n-butyl, isobutyl, sec-butyl, tert-butyl, n-pentyl, n-hexyl, n-octyl, n-nonyl, n-decyl, undecyl, dodecyl, tridecyl, tetradecyl, pentadecyl, heptadecyl, octadecyl, nonadecyl and icosyl; C3-C20cyclic saturated hydrocarbyl groups such as cyclopropyl, cyclopentyl, cyclohexyl, cyclopropylmethyl, 4-methylcyclohexyl, cyclohexylmethyl, norbornyl, adamantyl; C2-C20alkenyl groups such as vinyl, propenyl, butenyl, hexenyl; C2-C20alkynyl groups such as ethynyl, propynyl, butynyl; C3-C20unsaturated alicyclic hydrocarbyl groups such as cyclohexenyl and norbornenyl; C6-C20aryl groups such as phenyl, methylphenyl, ethylphenyl, n-propylphenyl, isopropylphenyl, n-butylphenyl, isobutylphenyl, sec-butylphenyl, tert-butylphenyl, naphthyl, methylnaphthyl, ethylnaphthyl, n-propylnaphthyl, isopropylnaphthyl, n-butylnaphthyl, isobutylnaphthyl, sec-butylnaphthyl, tert-butylnaphthyl; and C7-C20aralkyl groups such as benzyl and phenethyl as well as groups obtained by combining the foregoing. In these groups, some or all of the hydrogen atoms may be substituted by a moiety containing a heteroatom such as oxygen, sulfur, nitrogen or halogen, or some carbon may be replaced by a moiety containing a heteroatom such as oxygen, sulfur or nitrogen, so that the group may contain a hydroxyl moiety, cyano moiety, carbonyl moiety, ether bond, ester bond, sulfonate bond, carbonate bond, lactone ring, sultone ring, carboxylic anhydride or haloalkyl moiety. Also, a pair of R101and R102may bond together to form a ring with the sulfur atom to which they are attached. Preferred examples of the ring include the following structures. Herein the broken line denotes a point of attachment to R103. Examples of the cation of the sulfonium salt having formula (1-1) are shown below, but not limited thereto. Examples of the cation of the iodonium salt having formula (1-2) are shown below, but not limited thereto. 
In formulae (1-1) and (1-2), Xa−is an anion selected from the formulae (1A) to (1D). In formula (1A), Rfais fluorine or a C1-C40hydrocarbyl group which may contain a heteroatom. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic, and examples thereof are as will be exemplified below for the hydrocarbyl group R111in formula (1A′). Of the anions of formula (1A), a structure having formula (1A′) is preferred. In formula (1A′), RHFis hydrogen or trifluoromethyl, preferably trifluoromethyl. R111is a C1-C38hydrocarbyl group which may contain a heteroatom. Suitable heteroatoms include oxygen, nitrogen, sulfur and halogen, with oxygen being preferred. Of the hydrocarbyl groups, those of 6 to 30 carbon atoms are preferred because a high resolution is available in fine pattern formation. The hydrocarbyl group R111may be saturated or unsaturated and straight, branched or cyclic. Examples thereof include C1-C38alkyl groups such as methyl, ethyl, propyl, isopropyl, butyl, isobutyl, sec-butyl, tert-butyl, pentyl, neopentyl, hexyl, heptyl, 2-ethylhexyl, nonyl, undecyl, tridecyl, pentadecyl, heptadecyl, icosanyl; C3-C38cyclic saturated hydrocarbyl groups such as cyclopentyl, cyclohexyl, 1-adamantyl, 2-adamantyl, 1-adamantylmethyl, norbornyl, norbornylmethyl, tricyclodecanyl, tetracyclododecanyl, tetracyclododecanylmethyl, dicyclohexylmethyl; C2-C38unsaturated aliphatic hydrocarbyl groups such as allyl and 3-cyclohexenyl; C6-C38aryl groups such as phenyl, 1-naphthyl and 2-naphthyl; C7-C38aralkyl groups such as benzyl and diphenylmethyl; and groups obtained by combining the foregoing.
In the foregoing groups, some or all of the hydrogen atoms may be substituted by a moiety containing a heteroatom such as oxygen, sulfur, nitrogen or halogen, or some carbon may be replaced by a moiety containing a heteroatom such as oxygen, sulfur or nitrogen, so that the group may contain a hydroxyl, cyano, carbonyl, ether bond, ester bond, sulfonic acid ester bond, carbonate, lactone ring, sultone ring, carboxylic anhydride or haloalkyl moiety. Examples of the heteroatom-containing hydrocarbyl group include tetrahydrofuryl, methoxymethyl, ethoxymethyl, methylthiomethyl, acetamidomethyl, trifluoroethyl, (2-methoxyethoxy)methyl, acetoxymethyl, 2-carboxy-1-cyclohexyl, 2-oxopropyl, 4-oxo-1-adamantyl, and 3-oxocyclohexyl. With respect to the synthesis of the sulfonium salt having an anion of formula (1A′), reference is made to JP-A 2007-145797, JP-A 2008-106045, JP-A 2009-007327, and JP-A 2009-258695. Also useful are the sulfonium salts described in JP-A 2010-215608, JP-A 2012-041320, JP-A 2012-106986, and JP-A 2012-153644. Examples of the anion having formula (1A) are as exemplified for the anion having formula (1A) in US 20180335696 (JP-A 2018-197853). In formula (1B), Rfb1and Rfb2are each independently fluorine or a C1-C40hydrocarbyl group which may contain a heteroatom. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic, and examples thereof are as exemplified above for the hydrocarbyl group R111in formula (1A′). Preferably Rfb1and Rfb2each are fluorine or a straight C1-C4fluorinated alkyl group. A pair of Rfb1and Rfb2may bond together to form a ring with the linkage (—CF2—SO2—N—SO2—CF2—) to which they are attached, and preferably the pair is a fluorinated ethylene or fluorinated propylene group. In formula (1C), Rfc1, Rfc2and Rfc3are each independently fluorine or a C1-C40hydrocarbyl group which may contain a heteroatom.
The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic, and examples thereof are as exemplified above for the hydrocarbyl group R111in formula (1A′). Preferably Rfc1, Rfc2and Rfc3each are fluorine or a straight C1-C4fluorinated alkyl group. A pair of Rfc1and Rfc2may bond together to form a ring with the linkage (—CF2—SO2—C−—SO2—CF2—) to which they are attached, and preferably the pair is a fluorinated ethylene or fluorinated propylene group. In formula (1D), Rfdis a C1-C40hydrocarbyl group which may contain a heteroatom. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic, and examples thereof are as exemplified above for the hydrocarbyl group R111in formula (1A′). With respect to the synthesis of the sulfonium salt having an anion of formula (1D), reference is made to JP-A 2010-215608 and JP-A 2014-133723. Examples of the anion having formula (1D) are as exemplified for the anion having formula (1D) in US 20180335696 (JP-A 2018-197853). The compound having the anion of formula (1D) has a sufficient acid strength to cleave acid labile groups in the base polymer because it is free of fluorine at α-position of sulfo group, but has two trifluoromethyl groups at β-position. Thus the compound is a useful PAG. A compound having the formula (2) is also a useful PAG. In formula (2), R201and R202are each independently a halogen atom or a C1-C30hydrocarbyl group which may contain a heteroatom. R203is a C1-C30hydrocarbylene group which may contain a heteroatom. Any two of R201, R202and R203may bond together to form a ring with the sulfur atom to which they are attached. Examples of the ring are as exemplified above for the ring that R101and R102in formula (1-1), taken together, form with the sulfur atom to which they are attached. The hydrocarbyl group represented by R201and R202may be saturated or unsaturated and straight, branched or cyclic. 
Examples thereof include C1-C30alkyl groups such as methyl, ethyl, propyl, isopropyl, n-butyl, sec-butyl, tert-butyl, n-pentyl, tert-pentyl, n-hexyl, n-octyl, 2-ethylhexyl, n-nonyl, and n-decyl; C3-C30cyclic saturated hydrocarbyl groups such as cyclopentyl, cyclohexyl, cyclopentylmethyl, cyclopentylethyl, cyclopentylbutyl, cyclohexylmethyl, cyclohexylethyl, cyclohexylbutyl, norbornyl, oxanorbornyl, tricyclo[5.2.1.02,6]decanyl, and adamantyl; C6-C30aryl groups such as phenyl, methylphenyl, ethylphenyl, n-propylphenyl, isopropylphenyl, n-butylphenyl, isobutylphenyl, sec-butylphenyl, tert-butylphenyl, naphthyl, methylnaphthyl, ethylnaphthyl, n-propylnaphthyl, isopropylnaphthyl, n-butylnaphthyl, isobutylnaphthyl, sec-butylnaphthyl, tert-butylnaphthyl and anthracenyl; and groups obtained by combining the foregoing. Also included are substituted forms of the foregoing groups in which some or all of the hydrogen atoms are substituted by a moiety containing a heteroatom such as oxygen, sulfur, nitrogen or halogen, and some carbon is replaced by a moiety containing a heteroatom such as oxygen, sulfur or nitrogen, so that the group may contain a hydroxyl moiety, cyano moiety, carbonyl moiety, ether bond, ester bond, sulfonate bond, carbonate bond, lactone ring, sultone ring, carboxylic anhydride or haloalkyl moiety. The hydrocarbylene group represented by R203may be saturated or unsaturated and straight, branched or cyclic.
Examples thereof include C1-C30alkanediyl groups such as methanediyl, ethane-1,1-diyl, ethane-1,2-diyl, propane-1,3-diyl, butane-1,4-diyl, pentane-1,5-diyl, hexane-1,6-diyl, heptane-1,7-diyl, octane-1,8-diyl, nonane-1,9-diyl, decane-1,10-diyl, undecane-1,11-diyl, dodecane-1,12-diyl, tridecane-1,13-diyl, tetradecane-1,14-diyl, pentadecane-1,15-diyl, hexadecane-1,16-diyl and heptadecane-1,17-diyl; C3-C30cyclic saturated hydrocarbylene groups such as cyclopentanediyl, cyclohexanediyl, norbornanediyl and adamantanediyl; C6-C30arylene groups such as phenylene, methylphenylene, ethylphenylene, n-propylphenylene, isopropylphenylene, n-butylphenylene, isobutylphenylene, sec-butylphenylene, tert-butylphenylene, naphthylene, methylnaphthylene, ethylnaphthylene, n-propylnaphthylene, isopropylnaphthylene, n-butylnaphthylene, isobutylnaphthylene, sec-butylnaphthylene, and tert-butylnaphthylene; and groups obtained by combining the foregoing groups. Also included are substituted forms of the foregoing groups in which some or all of the hydrogen atoms are substituted by a moiety containing a heteroatom such as oxygen, sulfur, nitrogen or halogen, and some carbon is replaced by a moiety containing a heteroatom such as oxygen, sulfur or nitrogen, so that the group may contain a hydroxyl moiety, cyano moiety, carbonyl moiety, ether bond, ester bond, sulfonate bond, carbonate bond, lactone ring, sultone ring, carboxylic anhydride or haloalkyl moiety. The preferred heteroatom is oxygen. In formula (2), LCis a single bond, ether bond or a C1-C20hydrocarbylene group which may contain a heteroatom. The hydrocarbylene group may be saturated or unsaturated and straight, branched or cyclic. Examples thereof are as exemplified above for the hydrocarbylene group R203. In formula (2), XA, XB, XCand XDare each independently hydrogen, fluorine or trifluoromethyl, with the proviso that at least one of XA, XB, XCand XDis fluorine or trifluoromethyl, and k is an integer of 0 to 3.
Of the PAGs having formula (2), those having formula (2′) are preferred. In formula (2′), LCis as defined above. RHFis hydrogen or trifluoromethyl, preferably trifluoromethyl. R301, R302and R303are each independently hydrogen or a C1-C20hydrocarbyl group which may contain a heteroatom. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic. Examples thereof are as exemplified above for the hydrocarbyl group R111in formula (1A′). The subscripts x and y each are an integer of 0 to 5, and z is an integer of 0 to 4. Examples of the PAG having formula (2) are as described for the PAG of formula (2) in U.S. Pat. No. 9,720,324 (JP-A 2017-026980). Of the foregoing PAGs, those compounds having an anion of formula (1A′) or (1D) are especially preferred because of reduced acid diffusion and high solubility in solvent, and those compounds having an anion of formula (2′) are especially preferred because of minimized acid diffusion. Also sulfonium and iodonium salts having an anion containing an iodized or brominated aromatic ring are useful PAGs. These salts typically have the formulae (3-1) and (3-2). In formulae (3-1) and (3-2), r is an integer of 1 to 3; s is an integer of 1 to 5, and t is an integer of 0 to 3, meeting 1≤s+t≤5. Preferably, s is an integer of 1 to 3, more preferably 2 or 3, and t is an integer of 0 to 2. XBIis iodine or bromine, and groups XBImay be identical or different when s is 2 or more. L11is a single bond, ether bond, ester bond, or a C1-C6saturated hydrocarbylene group which may contain an ether bond or ester bond. The saturated hydrocarbylene group may be straight, branched or cyclic. L12is a single bond or C1-C20divalent linking group in case of r=1, and a C1-C20(r+1)-valent linking group in case of r=2 or 3. The linking group may contain oxygen, sulfur or nitrogen. 
R401is hydroxyl, carboxyl, fluorine, chlorine, bromine, amino or a C1-C20saturated hydrocarbyl group, C1-C20saturated hydrocarbyloxy group, C2-C20saturated hydrocarbylcarbonyl, C2-C20saturated hydrocarbyloxycarbonyl group, C2-C20saturated hydrocarbylcarbonyloxy group, or C1-C20saturated hydrocarbylsulfonyloxy group, which may contain fluorine, chlorine, bromine, hydroxyl, amino or ether bond, or —N(R401A)(R401B), —N(R401C)—C(═O)—R401Dor —N(R401C)—C(═O)—O—R401D. R401Aand R401Bare each independently hydrogen or a C1-C6saturated hydrocarbyl group. R401Cis hydrogen or a C1-C6saturated hydrocarbyl group which may contain halogen, hydroxyl, C1-C6saturated hydrocarbyloxy, C2-C6saturated hydrocarbylcarbonyl or C2-C6saturated hydrocarbylcarbonyloxy moiety. R401Dis a C1-C16aliphatic hydrocarbyl group, C6-C14aryl group or C7-C15aralkyl group, which may contain halogen, hydroxyl, a C1-C6saturated hydrocarbyloxy, C2-C6saturated hydrocarbylcarbonyl or C2-C6saturated hydrocarbylcarbonyloxy moiety. The aliphatic hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic. The saturated hydrocarbyl, saturated hydrocarbyloxy, saturated hydrocarbyloxycarbonyl, saturated hydrocarbylcarbonyl and saturated hydrocarbylcarbonyloxy groups may be straight, branched or cyclic. Groups R401may be identical or different when r and/or t is 2 or 3. Inter alia, R401is preferably selected from hydroxyl, —N(R401C)—C(═O)—R401D, —N(R401C)—C(═O)—O—R401D, fluorine, chlorine, bromine, methyl, and methoxy. Rf1to Rf4are each independently hydrogen, fluorine or trifluoromethyl, at least one thereof being fluorine or trifluoromethyl. Also Rf1and Rf2, taken together, may form a carbonyl group. Most preferably both Rf3and Rf4are fluorine. R402to R406are each independently halogen or a C1-C20hydrocarbyl group which may contain a heteroatom. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic. 
Examples thereof are as exemplified above for the hydrocarbyl groups R101to R105in formulae (1-1) and (1-2). In these groups, some or all hydrogen may be substituted by hydroxyl, carboxyl, halogen, cyano, nitro, mercapto, sultone, sulfone, or sulfonium salt-containing moiety; or some carbon may be replaced by an ether bond, ester bond, carbonyl, amide, carbonate or sulfonic acid ester bond. R402and R403may bond together to form a ring with the sulfur atom to which they are attached. Examples of the ring are as exemplified above for the ring that R101and R102in formula (1-1), taken together, form with the sulfur atom to which they are attached. The cation in the sulfonium salt having formula (3-1) is as exemplified above for the cation in the sulfonium salt having formula (1-1). The cation in the iodonium salt having formula (3-2) is as exemplified above for the cation in the iodonium salt having formula (1-2). Examples of the anion in the onium salts having formulae (3-1) and (3-2) are given below, but not limited thereto. Herein XBIis as defined above. In the positive resist composition, the acid generator of addition type is preferably used in an amount of 0.1 to 50 parts, more preferably 1 to 40 parts by weight per 100 parts by weight of the base polymer. When the base polymer contains recurring units (d1) to (d3) and/or the acid generator of addition type is added, the positive resist composition functions as a chemically amplified positive resist composition.
Organic Solvent
The positive resist composition may contain an organic solvent. The organic solvent is not particularly limited as long as the foregoing components and other components are dissolvable therein. Examples of the organic solvent used herein are described in U.S. Pat. No. 7,537,880 (JP-A 2008-111103, paragraphs [0144]-[0145]).
Exemplary solvents include ketones such as cyclohexanone, cyclopentanone, methyl-2-n-pentyl ketone, and 2-heptanone; alcohols such as 3-methoxybutanol, 3-methyl-3-methoxybutanol, 1-methoxy-2-propanol, 1-ethoxy-2-propanol, and diacetone alcohol (DAA); ethers such as propylene glycol monomethyl ether, ethylene glycol monomethyl ether, propylene glycol monoethyl ether, ethylene glycol monoethyl ether, propylene glycol dimethyl ether, and diethylene glycol dimethyl ether; esters such as propylene glycol monomethyl ether acetate (PGMEA), propylene glycol monoethyl ether acetate, ethyl lactate, ethyl pyruvate, butyl acetate, methyl 3-methoxypropionate, ethyl 3-ethoxypropionate, tert-butyl acetate, tert-butyl propionate, and propylene glycol mono-tert-butyl ether acetate; and lactones such as γ-butyrolactone, and mixtures thereof. The organic solvent is preferably added in an amount of 100 to 10,000 parts, and more preferably 200 to 8,000 parts by weight per 100 parts by weight of the base polymer.
Other Components
In addition to the foregoing components, other components such as quencher, surfactant, dissolution inhibitor and water repellency improver may be blended in any desired combination to formulate a positive resist composition. This positive resist composition has a very high sensitivity in that the dissolution rate in developer of the base polymer in exposed areas is accelerated by catalytic reaction. In addition, the resist film has a high dissolution contrast, resolution, exposure latitude, and process adaptability, and provides a good pattern profile after exposure, and minimal proximity bias because of restrained acid diffusion. By virtue of these advantages, the composition is fully useful in commercial application and suited as a pattern-forming material for the fabrication of VLSIs. The quencher is typically selected from conventional basic compounds.
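The component amounts above are expressed as parts by weight per 100 parts by weight of base polymer. A short sketch of that scaling follows; the batch size and the chosen loadings are illustrative values picked from within the stated ranges, not values from the patent:

```python
# Hedged example: converting "parts by weight per 100 parts of base polymer"
# into actual masses for a batch. The 2,000-part solvent loading and 10-part
# PAG loading are sample values within the ranges stated above.

def batch_masses(base_polymer_g, parts_per_100):
    """Scale parts-by-weight (per 100 parts base polymer) to grams."""
    return {name: base_polymer_g * parts / 100.0
            for name, parts in parts_per_100.items()}

masses = batch_masses(
    base_polymer_g=5.0,
    parts_per_100={"PGMEA (solvent)": 2_000, "PAG": 10},
)
print(masses)  # {'PGMEA (solvent)': 100.0, 'PAG': 0.5}
```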
Conventional basic compounds include primary, secondary, and tertiary aliphatic amines, mixed amines, aromatic amines, heterocyclic amines, nitrogen-containing compounds with carboxyl group, nitrogen-containing compounds with sulfonyl group, nitrogen-containing compounds with hydroxyl group, nitrogen-containing compounds with hydroxyphenyl group, alcoholic nitrogen-containing compounds, amide derivatives, imide derivatives, and carbamate derivatives. Also included are primary, secondary, and tertiary amine compounds, specifically amine compounds having a hydroxyl, ether bond, ester bond, lactone ring, cyano, or sulfonic acid ester bond as described in JP-A 2008-111103, paragraphs [0146]-[0164], and compounds having a carbamate group as described in JP 3790649. Addition of a basic compound may be effective for further suppressing the diffusion rate of acid in the resist film or correcting the pattern profile. Suitable quenchers also include onium salts such as sulfonium salts, iodonium salts and ammonium salts of sulfonic acids which are not fluorinated at α-position and similar onium salts of carboxylic acid, as described in JP-A 2008-158339. While an α-fluorinated sulfonic acid, imide acid, and methide acid are necessary to deprotect the acid labile group of carboxylic acid ester, an α-non-fluorinated sulfonic acid or a carboxylic acid is released by salt exchange with an α-non-fluorinated onium salt. An α-non-fluorinated sulfonic acid and a carboxylic acid function as a quencher because they do not induce deprotection reaction. Examples of the quencher include a compound (onium salt of α-non-fluorinated sulfonic acid) having the formula (4) and a compound (onium salt of carboxylic acid) having the formula (5). 
In formula (4), R501is hydrogen or a C1-C40hydrocarbyl group which may contain a heteroatom, exclusive of the hydrocarbyl group in which the hydrogen bonded to the carbon atom at α-position of the sulfone group is substituted by fluorine or fluoroalkyl group. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic. Examples thereof include alkyl groups such as methyl, ethyl, propyl, isopropyl, n-butyl, sec-butyl, tert-butyl, tert-pentyl, n-pentyl, n-hexyl, n-octyl, 2-ethylhexyl, n-nonyl, n-decyl; cyclic saturated hydrocarbyl groups such as cyclopentyl, cyclohexyl, cyclopentylmethyl, cyclopentylethyl, cyclopentylbutyl, cyclohexylmethyl, cyclohexylethyl, cyclohexylbutyl, norbornyl, tricyclo[5.2.1.02,6]decanyl, adamantyl, and adamantylmethyl; alkenyl groups such as vinyl, allyl, propenyl, butenyl and hexenyl; cyclic unsaturated aliphatic hydrocarbyl groups such as cyclohexenyl; aryl groups such as phenyl, naphthyl, alkylphenyl groups (e.g., 2-methylphenyl, 3-methylphenyl, 4-methylphenyl, 4-ethylphenyl, 4-tert-butylphenyl, 4-n-butylphenyl), dialkylphenyl groups (e.g., 2,4-dimethylphenyl and 2,4,6-triisopropylphenyl), alkylnaphthyl groups (e.g., methylnaphthyl and ethylnaphthyl), dialkylnaphthyl groups (e.g., dimethylnaphthyl and diethylnaphthyl); heteroaryl groups such as thienyl; and aralkyl groups such as benzyl, 1-phenylethyl and 2-phenylethyl. In these groups, some hydrogen may be substituted by a moiety containing a heteroatom such as oxygen, sulfur, nitrogen or halogen, and some carbon may be replaced by a moiety containing a heteroatom such as oxygen, sulfur or nitrogen, so that the group may contain a hydroxyl moiety, cyano moiety, carbonyl moiety, ether bond, ester bond, sulfonic acid ester bond, carbonate bond, lactone ring, sultone ring, carboxylic anhydride, or haloalkyl moiety.
Suitable heteroatom-containing hydrocarbyl groups include 4-hydroxyphenyl, alkoxyphenyl groups such as 4-methoxyphenyl, 3-methoxyphenyl, 2-methoxyphenyl, 4-ethoxyphenyl, 4-tert-butoxyphenyl, 3-tert-butoxyphenyl; alkoxynaphthyl groups such as methoxynaphthyl, ethoxynaphthyl, n-propoxynaphthyl and n-butoxynaphthyl; dialkoxynaphthyl groups such as dimethoxynaphthyl and diethoxynaphthyl; and aryloxoalkyl groups, typically 2-aryl-2-oxoethyl groups such as 2-phenyl-2-oxoethyl, 2-(1-naphthyl)-2-oxoethyl and 2-(2-naphthyl)-2-oxoethyl. In formula (5), R502is a C1-C40hydrocarbyl group which may contain a heteroatom. Examples of the hydrocarbyl group R502are as exemplified above for the hydrocarbyl group R501. Also included are fluorinated alkyl groups such as trifluoromethyl, trifluoroethyl, 2,2,2-trifluoro-1-methyl-1-hydroxyethyl, 2,2,2-trifluoro-1-(trifluoromethyl)-1-hydroxyethyl, and fluorinated aryl groups such as pentafluorophenyl and 4-trifluoromethylphenyl. In formulae (4) and (5), Mq−is an onium cation. The onium cation is preferably a sulfonium, iodonium or ammonium cation, more preferably sulfonium or iodonium cation. Examples of the sulfonium cation are as exemplified above for the cation in the sulfonium salt having formula (1-1). Examples of the iodonium cation are as exemplified above for the cation in the iodonium salt having formula (1-2). A sulfonium salt of iodized benzene ring-containing carboxylic acid having the formula (6) is also useful as the quencher. In formula (6), R601is hydroxyl, fluorine, chlorine, bromine, amino, nitro, cyano, or a C1-C6saturated hydrocarbyl, C1-C6saturated hydrocarbyloxy, C2-C6saturated hydrocarbylcarbonyloxy or C1-C4saturated hydrocarbylsulfonyloxy group, in which some or all hydrogen may be substituted by halogen, or —N(R601A)—C(═O)—R601B, or —N(R601A)—C(═O)—O—R601B. R601Ais hydrogen or a C1-C6saturated hydrocarbyl group. R601Bis a C1-C6saturated hydrocarbyl or C2-C8unsaturated aliphatic hydrocarbyl group.
In formula (6), x′ is an integer of 1 to 5, y′ is an integer of 0 to 3, and z′ is an integer of 1 to 3. L21is a single bond, or a C1-C20(z′+1)-valent linking group which may contain at least one moiety selected from ether bond, carbonyl moiety, ester bond, amide bond, sultone ring, lactam ring, carbonate bond, halogen, hydroxyl moiety, and carboxyl moiety. The saturated hydrocarbyl, saturated hydrocarbyloxy, saturated hydrocarbylcarbonyloxy, and saturated hydrocarbylsulfonyloxy groups may be straight, branched or cyclic. Groups R601may be the same or different when y′ and/or z′ is 2 or 3. In formula (6), R602, R603and R604are each independently halogen or a C1-C20hydrocarbyl group which may contain a heteroatom. The hydrocarbyl group may be saturated or unsaturated and straight, branched or cyclic. Examples thereof include C1-C20alkyl, C2-C20alkenyl, C6-C20aryl, and C7-C20aralkyl groups. In these groups, some or all hydrogen may be substituted by hydroxyl, carboxyl, halogen, oxo, cyano, nitro, sultone, sulfone, or sulfonium salt-containing moiety, or some carbon may be replaced by an ether bond, ester bond, carbonyl moiety, amide bond, carbonate moiety or sulfonic acid ester bond. Also R602and R603may bond together to form a ring with the sulfur atom to which they are attached. Examples of the compound having formula (6) include those described in U.S. Pat. No. 10,295,904 (JP-A 2017-219836). These compounds are highly absorptive and exert a high sensitizing effect and acid diffusion controlling effect. Also useful are quenchers of polymer type as described in U.S. Pat. No. 7,598,016 (JP-A 2008-239918). The polymeric quencher segregates at the resist film surface and thus enhances the rectangularity of resist pattern. When a protective film is applied as is often the case in the immersion lithography, the polymeric quencher is also effective for preventing a film thickness loss of resist pattern or rounding of pattern top. 
In the resist composition, the quencher is preferably added in an amount of 0 to 5 parts, more preferably 0 to 4 parts by weight per 100 parts by weight of the base polymer. The quenchers may be used alone or in admixture. Exemplary surfactants are described in JP-A 2008-111103, paragraphs [0165]-[0166]. Inclusion of a surfactant may improve or control the coating characteristics of the resist composition. The surfactant may be used alone or in admixture. The surfactant is preferably added in an amount of 0.0001 to 10 parts by weight per 100 parts by weight of the base polymer. The inclusion of a dissolution inhibitor may lead to an increased difference in dissolution rate between exposed and unexposed areas and a further improvement in resolution. The dissolution inhibitor which can be used herein is a compound having at least two phenolic hydroxyl groups on the molecule, in which an average of from 0 to 100 mol % of all the hydrogen atoms on the phenolic hydroxyl groups are replaced by acid labile groups, or a compound having at least one carboxyl group on the molecule, in which an average of 50 to 100 mol % of all the hydrogen atoms on the carboxyl groups are replaced by acid labile groups, both the compounds having a molecular weight of 100 to 1,000, and preferably 150 to 800. Typical are bisphenol A, trisphenol, phenolphthalein, cresol novolac, naphthalenecarboxylic acid, adamantanecarboxylic acid, and cholic acid derivatives in which the hydrogen atom on the hydroxyl or carboxyl group is replaced by an acid labile group, as described in U.S. Pat. No. 7,771,914 (JP-A 2008-122932, paragraphs [0155]-[0178]). The dissolution inhibitor is preferably added in an amount of 0 to 50 parts, more preferably 5 to 40 parts by weight per 100 parts by weight of the base polymer. The dissolution inhibitor may be used alone or in admixture. To the resist composition, a water repellency improver may also be added for improving the water repellency on surface of a resist film.
The water repellency improver may be used in the topcoatless immersion lithography. Suitable water repellency improvers include polymers having a fluoroalkyl group and polymers having a specific structure with a 1,1,1,3,3,3-hexafluoro-2-propanol residue and are described in JP-A 2007-297590 and JP-A 2008-111103, for example. The water repellency improver to be added to the resist composition should be soluble in the alkaline developer or organic solvent developer. The water repellency improver of specific structure with a 1,1,1,3,3,3-hexafluoro-2-propanol residue is well soluble in the developer. A polymer having an amino group or amine salt copolymerized as recurring units may serve as the water repellent additive and is effective for preventing evaporation of acid during PEB, thus preventing any hole pattern opening failure after development. An appropriate amount of the water repellency improver is 0 to parts, preferably 0.5 to 10 parts by weight per 100 parts by weight of the base polymer. The water repellency improver may be used alone or in admixture. Also, an acetylene alcohol may be blended in the resist composition. Suitable acetylene alcohols are described in JP-A 2008-122932, paragraphs [0179]-[0182]. An appropriate amount of the acetylene alcohol blended is 0 to 5 parts by weight per 100 parts by weight of the base polymer. The acetylene alcohol may be used alone or in admixture.
Process
The positive resist composition is used in the fabrication of various integrated circuits. Pattern formation using the resist composition may be performed by well-known lithography processes. The process generally involves the steps of applying the positive resist composition to form a resist film on a substrate, exposing the resist film to high-energy radiation, and developing the exposed resist film in a developer.
First, the positive resist composition is applied onto a substrate on which an integrated circuit is to be formed (e.g., Si, SiO2, SiN, SiON, TiN, WSi, BPSG, SOG, or organic antireflective coating) or a substrate on which a mask circuit is to be formed (e.g., Cr, CrO, CrON, MoSi2, or SiO2) by a suitable coating technique such as spin coating, roll coating, flow coating, dipping, spraying or doctor coating. The coating is prebaked on a hotplate preferably at a temperature of 60 to 150° C. for 10 seconds to 30 minutes, more preferably at 80 to 120° C. for 30 seconds to 20 minutes. The resulting resist film is generally 0.01 to 2 μm thick. The resist film is then exposed to a desired pattern of high-energy radiation such as UV, deep-UV, EB, EUV of wavelength 3 to 15 nm, x-ray, soft x-ray, excimer laser light, γ-ray or synchrotron radiation. When UV, deep-UV, EUV, x-ray, soft x-ray, excimer laser light, γ-ray or synchrotron radiation is used as the high-energy radiation, the resist film is exposed thereto through a mask having a desired pattern in a dose of preferably about 1 to 200 mJ/cm2, more preferably about 10 to 100 mJ/cm2. When EB is used as the high-energy radiation, the resist film is exposed thereto through a mask having a desired pattern or directly in a dose of preferably about 0.1 to 100 μC/cm2, more preferably about 0.5 to 50 μC/cm2. It is appreciated that the inventive resist composition is suited in micropatterning using KrF excimer laser, ArF excimer laser, EB, EUV, x-ray, soft x-ray, γ-ray or synchrotron radiation, especially in micropatterning using EB or EUV. After the exposure, the resist film may be baked (PEB) on a hotplate or in an oven preferably at 50 to 150° C. for 10 seconds to 30 minutes, more preferably at 60 to 120° C. for seconds to 20 minutes.
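The exposure-dose windows above can be expressed as a small lookup with a range check; the dictionary keys and function name are illustrative, not from the patent:

```python
# Illustrative sketch: the "preferred" and "more preferred" exposure-dose
# windows stated above, keyed by radiation type.

DOSE_WINDOWS = {
    # radiation type: ((preferred min, max), (more preferred min, max))
    "EUV/excimer (mJ/cm2)": ((1, 200), (10, 100)),
    "EB (uC/cm2)": ((0.1, 100), (0.5, 50)),
}

def dose_rating(kind, dose):
    """Return 'more preferred', 'preferred', or 'outside' for a given dose."""
    broad, narrow = DOSE_WINDOWS[kind]
    if narrow[0] <= dose <= narrow[1]:
        return "more preferred"
    if broad[0] <= dose <= broad[1]:
        return "preferred"
    return "outside"

print(dose_rating("EUV/excimer (mJ/cm2)", 35))  # more preferred
print(dose_rating("EB (uC/cm2)", 0.2))          # preferred
```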
After the exposure or PEB, the resist film is developed in a developer in the form of an aqueous base solution for 3 seconds to 3 minutes, preferably 5 seconds to 2 minutes by conventional techniques such as dip, puddle and spray techniques. A typical developer is a 0.1 to 10 wt %, preferably 2 to 5 wt % aqueous solution of tetramethylammonium hydroxide (TMAH), tetraethylammonium hydroxide (TEAH), tetrapropylammonium hydroxide (TPAH), or tetrabutylammonium hydroxide (TBAH). The resist film in the exposed area is dissolved in the developer whereas the resist film in the unexposed area is not dissolved. In this way, the desired positive pattern is formed on the substrate.

In an alternative embodiment, a negative pattern may be formed via organic solvent development using a positive resist composition comprising a base polymer having an acid labile group. The developer used herein is preferably selected from among 2-octanone, 2-nonanone, 2-heptanone, 3-heptanone, 4-heptanone, 2-hexanone, 3-hexanone, diisobutyl ketone, methylcyclohexanone, acetophenone, methylacetophenone, propyl acetate, butyl acetate, isobutyl acetate, pentyl acetate, butenyl acetate, isopentyl acetate, propyl formate, butyl formate, isobutyl formate, pentyl formate, isopentyl formate, methyl valerate, methyl pentenoate, methyl crotonate, ethyl crotonate, methyl propionate, ethyl propionate, ethyl 3-ethoxypropionate, methyl lactate, ethyl lactate, propyl lactate, butyl lactate, isobutyl lactate, pentyl lactate, isopentyl lactate, methyl 2-hydroxyisobutyrate, ethyl 2-hydroxyisobutyrate, methyl benzoate, ethyl benzoate, phenyl acetate, benzyl acetate, methyl phenylacetate, benzyl formate, phenylethyl formate, methyl 3-phenylpropionate, benzyl propionate, ethyl phenylacetate, and 2-phenylethyl acetate, and mixtures thereof.

At the end of development, the resist film is rinsed. As the rinsing liquid, a solvent which is miscible with the developer and does not dissolve the resist film is preferred.
Suitable solvents include alcohols of 3 to 10 carbon atoms, ether compounds of 8 to 12 carbon atoms, alkanes, alkenes, and alkynes of 6 to 12 carbon atoms, and aromatic solvents. Specifically, suitable alcohols of 3 to 10 carbon atoms include n-propyl alcohol, isopropyl alcohol, 1-butyl alcohol, 2-butyl alcohol, isobutyl alcohol, tert-butyl alcohol, 1-pentanol, 2-pentanol, 3-pentanol, tert-pentyl alcohol, neopentyl alcohol, 2-methyl-1-butanol, 3-methyl-1-butanol, 3-methyl-3-pentanol, cyclopentanol, 1-hexanol, 2-hexanol, 3-hexanol, 2,3-dimethyl-2-butanol, 3,3-dimethyl-1-butanol, 3,3-dimethyl-2-butanol, 2-ethyl-1-butanol, 2-methyl-1-pentanol, 2-methyl-2-pentanol, 2-methyl-3-pentanol, 3-methyl-1-pentanol, 3-methyl-2-pentanol, 3-methyl-3-pentanol, 4-methyl-1-pentanol, 4-methyl-2-pentanol, 4-methyl-3-pentanol, cyclohexanol, and 1-octanol. Suitable ether compounds of 8 to 12 carbon atoms include di-n-butyl ether, diisobutyl ether, di-sec-butyl ether, di-n-pentyl ether, diisopentyl ether, di-sec-pentyl ether, di-tert-pentyl ether, and di-n-hexyl ether. Suitable alkanes of 6 to 12 carbon atoms include hexane, heptane, octane, nonane, decane, undecane, dodecane, methylcyclopentane, dimethylcyclopentane, cyclohexane, methylcyclohexane, dimethylcyclohexane, cycloheptane, cyclooctane, and cyclononane. Suitable alkenes of 6 to 12 carbon atoms include hexene, heptene, octene, cyclohexene, methylcyclohexene, dimethylcyclohexene, cycloheptene, and cyclooctene. Suitable alkynes of 6 to 12 carbon atoms include hexyne, heptyne, and octyne. Suitable aromatic solvents include toluene, xylene, ethylbenzene, isopropylbenzene, tert-butylbenzene and mesitylene.

Rinsing is effective for minimizing the risks of resist pattern collapse and defect formation. However, rinsing is not essential. If rinsing is omitted, the amount of solvent used may be reduced. A hole or trench pattern after development may be shrunk by the thermal flow, RELACS® or DSA process.
A hole pattern is shrunk by coating a shrink agent thereon, and baking such that the shrink agent may undergo crosslinking at the resist surface as a result of the acid catalyst diffusing from the resist layer during bake, and the shrink agent may attach to the sidewall of the hole pattern. The bake is preferably at a temperature of 70 to 180° C., more preferably 80 to 170° C., for a time of 10 to 300 seconds. The extra shrink agent is stripped and the hole pattern is shrunk.

EXAMPLES

Examples of the invention are given below by way of illustration and not by way of limitation. All parts are by weight (pbw). Mw and Mw/Mn are determined by GPC versus polystyrene standards using THF solvent.

[1] Synthesis of Monomers

Synthesis Examples 1-1 to 1-12
Synthesis of Monomers M-1 to M-12

Each of Monomers M-1 to M-12 of the formula shown below was prepared by mixing a nitrogen-containing monomer with a sulfonamide having an iodized aromatic ring.

[2] Synthesis of Polymers

PAG Monomers 1 to 3 identified below were used in the synthesis of polymers.

Synthesis Example 2-1
Synthesis of Polymer P-1

A 2-L flask was charged with 4.2 g of Monomer M-1, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 5.4 g of 4-hydroxystyrene, and 40 g of tetrahydrofuran (THF) as solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of azobisisobutyronitrile (AIBN) was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-1. Polymer P-1 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.
Synthesis Example 2-2
Synthesis of Polymer P-2

A 2-L flask was charged with 2.7 g of M-2, 7.3 g of 1-methyl-1-cyclohexyl methacrylate, 5.0 g of 4-hydroxystyrene, 11.0 g of PM-2, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-2. Polymer P-2 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Synthesis Example 2-3
Synthesis of Polymer P-3

A 2-L flask was charged with 3.8 g of M-3, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.6 g of 3-hydroxystyrene, 11.9 g of PM-1, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-3. Polymer P-3 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Synthesis Example 2-4
Synthesis of Polymer P-4

A 2-L flask was charged with 4.0 g of M-4, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.6 g of 3-hydroxystyrene, 12.1 g of PM-3, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added.
The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-4. Polymer P-4 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Synthesis Example 2-5
Synthesis of Polymer P-5

A 2-L flask was charged with 4.0 g of M-5, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.6 g of 3-hydroxystyrene, 11.0 g of PM-2, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-5. Polymer P-5 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Synthesis Example 2-6
Synthesis of Polymer P-6

A 2-L flask was charged with 6.2 g of M-6, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.4 g of 3-hydroxystyrene, 11.0 g of PM-2, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-6. Polymer P-6 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.
Synthesis Example 2-7
Synthesis of Polymer P-7

A 2-L flask was charged with 6.0 g of M-7, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.4 g of 3-hydroxystyrene, 11.0 g of PM-2, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-7. Polymer P-7 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Synthesis Example 2-8
Synthesis of Polymer P-8

A 2-L flask was charged with 5.4 g of M-8, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.4 g of 3-hydroxystyrene, 11.0 g of PM-2, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-8. Polymer P-8 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Synthesis Example 2-9
Synthesis of Polymer P-9

A 2-L flask was charged with 5.7 g of M-9, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.4 g of 3-hydroxystyrene, 11.0 g of PM-2, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added.
The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-9. Polymer P-9 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Synthesis Example 2-10
Synthesis of Polymer P-10

A 2-L flask was charged with 4.5 g of M-10, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.8 g of 3-hydroxystyrene, 11.0 g of PM-2, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-10. Polymer P-10 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Synthesis Example 2-11
Synthesis of Polymer P-11

A 2-L flask was charged with 4.4 g of M-11, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.8 g of 3-hydroxystyrene, 11.0 g of PM-2, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-11. Polymer P-11 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.
Synthesis Example 2-12
Synthesis of Polymer P-12

A 2-L flask was charged with 3.2 g of M-12, 8.4 g of 1-methyl-1-cyclopentyl methacrylate, 3.8 g of 3-hydroxystyrene, 11.0 g of PM-2, and 40 g of THF solvent. The reactor was cooled to −70° C. in nitrogen atmosphere, after which vacuum pumping and nitrogen blow were repeated three times. The reactor was warmed up to room temperature, whereupon 1.2 g of AIBN was added. The reactor was heated at 60° C., whereupon reaction ran for 15 hours. The reaction solution was poured into 1 L of isopropyl alcohol for precipitation. The precipitated white solid was collected by filtration and vacuum dried at 60° C., yielding Polymer P-12. Polymer P-12 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Comparative Synthesis Example 1

Comparative Polymer cP-1 was obtained by the same procedure as in Synthesis Example 2-1 except that Monomer M-1 was omitted. Comparative Polymer cP-1 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Comparative Synthesis Example 2

Comparative Polymer cP-2 was obtained by the same procedure as in Synthesis Example 2-1 except that 2-(dimethylamino)ethyl methacrylate was used instead of M-1. Comparative Polymer cP-2 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

Comparative Synthesis Example 3

Comparative Polymer cP-3 was obtained by the same procedure as in Synthesis Example 2-2 except that Monomer M-2 was omitted and 1-methyl-1-cyclopentyl methacrylate was used instead of 1-methyl-1-cyclohexyl methacrylate. Comparative Polymer cP-3 was analyzed for composition by 13C- and 1H-NMR and for Mw and Mw/Mn by GPC.

[3] Preparation and Evaluation of Resist Composition

Examples 1 to 12 and Comparative Examples 1 to 3

(1) Preparation of Resist Composition

Positive resist compositions were prepared by dissolving components in a solvent in accordance with the recipe shown in Table 1, and filtering through a filter having a pore size of 0.2 μm.
The solvent contained 100 ppm of surfactant FC-4430 (3M). The components in Table 1 are as identified below.

Organic Solvents:
PGMEA (propylene glycol monomethyl ether acetate)
DAA (diacetone alcohol)

Acid generator: PAG-1 of the following structural formula

Quencher: Q-1 of the following structural formula

(2) Evaluation by EUV Lithography

Each of the resist compositions in Table 1 was spin coated on a silicon substrate having a 20-nm coating of silicon-containing spin-on hard mask SHB-A940 (Shin-Etsu Chemical Co., Ltd., Si content 43 wt %) and prebaked on a hotplate at 105° C. for 60 seconds to form a resist film 50 nm thick. Using an EUV scanner NXE3300 (ASML, NA 0.33, σ 0.9/0.6, quadrupole illumination), the resist film was exposed to EUV through a mask bearing a hole pattern at a pitch of 46 nm (on-wafer size) and +20% bias. The resist film was baked (PEB) on a hotplate at the temperature shown in Table 1 for 60 seconds and developed in a 2.38 wt % TMAH aqueous solution for 30 seconds to form a hole pattern having a size of 23 nm. The resist pattern was observed under CD-SEM (CG-5000, Hitachi High-Technologies Corp.). The exposure dose that provides a hole pattern having a size of 23 nm is reported as sensitivity. The size of 50 holes was measured, from which a 3-fold value (3σ) of the standard deviation (σ) was computed and reported as size variation or CDU. The resist composition is shown in Table 1 together with the sensitivity and CDU of EUV lithography.
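The CDU metric described above is simply three times the standard deviation of the measured hole sizes. A minimal sketch of that computation follows; the eight hole-size values are hypothetical stand-ins for illustration only (the evaluation above measured 50 holes), not measured data from this disclosure.

```python
import statistics

# Hypothetical CD (hole-size) measurements in nm, standing in for the
# 50 CD-SEM measurements described in the evaluation above.
hole_sizes_nm = [23.0, 22.4, 23.6, 22.9, 23.1, 23.3, 22.7, 23.2]

# CDU is reported as the 3-fold value (3σ) of the standard deviation (σ)
# of the hole-size distribution.
sigma = statistics.pstdev(hole_sizes_nm)  # population standard deviation
cdu_nm = 3 * sigma
```

Whether the population or sample standard deviation is used is not specified in the disclosure; `pstdev` is chosen here as one plausible convention.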
TABLE 1

                       Polymer     Acid generator  Quencher    Organic solvent          PEB temp.  Sensitivity  CDU
                       (pbw)       (pbw)           (pbw)       (pbw)                    (° C.)     (mJ/cm2)     (nm)
Example 1              P-1 (100)   PAG-1 (25.0)    —           PGMEA (2,000)/DAA (500)  95         29           3.1
Example 2              P-2 (100)   —               Q-1 (0.20)  PGMEA (2,000)/DAA (500)  95         27           2.3
Example 3              P-3 (100)   —               —           PGMEA (2,000)/DAA (500)  95         26           2.8
Example 4              P-4 (100)   —               —           PGMEA (2,000)/DAA (500)  95         25           2.6
Example 5              P-5 (100)   —               —           PGMEA (2,000)/DAA (500)  95         26           2.5
Example 6              P-6 (100)   —               —           PGMEA (2,000)/DAA (500)  95         26           2.6
Example 7              P-7 (100)   —               —           PGMEA (2,000)/DAA (500)  95         24           2.5
Example 8              P-8 (100)   —               —           PGMEA (2,000)/DAA (500)  95         24           2.5
Example 9              P-9 (100)   —               —           PGMEA (2,000)/DAA (500)  95         26           2.6
Example 10             P-10 (100)  —               —           PGMEA (2,000)/DAA (500)  95         25           2.6
Example 11             P-11 (100)  —               —           PGMEA (2,000)/DAA (500)  95         26           2.6
Example 12             P-12 (100)  —               —           PGMEA (2,000)/DAA (500)  95         28           2.6
Comparative Example 1  cP-1 (100)  PAG-1 (25.0)    Q-1 (3.00)  PGMEA (2,000)/DAA (500)  95         35           5.6
Comparative Example 2  cP-2 (100)  PAG-1 (25.0)    —           PGMEA (2,000)/DAA (500)  95         38           4.7
Comparative Example 3  cP-3 (100)  —               Q-1 (3.00)  PGMEA (2,000)/DAA (500)  95         35           3.9

It is demonstrated in Table 1 that positive resist compositions comprising a base polymer comprising recurring units having the structure of an ammonium salt of a sulfonamide having an iodized aromatic ring offer a high sensitivity and improved CDU.

Japanese Patent Application No. 2020-086623 is incorporated herein by reference.

Although some preferred embodiments have been described, many modifications and variations may be made thereto in light of the above teachings. It is therefore to be understood that the invention may be practiced otherwise than as specifically described without departing from the scope of the appended claims.
11860541

DETAILED DESCRIPTION OF THE EMBODIMENTS

Embodiments of the present disclosure provide a silicon-based nanowire, a method for preparing the silicon-based nanowire, and a thin film transistor. To make the objects, technical solutions and advantages of the present disclosure clearer, the present disclosure will be further described in detail below in conjunction with the accompanying drawings. Apparently, the embodiments described are part of, rather than all of, the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present disclosure. The shapes and sizes of the components in the drawings do not reflect the true scale, and are merely intended to illustrate the present disclosure.

An embodiment of the present disclosure provides a method for preparing a silicon-based nanowire, as shown in FIG. 1, including the following.

S101, forming catalyst particles, guide walls and retaining walls on a base substrate. The guide walls extend in a first direction. The retaining walls and the guide walls intersect and are located in a same layer. The catalyst particles are located in areas defined by the retaining walls and the guide walls, and the catalyst particles have a eutectic point with silicon.

S102, forming an amorphous silicon film covering the catalyst particles, the guide walls and the retaining walls.

S103, forming silicon-based nanowires by annealing the amorphous silicon film so that the amorphous silicon grows from the catalyst particles along the direction of the guide walls.
In the preparation method provided by the embodiment of the present disclosure, the eutectic point of the catalyst particles and silicon is exploited, with the fact that the Gibbs free energy of amorphous silicon is greater than that of crystalline silicon (the silicon-based nanowire) acting as the driving factor: the molten catalyst particles absorb the amorphous silicon to form a supersaturated silicon eutectoid, from which the silicon nucleates and grows into silicon-based nanowires. Moreover, during the growth of the silicon-based nanowire, the amorphous silicon film grows linearly along the guide walls under the action of the catalyst particles, and reverse growth of the silicon-based nanowire is restricted by the retaining walls, thus obtaining silicon-based nanowires with a high density and high uniformity. Furthermore, by controlling the size of the catalyst particles and the thickness of the amorphous silicon film, the width of the silicon-based nanowire may also be controlled.

Optionally, in the preparation method provided by the embodiment of the present disclosure, the eutectic point of the catalyst particles and silicon is generally within a temperature range from 200° C. to 1000° C. In annealing of the amorphous silicon film, an annealing temperature may be controlled between 200° C. and 600° C., such as 200° C., 300° C., 500° C., or 600° C., which is not limited here.

In specific implementation, in the preparation method provided by the embodiment of the present disclosure, before the formation of the amorphous silicon film, the catalyst particles may be formed before the guide walls and the retaining walls are formed; of course, the retaining walls and the guide walls may also be formed before the catalyst particles are formed, which is not limited here.
Moreover, the finally formed retaining walls and guide walls being in a same layer means that they are in a same plane parallel to the base substrate; the guide walls and the retaining walls are not limited to being in a same film layer. Specifically, the guide walls and the retaining walls may be formed in a same film layer, and may also be formed in adjacent two film layers, which is not limited here.

Optionally, in the preparation method provided by the embodiment of the present disclosure, catalyst particles, guide walls and retaining walls are formed on a base substrate by: forming a dielectric layer on the base substrate; patterning the dielectric layer to form a pattern of the guide walls and a pattern of the retaining walls; forming a catalyst film layer on the base substrate formed with the pattern of the guide walls and the pattern of the retaining walls; and forming a pattern of the catalyst particles in the catalyst film layer.

Or optionally, in the preparation method provided by the embodiment of the present disclosure, catalyst particles, guide walls and retaining walls are formed on a base substrate by: forming a catalyst film layer on the base substrate; forming a pattern of the catalyst particles in the catalyst film layer; forming a dielectric layer on the base substrate formed with the pattern of the catalyst particles; and patterning the dielectric layer to form a pattern of the guide walls and a pattern of the retaining walls.

In specific implementation, in the case where the catalyst particles are formed before the guide walls and the retaining walls are formed, in the process of patterning the dielectric layer to form a pattern of the guide walls and a pattern of the retaining walls, the dielectric layer in the areas defined by the guide walls and the retaining walls needs to be removed to expose the catalyst particles, and the requirement on process accuracy is relatively high.
Therefore, in the preparation method provided by the embodiment of the present disclosure, forming the retaining walls and the guide walls before the catalyst particles are formed is technically relatively easy to achieve. In specific implementation, in the preparation method provided by the embodiment of the present disclosure, the dielectric layer may be formed by a layered deposition process, which is not limited herein.

Optionally, in the preparation method provided by the embodiment of the present disclosure, a pattern of the catalyst particles is formed in the catalyst film layer by: forming an imprint resist on the catalyst film layer; performing a nanoimprint process on the imprint resist to form a pattern of imprint resist particles; and etching the catalyst film layer by using the pattern of the imprint resist particles as a mask pattern, to form the pattern of the catalyst particles.

In specific implementation, using a nanoimprint process to form the pattern of the catalyst particles can achieve high fineness of the pattern and ensure the uniformity and controllability of the catalyst particles, to ensure the uniform growth of the silicon-based nanowire.

Optionally, in the preparation method provided by the embodiment of the present disclosure, a pattern of the catalyst particles is formed in the catalyst film layer by: forming an imprint resist on the catalyst film layer; performing a nanoimprint process on the imprint resist to form a pattern of imprint resist lines, where an extending direction of the imprint resist lines is the same as an extending direction of the retaining walls; etching the catalyst film layer by using the pattern of the imprint resist lines as a mask pattern, to form a pattern of catalyst lines; and performing plasma bombardment on the catalyst lines to form the pattern of the catalyst particles.
Optionally, in the preparation method provided by the embodiment of the present disclosure, the catalyst line width may be controlled between 50 nm and 1000 nm, such as 50 nm, 100 nm, 500 nm, or 1000 nm, which is not limited here.

In specific implementation, the pattern of the catalyst particles may also be formed by a photolithography process. Therefore, optionally, in the preparation method provided by the embodiment of the present disclosure, a pattern of the catalyst particles is formed in the catalyst film layer by: forming a photoresist on the catalyst film layer; performing an exposure and developing process on the photoresist to form a pattern of photoresist particles; and etching the catalyst film layer by using the pattern of the photoresist particles as a mask pattern, to form the pattern of the catalyst particles.

Optionally, in the preparation method provided by the embodiment of the present disclosure, the material of the catalyst particles may be indium, tin, nickel, or indium oxide, which is not limited here.

Optionally, in the preparation method provided by the embodiment of the present disclosure, the particle diameters of the catalyst particles may be controlled between 1 nm and 5000 nm, such as 1 nm, 10 nm, 50 nm, 100 nm, 500 nm, or 1000 nm, which is not limited here.

Optionally, in the preparation method provided by the embodiment of the present disclosure, the heights of the guide walls may be controlled between 5 nm and 5000 nm, such as 5 nm, 100 nm, 500 nm, or 1000 nm, which is not limited here.

Optionally, in the preparation method provided by the embodiment of the present disclosure, the heights of the retaining walls may be controlled between 5 nm and 5000 nm, such as 5 nm, 100 nm, 500 nm, or 1000 nm, which is not limited here.

In specific implementation, the particle diameter of the catalyst particle is determined according to the width of the silicon-based nanowire.
Generally, the particle diameter of the catalyst particle is close to the width of the silicon-based nanowire. Further, the heights of the guide walls and the heights of the retaining walls are determined according to the particle diameters of the catalyst particles. Generally, the height of the guide wall and the height of the retaining wall are at least equal to the particle diameter of the catalyst particle. That is, the height of the guide wall and the height of the retaining wall are not less than the particle diameter of the catalyst particle. Moreover, in the case where the retaining wall and the guide wall are formed in a same film layer, their heights are generally equal.

Optionally, in the preparation method provided by the embodiment of the present disclosure, the catalyst particles are formed at positions close to the retaining walls. Generally, within an area defined by the retaining walls and the guide walls, the catalyst particles are in contact with a same retaining wall, to ensure that within the areas defined by the retaining walls and the guide walls, the silicon-based nanowire only grows toward one direction along the guide walls.

In specific implementation, in the preparation method provided by the embodiment of the present disclosure, within each area defined by the retaining walls and the guide walls, one or more catalyst particles are formed. Optionally, in the preparation method provided by the embodiment of the present disclosure, if the distance between the two adjacent retaining walls is smaller than 1 μm, one catalyst particle is arranged in an area defined by the retaining walls and the guide walls. In specific implementation, to ensure that each silicon-based nanowire can grow linearly along a set direction, at least one side of the catalyst particles is adjacent to the guide wall. That is, at least part of the catalyst particles is in contact with the guide wall.
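The dimensional relationships stated above (particle diameter within 1 to 5000 nm, wall heights within 5 to 5000 nm, and wall heights not less than the catalyst particle diameter) can be expressed as a simple consistency check. The sketch below is purely illustrative; the function name is hypothetical and the numeric ranges are the ones given in this disclosure.

```python
def layout_is_valid(particle_diameter_nm, guide_wall_height_nm, retaining_wall_height_nm):
    """Illustrative check of the geometric constraints described for nanowire growth."""
    # Stated ranges: particle diameter 1-5000 nm; wall heights 5-5000 nm.
    if not 1 <= particle_diameter_nm <= 5000:
        return False
    if not 5 <= guide_wall_height_nm <= 5000:
        return False
    if not 5 <= retaining_wall_height_nm <= 5000:
        return False
    # The wall heights must be at least equal to the catalyst particle diameter.
    return (guide_wall_height_nm >= particle_diameter_nm
            and retaining_wall_height_nm >= particle_diameter_nm)
```

For instance, a 100 nm catalyst particle with 500 nm walls satisfies the constraints, while a 1000 nm particle with 500 nm walls does not.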
Therefore, within each area defined by the retaining walls and the guide walls, two catalyst particles may be arranged between adjacent two guide walls. Of course, in specific implementation, within each area defined by the retaining walls and the guide walls, a plurality of catalyst particles may also be arranged such that a silicon-based nanowire grows along each catalyst particle.

It should be noted that in the preparation method provided by the embodiment of the present disclosure, the number of the guide walls may be determined according to the number of the silicon-based nanowires that are actually required. The positions of the retaining walls may be set according to the length of the required silicon-based nanowire, which is not limited here.

The preparation method provided by the embodiment of the present disclosure is described below with specific embodiments. Some embodiments provided by the present disclosure specifically include as follows.

Step 1, as shown in FIG. 2A, a dielectric layer 02 is formed on the base substrate 01. In specific implementation, the dielectric layer is formed by a layered deposition process, and the material of the dielectric layer may be AlOx, SiOx, or SiNx, which is not limited here.

Step 2, as shown in FIG. 2B, the dielectric layer 02 is patterned, to form a pattern of guide walls 021 extending in a first direction X and a pattern of retaining walls 022 intersecting the guide walls 021 by one patterning process. The included angles between the guide walls 021 and the retaining walls 022 may be nonzero acute angles or 90°. In specific implementation, the heights of the guide walls and the heights of the retaining walls may be controlled between 5 nm and 5000 nm, which is not limited here.
Step 3, as shown in FIG. 2C (the diagram is intended to show that the guide wall and retaining wall are located on the same level, but does not limit the height relationship between the guide wall and retaining wall), a catalyst film layer 03 covering the guide walls 021 and the retaining walls 022 is formed on the base substrate 01. In specific implementation, the material of the catalyst film layer may be indium, tin, nickel, or indium oxide, which is not limited here.

Step 4, as shown in FIG. 2D (the diagram is intended to show that the guide wall and retaining wall are located on the same level, but does not limit the height relationship between the guide wall and retaining wall), an imprint resist 04 is formed on the catalyst film layer 03.

Step 5, as shown in FIG. 2E, a nanoimprint process is performed on the imprint resist 04 to form a pattern of imprint resist particles 041.

Step 6, as shown in FIG. 2F, the catalyst film layer 03 is etched by using the pattern of the imprint resist particles 041 as a mask pattern to form a pattern of catalyst particles 031. In specific implementation, the particle diameters of the catalyst particles may be controlled between 1 nm and 5000 nm, which is not limited here.

Step 7, as shown in FIG. 2G, an amorphous silicon film 05 covering the catalyst particles 031, the guide walls 021 and the retaining walls 022 is formed.

Step 8, as shown in FIGS. 2H and 2I, the amorphous silicon film 05 is annealed, so that the amorphous silicon grows from the catalyst particles 031 along the direction of the guide walls to form silicon-based nanowires 051. In specific implementation, in annealing of the amorphous silicon film, an annealing temperature may be controlled between 200° C. and 600° C., which is not limited here.

In some other embodiments of the present disclosure, other steps are the same as in the embodiment described above except steps 4, 5 and 6, which are different. Only the different steps are described in detail below.

Step 4′, a photoresist is formed on the catalyst film layer.
Step5′, an exposure and developing process is performed on the photoresist to form a pattern of photoresist particles.

Step6′, the catalyst film layer is etched by using the pattern of the photoresist particles as a mask pattern, to form a pattern of catalyst particles.

In still other embodiments of the present disclosure, as compared to the above embodiments, steps1to4and steps7and8are the same, and only steps5and6are different. Only the different steps are described in detail below.

Step5″, as shown inFIG.3A, a nanoimprint process is performed on the imprint resist04to form a pattern of imprint resist lines042. An extending direction of the imprint resist lines042is the same as an extending direction of the retaining walls022.

Step6″, as shown inFIG.3B, the catalyst film layer03is etched by using the pattern of the imprint resist lines042as a mask pattern to form a pattern of catalyst lines032.

Step7″, plasma bombardment is performed on the catalyst lines032to form a pattern of catalyst particles031.

It should be noted that, in the above-mentioned preparation method provided by embodiments of the present disclosure, the patterning process may include only a photolithography process, or may include a photolithography process and an etching step, and may also include other processes for forming predetermined patterns, such as printing and inkjet processes. The photolithography process refers to a process for forming patterns by using a photoresist, a mask, an exposure machine and the like, including film formation, exposure, development and other process steps. In specific implementation, a corresponding patterning process may be selected according to the structure formed in the present disclosure.

Based on the same inventive concept, an embodiment of the present disclosure further provides a silicon-based nanowire, which is prepared by any of the above-mentioned preparation methods provided by embodiments of the present disclosure.
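The numeric process windows stated in the steps above — wall heights of 5 nm to 5000 nm, catalyst particle diameters of 1 nm to 5000 nm, and an annealing temperature of 200° C. to 600° C. — can be collected into a small recipe check. The following sketch is illustrative only; the function and key names are assumptions and are not part of the disclosure.

```python
# Illustrative check of the process windows stated in the text. All
# identifiers here are assumptions chosen for illustration.

PROCESS_WINDOWS = {
    "wall_height_nm": (5, 5000),        # guide/retaining wall height
    "particle_diameter_nm": (1, 5000),  # catalyst particle diameter
    "anneal_temp_c": (200, 600),        # annealing temperature
}

def out_of_window(recipe: dict) -> list:
    """Return the names of parameters that fall outside the stated windows."""
    bad = []
    for name, (lo, hi) in PROCESS_WINDOWS.items():
        if not (lo <= recipe[name] <= hi):
            bad.append(name)
    return bad

recipe = {"wall_height_nm": 100, "particle_diameter_nm": 50, "anneal_temp_c": 450}
print(out_of_window(recipe))  # [] — every parameter is inside its window
```

A recipe with, say, an annealing temperature of 700° C. would return `["anneal_temp_c"]`, flagging the step that violates the disclosed window.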
As the problem-solving principle of the silicon-based nanowire is similar to that of the above-mentioned preparation method of a silicon-based nanowire, for the implementation of the silicon-based nanowire, reference may be made to the implementation of the above-mentioned preparation method of a silicon-based nanowire, and repeated description is omitted. Based on the same inventive concept, an embodiment of the present disclosure further provides a thin film transistor, which includes a source, a drain, a gate and an active layer. The material of the active layer includes the above-mentioned silicon-based nanowire provided by an embodiment of the present disclosure. As the problem-solving principle of the thin film transistor is similar to that of the above-mentioned silicon-based nanowire, for the implementation of the thin film transistor, reference may be made to the implementation of the above-mentioned silicon-based nanowire, and repeated description is omitted here. In specific implementation, in the case where the above-mentioned silicon-based nanowire provided by an embodiment of the present disclosure is used as the material of the active layer, the amorphous silicon film formed with the silicon-based nanowire needs to be patterned, so that the silicon-based nanowire is in an active layer area, and either both the amorphous silicon and the silicon-based nanowire, or only the silicon-based nanowire, may be selected to be retained in the active layer area.
In the silicon-based nanowire, the method for preparing the silicon-based nanowire, and the thin film transistor provided by embodiments of the present disclosure, a eutectic point of the catalyst particles and silicon is exploited, together with the driving factor that the Gibbs free energy of amorphous silicon is greater than that of crystalline silicon: the molten catalyst particles absorb the amorphous silicon to form a supersaturated silicon eutectoid, from which the silicon nucleates and grows into silicon-based nanowires. Moreover, during the growth of the silicon-based nanowire, the amorphous silicon film grows linearly along the guide walls under the action of the catalyst particles, and reverse growth of the silicon-based nanowire is restricted by the retaining walls, thus obtaining silicon-based nanowires with a high density and high uniformity. In addition, by controlling the size of the catalyst particles and the thickness of the amorphous silicon film, the width of the silicon-based nanowire may also be controlled.

Evidently, those skilled in the art can make various modifications and variations to the present disclosure without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is also intended to encompass these modifications and variations thereto so long as the modifications and variations come into the scope of the claims appended to the present disclosure and their equivalents.
11860542

DETAILED DESCRIPTION

Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be apparent to one of ordinary skill in the art that the present disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, systems, and components have not been described in detail so as not to unnecessarily obscure aspects of the various embodiments.

In a lithography step in a process of manufacturing semiconductor devices or the like, in order to form a desired resist pattern on a semiconductor wafer (hereinafter also referred to as a “wafer”), a resist film forming process of coating a resist liquid on the wafer to form a resist film, or the like, is performed. A resist coating apparatus that performs the resist film forming process described above is provided with a discharge nozzle that discharges the resist liquid onto the wafer and a liquid supply pipe that supplies the resist liquid to the discharge nozzle. In many cases, a resin tube made of PFA or the like is used for the liquid supply pipe. Further, the resist coating apparatus is provided with a temperature adjustment mechanism for adjusting the temperature of the resist liquid so that the film thickness is uniform in the plane of the wafer and between wafers. In the film coating apparatus of Patent Document 1, cooling water is used to adjust the temperature of the resist liquid. If unnecessary water is mixed in the resist liquid, there arises a problem that a resist pattern having a desired shape is not obtained.
In the film coating apparatus disclosed in Patent Document 1, the means for suppressing the content of water in the coated film includes a double resin pipe, in which a gas including an inert gas exists in the space between the inner pipe and the outer pipe, and this double resin pipe is used for the transport system. In Patent Document 1, the liquid transport system transports the coating liquid from the coating liquid container that holds the coating liquid to the coating container in which the substrate is loaded and the film coating is performed on the substrate. However, in the resist coating apparatus used as the film coating apparatus of Patent Document 1, among the resist lines used as the liquid transport system, the resist line provided with the means for suppressing the content of water is the pipe connecting the resist discharge device, which has the resist temperature adjusting pipe and the nozzle, to the resist supply container serving as the coating liquid container. The cooling water for resist liquid temperature adjustment is provided in the resist temperature adjusting pipe in the vicinity of the nozzle. As described above, in the case of adjusting the temperature of the resist with water, when a resin such as PFA is used for the pipe that separates the water for temperature adjustment from the resist liquid, the water for temperature adjustment may be mixed in the resist liquid. This holds true for treatment liquids that require temperature adjustment other than the resist liquid used in the manufacture of semiconductor devices and the like. Therefore, the technique according to the present disclosure prevents water from being mixed into the treatment liquid when adjusting the temperature of the treatment liquid supplied to a discharge nozzle. Hereinafter, a liquid treatment apparatus and a method of adjusting a temperature of a treatment liquid according to an embodiment will be described with reference to the drawings.
In the present specification, elements having substantially the same functional configuration will be denoted by the same reference numerals, and explanation thereof will therefore not be repeated.

FIG.1is a longitudinal sectional view showing a schematic configuration of a resist coating apparatus as the liquid treatment apparatus according to the embodiment.FIG.2is a perspective view showing the interior of a processing container (to be described later) of the resist coating apparatus ofFIG.1.FIG.3is a sectional view showing a schematic configuration of a gas pipe (to be described later).

As shown inFIG.1, the resist coating apparatus1has a processing container10whose interior can be sealed. A loading/unloading port (not shown) for the wafer W is formed in the side surface of the processing container10. An opening/closing shutter (not shown) is provided in the loading/unloading port. Two processing parts P1and P2are provided inside the processing container10. The processing parts P1and P2are arranged along an apparatus width direction (±X-direction inFIG.1). The processing part P1includes a spin chuck11as a substrate holder that holds the wafer W horizontally by vacuum-suctioning the central portion of the back surface of the wafer W. The spin chuck11is connected to a rotation mechanism12and is rotated about a vertical axis by the rotation mechanism12. Further, a cup13for preventing the treatment liquid from scattering from the wafer W is provided so as to surround the wafer W held by the spin chuck11. A liquid drain port14is opened at the bottom of the cup13. Further, an exhaust pipe15is provided at the bottom of the cup13. The interior of the cup13is exhausted by an exhaust device (not shown) connected to the exhaust pipe15during processing of the wafer W. Elevating pins21are arranged around the spin chuck11. The elevating pins21can be elevated vertically by an elevating mechanism22to support and elevate the wafer W.
By the elevating pins21, the wafer W can be delivered between the spin chuck11and a wafer transfer mechanism (not shown). Since the processing part P2has the same configuration as the processing part P1, explanation thereof will be omitted.

As shown inFIG.2, a guide groove30extending along the apparatus width direction (±X-direction inFIG.2) is formed on one side (the side in the negative Y-direction inFIG.2) of the processing parts P1and P2in an apparatus depth direction on a bottom wall10aof the processing container10. The guide groove30is formed, for example, from the outside of one side (the side in the negative X-direction inFIG.2) of the cup13of the processing part P1in the apparatus width direction to the outside of the other side (the side in the positive X-direction inFIG.2) in the apparatus width direction. An arm31is attached to the guide groove30via a drive mechanism32as a moving mechanism. The arm31extends from the drive mechanism32in the apparatus depth direction orthogonal to the extension direction of the guide groove30. A nozzle head33is connected to the leading end of the arm31. As shown inFIG.1, a discharge nozzle33athat discharges a resist liquid as a treatment liquid onto the wafer W held by the spin chuck11is supported on the lower surface of the nozzle head33. The resist liquid discharged from the discharge nozzle33ais, for example, a metal-containing resist used for EUV exposure. The drive mechanism32ofFIG.2moves in the apparatus width direction (±X-direction inFIG.2) along the guide groove30. With the driving of the drive mechanism32, the discharge nozzle33acan move from a standby part34, provided on the outside of the other side (the side in the positive X-direction inFIG.2) of the cup13of the processing part P2in the apparatus width direction, to above the central portion of the wafer W inside the cup13.
Further, the arm31can be raised and lowered by the drive mechanism32, so that the height of the discharge nozzle33acan be adjusted. Further, as shown inFIG.1, the resist coating apparatus1includes a fan filter unit (FFU)16as an atmosphere gas supply part for supplying an atmosphere gas into the processing container10. The fan filter unit16is provided to supply clean air whose temperature has been adjusted as the atmosphere gas to the wafer W held by the spin chuck11. The temperature of the air from the fan filter unit16is adjusted to, for example, about 23 degrees C.

Furthermore, in the resist coating apparatus1, a gas pipe40is connected to the nozzle head33. As shown inFIG.3, the gas pipe40encompasses a liquid supply pipe41. The liquid supply pipe41is configured to supply the resist liquid from a resist liquid storage source (not shown) to the discharge nozzle33aof the nozzle head33. Further, an inert gas from a storage source (not shown) of the inert gas such as a N2gas flows through the space between the inner wall surface of the gas pipe40and the liquid supply pipe41. The temperature of the inert gas flowing through the gas pipe40is adjusted by exchanging heat with the air from the fan filter unit16during the flow. Then, the temperature of the resist liquid is adjusted by the temperature-adjusted inert gas. That is, the inert gas flowing through the gas pipe40is for adjusting the temperature of the resist liquid. For example, a PFA tube or a high barrier tube obtained by subjecting the surface of a resin tube made of PFA or the like to hydrophobic treatment may be used as the gas pipe40. Further, a tube made of a metallic material having high thermal conductivity, such as SUS, may be used for a portion of the gas pipe40that does not deform when the discharge nozzle33ais moved by the drive mechanism32. As for the liquid supply pipe41, the same tube as the gas pipe40may be used.
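In the concentric arrangement ofFIG.3, the inert gas flows through an annular cross-section between the gas pipe's inner wall and the liquid supply pipe's outer wall. A minimal sketch of that geometry follows; the dimensions are illustrative assumptions, not values from the disclosure.

```python
import math

def annular_flow_area_mm2(gas_pipe_inner_d_mm: float,
                          supply_pipe_outer_d_mm: float) -> float:
    """Cross-sectional area available to the inert gas: the annulus between
    the gas pipe's inner wall and the liquid supply pipe's outer wall."""
    r_outer = gas_pipe_inner_d_mm / 2.0
    r_inner = supply_pipe_outer_d_mm / 2.0
    if r_inner >= r_outer:
        raise ValueError("liquid supply pipe must fit inside the gas pipe")
    return math.pi * (r_outer ** 2 - r_inner ** 2)

# Assumed dimensions: a 10 mm gas pipe bore around a 6 mm supply pipe.
area = annular_flow_area_mm2(10.0, 6.0)
print(round(area, 1))  # 50.3 (mm^2)
```

The annulus area, together with the gas flow rate, fixes the gas velocity along the encompassing portion and hence its residence time for heat exchange.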
It should be noted that the gas pipe40does not encompass the liquid supply pipe41in all the regions thereof, but encompasses the liquid supply pipe41only in a portion on the downstream side thereof. In this example, it is assumed that the gas pipe40encompasses the liquid supply pipe41only in a portion on the downstream side from the vicinity of a fixture71to be described later. An encompassing portion40aof the liquid supply pipe41in the gas pipe40is connected to the nozzle head33via a regulating member35. The regulating member35is for regulating the movement of the gas pipe40with respect to the nozzle head33and the like. In order to maintain the sealability of the gas pipe40, a sealing member (not shown) is provided around the liquid supply pipe41at an insertion portion of the liquid supply pipe41in the gas pipe40. Further, in the gas pipe40, an extension portion40b, which is a portion between the upstream end of the gas pipe40inside the processing container10and the encompassing portion40aof the liquid supply pipe41, is provided to be folded back in the processing container10in a plan view. In the present embodiment, inside the processing container10, the upstream end of the gas pipe40is connected to a joint60with respect to an introduction pipe50for introducing the inert gas from the inert gas storage source into the processing container10. Therefore, in the present embodiment, the extension portion40bextends from the portion of the gas pipe40connected to the joint60to the encompassing portion40a. Inside the processing container10, the gas pipe40is arranged so that the above-mentioned extension portion40bis folded back in a plan view.
Specifically, in the gas pipe40, the extension portion40bis arranged so as to be folded back in the movement direction of the discharge nozzle33aby the drive mechanism32in a plan view, that is, the apparatus width direction (±X-direction inFIG.2), and is arranged so as to be folded back in a direction orthogonal to the apparatus width direction in a plan view, that is, in the apparatus depth direction (±Y-direction inFIG.2). More specifically, the extension portion40bis arranged to travel on the bottom wall10aso as to extend around the interior of the processing container10along the side wall of the processing container10. Therefore, the extension portion40bhas bent portions in the vicinity of four corners in the processing container10. Further, the processing container10has an L-shaped fixture70in a plan view and a linear fixture71in a plan view for fixing the gas pipe40inside the processing container10. The fixture70fixes the bent portions in the vicinity of four corners in the processing container10in the extension portion40bof the gas pipe40to the bottom wall10aof the processing container10. The fixture71fixes a portion of the gas pipe40, which is further downstream than the bent portion at the most downstream of the extension portion40b, to the bottom wall10aof the processing container10. As the material of the fixtures70and71, a metallic material such as stainless steel having higher thermal conductivity than the gas pipe40may be used. A part having a through-hole, which is called a through-joint, may be used as each of the fixtures70and71. However, the through-joints as the fixtures70and71are not used for the purpose of connecting gas pipes to each other, but are used for the purpose of stabilizing the position of the gas pipe40by fixing the through-joints with the gas pipe40inserted through the through-holes of the through-joints.
A metallic material having higher thermal conductivity than the gas pipe40may be used not only for the fixtures70and71but also for the material of the joint60. Next, an example of wafer processing in the resist coating apparatus1will be described. First, the wafer W is transferred into the processing container10and placed and adsorbed on the spin chuck11of any of the processing parts P1and P2. Here, it is assumed that the wafer W is placed and adsorbed on the spin chuck11of the processing part P1. Subsequently, the discharge nozzle33ais moved above the center of the wafer W held by the spin chuck11of the processing part P1with the driving of the drive mechanism32. Then, the wafer W held by the spin chuck11is rotated with the driving of the rotation mechanism12, and the discharge nozzle33adischarges the temperature-adjusted resist liquid to the rotating wafer W. In order to adjust the temperature of the resist liquid, in the resist coating apparatus1, the temperature-adjusted air from the fan filter unit16is first used to adjust the temperature of the inert gas, which passes through the extension portion40bof the gas pipe40, to, for example, about 23 degrees C. Then, in the encompassing portion40aon the downstream side of the extension portion40b, the temperature of the resist liquid flowing through the liquid supply pipe41is adjusted to, for example, about 23 degrees C. by the temperature-adjusted inert gas. After the resist liquid is discharged to form a resist film on the wafer W, the discharge nozzle33ais retracted to the standby part34, and the wafer W is discharged from the processing container10. In this way, the wafer processing is completed. As described above, in the present embodiment, the resist coating apparatus1has the spin chuck11that holds the wafer W, and the discharge nozzle33athat discharges the resist liquid to the wafer W held by the spin chuck11. 
Further, the resist coating apparatus1has the liquid supply pipe41that supplies the resist liquid from the resist liquid storage source to the discharge nozzle33a, and the gas pipe40that encompasses the liquid supply pipe41and through which the inert gas for adjusting the temperature of the resist liquid flows in the space between the gas pipe40and the liquid supply pipe41. Further, the resist coating apparatus1has the processing container10in which the spin chuck11, the discharge nozzle33a, the liquid supply pipe41, and the gas pipe40are provided, and the fan filter unit16that supplies the atmosphere gas into the processing container10. Furthermore, in the resist coating apparatus1, the gas pipe40has the extension portion40bwhich is a portion between the upstream end in the processing container10and the encompassing portion40aencompassing the liquid supply pipe41. Therefore, the inert gas that has exchanged heat with the atmosphere gas from the fan filter unit16in the extension portion40bcan be supplied to the encompassing portion40aencompassing the liquid supply pipe41. Further, in the present embodiment, since the extension portion40bof the gas pipe40is arranged so as to be folded back inside the processing container10in a plan view to lengthen the extension portion40b, the heat exchange can be promoted in the extension portion40b. Therefore, it is possible to more reliably make the temperature of the resist liquid substantially equal to the temperature of the atmosphere gas from the fan filter unit16. That is, the temperature of the resist liquid can be adjusted more reliably. Further, in the present embodiment, since the inert gas is used to adjust the temperature of the resist liquid in the vicinity of the discharge nozzle33ainstead of the temperature regulating water, unnecessary moisture is not mixed in the resist liquid.
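The benefit of lengthening the folded-back extension portion40bcan be sketched with a simple Newton's-law-of-cooling model: the inert gas temperature approaches the roughly 23 degrees C. atmosphere-gas temperature exponentially with the length traveled. This model, the characteristic length, and the inlet temperature below are assumptions for illustration, not values taken from the disclosure.

```python
import math

def gas_temp_after_length(t_in_c: float, t_ambient_c: float,
                          length_m: float, char_length_m: float) -> float:
    """Newton's-law-of-cooling estimate of the gas temperature after flowing
    length_m through the pipe. char_length_m is an assumed lumped constant
    standing in for flow rate, wall conductivity, and pipe diameter."""
    return t_ambient_c + (t_in_c - t_ambient_c) * math.exp(-length_m / char_length_m)

# Gas entering at an assumed 28 degrees C into the ~23 degrees C ambient:
short_run = gas_temp_after_length(28.0, 23.0, 0.5, char_length_m=0.5)
long_run = gas_temp_after_length(28.0, 23.0, 2.0, char_length_m=0.5)
print(round(short_run, 1), round(long_run, 1))  # 24.8 23.1
```

Folding the extension back to quadruple its length (0.5 m to 2.0 m in this sketch) brings the gas within about 0.1 degrees C. of the ambient instead of about 1.8 degrees C., which is the sense in which the longer extension portion "promotes" the heat exchange.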
In a case in which a metal-containing resist liquid is used as the resist liquid, when unnecessary moisture is mixed in the resist liquid, the line width of the resist pattern may fluctuate. However, according to the present embodiment, the fluctuation as described above can be prevented. Unlike the technique according to the present embodiment, a method of adjusting the temperature of the resist liquid by using an inert gas whose temperature is adjusted outside the resist coating apparatus1may be considered. However, in the case of this method, it is necessary to dispose a temperature adjusting mechanism of the inert gas separately from the resist coating apparatus1, and the position of the temperature adjusting mechanism needs to be in the vicinity of the resist coating apparatus1so as to prevent the temperature of the inert gas from being affected by the outside before it reaches the resist coating apparatus1. Therefore, the above-mentioned method different from the technique according to the present embodiment has a problem in an installation space of the resist coating apparatus including the inert gas temperature adjusting mechanism. In contrast, in the present embodiment, since the inert gas temperature adjusting mechanism does not need to be arranged separately from the resist coating apparatus1, the space related to the resist coating apparatus1can be reduced. Thus, the above-mentioned problem in the installation space does not occur. Further, in the present embodiment, a metallic material having higher thermal conductivity than the gas pipe40is used as the material of the fixtures70and71for fixing the gas pipe40inside the processing container10. Accordingly, the fixtures70and71are easily heated and cooled down by the atmosphere gas from the fan filter unit16.
Therefore, the heat exchange between the atmosphere gas from the fan filter unit16and the inert gas flowing through the gas pipe40can be further promoted at the arrangement positions of the fixtures70and71. Further, in the present embodiment, a metallic material having higher thermal conductivity than the gas pipe40is used as the material of the joint60. Accordingly, the joint60is easily heated and cooled down by the atmosphere gas from the fan filter unit16. Therefore, the heat exchange between the atmosphere gas from the fan filter unit16and the inert gas flowing through the gas pipe40can be further promoted at the arrangement position of the joint60. When the gas pipe40is divided into a plurality of sections which are connected to each other by respective joints, a metallic material having higher thermal conductivity than the gas pipe40may also be used as materials of the joints. FIG.4is an enlarged plan view showing a specific example of the gas pipe40. As shown inFIG.4, the outer circumference of the gas pipe40may be formed in a bellows shape. The surface area of the outer circumferential surface of the gas pipe40can be increased by forming the outer circumference of the gas pipe40in the bellows shape. As a result, the heat exchange between the atmosphere gas from the fan filter unit16and the inert gas flowing through the gas pipe40can be further promoted. Further, an outer diameter R1of the gas pipe40may be larger than an outer diameter R2of the introduction pipe50that introduces the inert gas from the inert gas storage source into the processing container10. This makes it possible to increase the surface area of the outer wall surface of the gas pipe40. As a result, the heat exchange between the atmosphere gas from the fan filter unit16and the inert gas flowing through the gas pipe40can be further promoted. In some embodiments, an inner diameter of the gas pipe40may be larger than an inner diameter of the introduction pipe50.
As a result, since a flow velocity of the inert gas inside the gas pipe40can be reduced, the heat exchange can be further promoted. In the above example, a single liquid supply pipe41is encompassed in the gas pipe40. However, a plurality of liquid supply pipes41may be encompassed in the gas pipe40.

FIG.5is a sectional view for explaining another example of the liquid supply pipe41. As shown inFIG.5, the liquid supply pipe41may have a double pipe structure composed of an inner pipe41aand an outer pipe41b. A metal-containing resist liquid as a resist liquid may be flowed into the inner pipe41aand an acid solvent may be flowed between the inner pipe41aand the outer pipe41b. When moisture is mixed in the metal-containing resist liquid, which is originally adjusted to be acidic, so that the metal-containing resist liquid becomes neutral, between a hydrolysis reaction and a polycondensation reaction, the percentage of the polycondensation reaction becomes large. As a result, the number of particles and the line width may become large. In contrast, as in this example, by flowing the acid solvent to the outside of the inner pipe41athrough which the resist liquid flows, even if moisture is mixed in the resist liquid, the acid solvent is intentionally mixed in the resist liquid. Thus, the resist liquid can be kept acidic. Accordingly, the increase in the number of particles and the increase in the line width can be prevented. As the acid solvent, for example, a mixture of a carboxylic acid such as formic acid or acetic acid with an organic solvent may be used. Further, it is preferable that the liquid supply pipe41is thicker than the conventional one. By making the liquid supply pipe41thicker, the amount of moisture that permeates through the liquid supply pipe41and is mixed in the resist liquid discharged from the discharge nozzle33acan be further suppressed.
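Two of the quantitative points above lend themselves to back-of-envelope checks: at a fixed volumetric flow rate, the continuity relation v = Q/A means a wider bore slows the gas (more residence time for heat exchange), and for steady-state permeation through a polymer wall, the moisture flux scales inversely with wall thickness. Both formulas are standard textbook estimates, not models given in the disclosure, and all numbers are illustrative assumptions.

```python
import math

def mean_gas_velocity_m_s(flow_l_min: float, inner_diameter_mm: float) -> float:
    """Mean velocity v = Q/A for volumetric flow Q through a circular bore."""
    q_m3_s = flow_l_min / 1000.0 / 60.0                          # L/min -> m^3/s
    area_m2 = math.pi * (inner_diameter_mm / 2.0 / 1000.0) ** 2  # mm -> m radius
    return q_m3_s / area_m2

def permeation_rate(permeability: float, area: float,
                    delta_pressure: float, wall_thickness: float) -> float:
    """Steady-state permeation rate J = P * A * dp / d (arbitrary units)."""
    return permeability * area * delta_pressure / wall_thickness

# Doubling the bore at the same assumed 5 L/min flow cuts the velocity by 4x:
v_ratio = mean_gas_velocity_m_s(5.0, 4.0) / mean_gas_velocity_m_s(5.0, 8.0)
print(round(v_ratio, 1))  # 4.0

# Tripling the wall thickness cuts the moisture permeation to one third:
j_ratio = permeation_rate(1.0, 1.0, 1.0, 0.5) / permeation_rate(1.0, 1.0, 1.0, 1.5)
print(round(j_ratio, 1))  # 3.0
```

The same inverse-thickness scaling is why raising the internal pressure of the liquid supply pipe, discussed next, also works against moisture ingress: it reduces the inward partial-pressure driving force across the wall.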
Further, it is preferable that an internal pressure of the liquid supply pipe41through which the resist liquid flows is higher than that of the conventional one. By making the internal pressure of the liquid supply pipe41higher, the amount of moisture that permeates through the liquid supply pipe41and is mixed in the resist liquid discharged from the discharge nozzle33acan be further suppressed. In the above example, the gas pipe40is arranged so that the extension portion40bextends around the interior of the processing container10. That is, the gas pipe40is arranged such that the extension portion40bis folded back in the apparatus width direction and is also folded back in the apparatus depth direction. Without being limited to this example, the gas pipe40may be arranged such that the extension portion40bis folded back only in the apparatus width direction, or only in the apparatus depth direction. Further, in the above examples, the resist liquid is used as the treatment liquid to be supplied to the discharge nozzle, but the treatment liquid is not limited thereto. For example, the treatment liquid may be a coating liquid for forming a coating film by spin coating or the like, other than the resist liquid. More specifically, the treatment liquid may be a coating liquid containing both organic and inorganic substances (for example, a coating liquid used for forming a SiARC film or a Spin On metal film) other than the metal-containing resist. It should be noted that the embodiments disclosed herein are exemplary in all respects and are not restrictive. The above-described embodiments may be omitted, replaced or modified in various forms without departing from the scope and spirit of the appended claims. The following configurations also belong to the technical scope of the present disclosure.
(1) A liquid treatment apparatus includes: a substrate holder configured to hold a substrate; a discharge nozzle configured to discharge a treatment liquid onto the substrate held by the substrate holder; a liquid supply pipe configured to supply the treatment liquid from a treatment liquid storage source to the discharge nozzle; a gas pipe that encompasses the liquid supply pipe and through which an inert gas for adjusting the temperature of the treatment liquid flows in a space between the gas pipe and the liquid supply pipe; a processing container in which the substrate holder, the discharge nozzle, the liquid supply pipe, and the gas pipe are provided; and an atmosphere gas supply part configured to supply an atmosphere gas into the processing container, wherein an extension portion of the gas pipe is folded back inside the processing container in a plan view, the extension portion being a portion between an upstream end inside the processing container and an encompassing portion that encompasses the liquid supply pipe.

According to (1) above, it is possible to prevent moisture from being mixed into the treatment liquid when the temperature of the treatment liquid to be supplied to the discharge nozzle is adjusted. Further, the temperature-adjusted atmosphere gas from the atmosphere gas supply part is used to adjust the temperature of the inert gas used for adjusting the temperature of the treatment liquid. By providing the gas pipe as in (1) above, the heat exchange between the inert gas and the atmosphere gas can be further promoted.

(2) The liquid treatment apparatus of (1) above further includes a moving mechanism configured to move the discharge nozzle in a predetermined direction in the plan view.

(3) In the liquid treatment apparatus of (2) above, the extension portion of the gas pipe is folded back in a movement direction of the discharge nozzle by the moving mechanism.
(4) In the liquid treatment apparatus of (2) or (3) above, the extension portion of the gas pipe is folded back in a direction orthogonal to the movement direction of the discharge nozzle by the moving mechanism.

(5) In the liquid treatment apparatus of any one of (2) to (4) above, the extension portion of the gas pipe extends around an interior of the processing container.

(6) The liquid treatment apparatus of any one of (1) to (5) above further includes a fixture configured to fix the gas pipe inside the processing container, wherein the fixture is made of a metallic material having higher thermal conductivity than that of the gas pipe. According to (6) above, the heat exchange between the inert gas and the atmosphere gas can be further promoted.

(7) In the liquid treatment apparatus of any one of (1) to (6) above, an outer circumference of the gas pipe is formed in a bellows shape. According to (7) above, the heat exchange between the inert gas and the atmosphere gas can be further promoted.

(8) In the liquid treatment apparatus of any one of (1) to (7) above, the processing container has a joint that connects an introduction pipe and the gas pipe, the introduction pipe being configured to introduce the inert gas from an inert gas storage source into the processing container, and the joint is made of a metallic material having higher thermal conductivity than that of the gas pipe. According to (8) above, the heat exchange between the inert gas and the atmosphere gas can be further promoted.

(9) In the liquid treatment apparatus of any one of (1) to (8) above, the gas pipe has an outer diameter larger than that of the introduction pipe configured to introduce the inert gas from the inert gas storage source into the processing container. According to (9) above, the heat exchange between the inert gas and the atmosphere gas can be further promoted.

(10) There is provided a method of adjusting a temperature of a treatment liquid in a liquid treatment apparatus.
The liquid treatment apparatus includes: a substrate holder configured to hold a substrate; a discharge nozzle configured to discharge the treatment liquid onto the substrate held by the substrate holder; a liquid supply pipe configured to supply the treatment liquid from a treatment liquid storage source to the discharge nozzle; a gas pipe that encompasses the liquid supply pipe and through which an inert gas for adjusting the temperature of the treatment liquid flows in a space between the gas pipe and the liquid supply pipe; a processing container in which the substrate holder, the discharge nozzle, the liquid supply pipe, and the gas pipe are provided; and an atmosphere gas supply part configured to supply an atmosphere gas into the processing container, wherein an extension portion of the gas pipe is folded back inside the processing container in a plan view, the extension portion being a portion between an upstream end inside the processing container and an encompassing portion that encompasses the liquid supply pipe. The method includes: adjusting a temperature of the inert gas flowing through the gas pipe with the atmosphere gas from the atmosphere gas supply part; and adjusting the temperature of the treatment liquid flowing through the liquid supply pipe with the inert gas having the adjusted temperature. According to the present disclosure in some embodiments, when a temperature of a treatment liquid to be supplied to a discharge nozzle is adjusted, it is possible to prevent unnecessary moisture from being mixed into the treatment liquid. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosures. Indeed, the embodiments described herein may be embodied in a variety of other forms. 
Furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the disclosures. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosures. | 30,469 |
11860543 | DETAILED DESCRIPTION OF THE INVENTION The transport devices, systems, and methods described herein generally relate to moving photopolymer plates lying on a flat horizontal support surface from one production process to the next. The plates are grabbed and lifted slightly by vacuum gripper end effectors (i.e., suction cups), which, in preferred arrangements, provides enough space to permit air to be blown between the plate and the support surface, creating an air cushion that reduces friction between plate and support surface. In exemplary systems, the plate is moved from a reservoir stack into the imager; after imaging it is moved to the UV exposure stage; and after UV exposure, on to the washing processor. The system may comprise fewer than all of the above stations, however, and may be useable for transport of plates between any set of process step stations. Movement of a plate through an exemplary plate processing workflow is depicted inFIG.2, which also depicts the overall plate movement schematically.FIG.1Bdepicts an exemplary physical layout of a portion of the exemplary workflow. The plate is first, optionally, taken from a location210in a plate reservoir stack or plate pickup area (e.g. table212) and moved in the direction of arrow1to a marked staging location220(markings described later herein). From the staging location220, the plate is moved into location230in the imager (which may be similar to imager110depicted inFIG.1A) in the direction of arrow2, and the plate is again disposed in the staging location220when it is discharged by the imager in the direction of arrow3.
The plate is then transported in the direction of arrow4to a location240in a photopolymer curing station, such as a UV exposure station (which may be similar to exposure unit120depicted inFIG.1A), and then the plate is transported in the direction of arrow5to a next station, which may be a location250in a plate washing processor station, or a pre-washing loading area where the plate is punched and pins are placed therein for use in a transport system through the washing station. Thus, the system as described herein may be used for executing each of the following process steps:
taking the non-processed plate from reservoir stack210in the direction of arrow1;
feeding the plate into imager230in the direction of arrow2;
removing the plate from the imager230in the direction of arrow3;
feeding the plate to the UV exposure unit240in the direction of arrow4; and
feeding the plate from the UV exposure unit240to the washing processor250in the direction of arrow5.
In one embodiment, a row of vacuum gripper end effectors (i.e., suction cups), as are well known in the art, picks up the plate from the top surface of the plate adjacent edges that are aligned orthogonal to the movement direction. The thick black lines260,262,264inFIG.2indicate edges of the plate adjacent to the gripping areas on the surface of the plate where the suction cups pick up the plate, in an exemplary embodiment. Thus, for example, edge260is the trailing edge of the plate when moving in the direction of arrow2in the Y direction into the imager, and the leading edge of the plate when moving in the direction of arrow3in the −Y direction out of the imager. Likewise, edge262is the leading edge of the plate when moving in the direction of arrow1in the X direction from the plate reservoir stack in the location depicted inFIG.2and when moving in the direction of arrow4in the X direction from location220to the exposure unit240.
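The transport sequence above can be summarized as an ordered list of station-to-station moves. A minimal illustrative sketch in Python; the station names and the `next_move` helper are assumptions for illustration only, not part of the described system:

```python
# Ordered plate-transport moves (arrows 1-5 in FIG. 2), expressed as
# (source station, destination station) pairs.
WORKFLOW_MOVES = [
    ("reservoir_stack_210", "staging_220"),   # arrow 1
    ("staging_220", "imager_230"),            # arrow 2
    ("imager_230", "staging_220"),            # arrow 3
    ("staging_220", "exposure_240"),          # arrow 4
    ("exposure_240", "washer_250"),           # arrow 5
]

def next_move(completed_moves):
    """Return the next (source, destination) pair, or None when the
    plate has passed through the whole workflow."""
    if completed_moves >= len(WORKFLOW_MOVES):
        return None
    return WORKFLOW_MOVES[completed_moves]
```

As the description notes, a system may comprise fewer than all of these stations, in which case the list would simply be shortened.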
Non-cured plates are sensitive to being damaged in the LAMS layer by the suction cups, which may warp the plate surface in a way that persists in the plate surface and transfers into the print, if there is a mask opening in the LAMS layer at the same location as the warping, allowing the curing step to permanently fix the plate in the warped configuration. Thus, it is ideal to avoid image information in locations where the suction cups are applied, meaning the lifting area of the plate contacted by the suction cups should be as close as possible to the relevant edge of the plate. To reduce the forces needed for the movement of the plate, compressed air is preferably blown between the lifted edge of the plate and the support surface, creating an air cushion, which reduces the friction between plate and support surface. The overall lift of the plate by the plate handling system is preferably minimal—enough to permit the air to blow beneath the plate to create the friction-reducing air cushion—but no more than is needed for this purpose. In general, a lifting distance in the Z direction on the order of 2 mm may be sufficient. The vacuum end effectors are not limited to any particular geometry, and may be, for example without limitation, round, square, rectangular, or nearly linear in nature. While round end effectors are depicted, the invention is not limited to any particular geometry, and other geometries (such as a rectangular geometry with the long edge parallel to the edge of the plate adjacent the pick-up area) may be used to minimize the overall margin of non-imaged area required around the edge of the plate to provide non-imaged surfaces for interacting with the plate handler. Round end effectors are generally preferred, however, because forces are more evenly distributed for round end effectors, particularly at the edge of the end effector adjacent the edge of the plate from which the plate is grabbed.
For example, the holding forces when moving the plate in the y-direction using a rectangular effector differ between the edge of the effector parallel to edge260of the plate and the edge of the effector parallel to edge262. When the relatively longer side of a rectangular end effector is aligned parallel to the plate edge, grabbing the plate at side260provides less holding force for the y-movement than grabbing from side262. Although four vacuum end effectors are depicted in the figures, the invention is not limited to any particular number of end effectors. In exemplary embodiments configured for lifting plates having dimensions of 50×80 and 35×48 inches, respectively, it was found that four circular end effectors having a diameter of less than 1 inch, such as ½ inch diameter, evenly spaced in a linear configuration over a distance of 20 to 25 inches, provided adequate lifting characteristics. It should be understood that the embodiments as described herein are not limited to use for any particular size plates, and may be suitable for use in connection with any size of plates known, including but not limited to other standard sizes, such as 42×60 and 25×30 inch sizes, or portions thereof. Accurate orientation of the plate relative to the imager intake and relative to the punch that punches holes into the plate downstream of the exposure station (for transportation through the washing processor) is often important. If the plate is not angled at the preferred orientation in the punching step (which punches from the top surface to the bottom surface of the plate adjacent trailing edge262of the plate), plate material may only incompletely surround the holes (and pins later inserted in the holes), and thus the plate may not be sufficiently fixed to the pins. The punching step is typically performed at a punching station located adjacent an entrance to the plate washing processor.
Thus, it is preferred to align the relevant plate edge precisely parallel to the row of pins that punch holes into the plate at this step. Likewise, a drum imager is configured to receive the plate and grab the leading edge268of the plate using a clamp. The imaging step benefits from precise alignment of the leading edge in the clamp, so that the imaged information is properly aligned on the plate. Thus, an optimal plate handling system is capable of positioning the plate and the suction cups very precisely, preferably within 1 mm of tolerance. As shown inFIG.2, the handling system moves the plate in two directions (X, Y) along a plane defined by the staging location220for moving the plate into and out of the imager and the flat bed of the exposure unit. Because both the edge268first received by the imager and the edge262first received by the washing station must be aligned, alignment parallel to both the X and Y axes may be critical. To facilitate parallel alignment of the relevant edges of the plate with both axes, the plate handling unit has another degree of freedom. Exemplary plate handler embodiments500and600are depicted inFIGS.4A-4F.FIGS.4A-4Care schematic drawings that depict an exemplary embodiment, not drawn to scale, showing various components of an exemplary system as described herein. Some elements are shown in all figures, whereas other elements are omitted in some figures to reduce clutter.FIGS.4D-4Fare photographs of an exemplary prototype embodiment600, showing one of many ways in which various aspects of the invention may be reduced to practice. Common elements betweenFIGS.4A-4Fare given the same element numbers, but it should be understood that the schematic features shown inFIGS.4A-4Cmay be embodied in any number of ways, and not necessarily as depicted in prototype600. As shown in the exemplary embodiments, a bar510with a row of vacuum end effectors512(suction cups) is mounted to a motorized rotation stage514.
Vacuum end effectors512are connected to a source820of vacuum, optionally controlled via a control valve (not shown), for controlling the state of the end effectors as pulling a vacuum or not pulling a vacuum. Vacuum end effectors and control thereof are well known in the art and specific designs are not detailed further herein. Rotation stage514is mounted on a Y linear stage516configured to provide movement in the Y direction; carriage518travels along the X-direction and may also be configured to move up and down in the Z direction. Rotation stage514may also provide Z translation functionality to lower the bar with the suction cups down to the plate and then lift the plate up in the Z direction in some embodiments. Rotation stage514may, for example, comprise a bar mounting plate550to which bar510is connected. In one embodiment, bar mounting plate550may be connected to a translatable and rotatable hub520capable of translating in the Z direction and also rotating about the Z-axis. In the embodiment depicted inFIGS.4D-4F, hub520may be configured only for rotation about the Z-axis. Z translation may be effected by a Z-translation stage581disposed vertically between carriage582, which is configured to run on rails580extending across the workflow portion covered by the transporter, and the cantilevered portion584of carriage518, as depicted inFIG.4F. The Z-stage may be disposed within an enclosure586, which may also house any of the control system drivers, solenoids, relays, processors (or portions of processing systems), and wiring connecting various components of the plate handling system. As described above, precise positioning of the suction cups on the plate is desirable for multiple reasons, and therefore it is also desirable to determine the location of the plate edges precisely.
In an exemplary embodiment, a sensor interacting with the markings in staging location220measures the location of the plate edge relative to four locations, as further described with reference toFIG.3. In comparison toFIG.2,FIG.3schematically depicts a plan view of a plate400in staging location220, oriented with edge268facing the top of the page inFIG.3. In a preferred embodiment, one or more contrast sensors560(shown inFIGS.4B and4E, but omitted in other figures to reduce clutter), each configured to detect a change in reflectivity between the plate and markings in the staging location, are used for determining location. In the exemplary embodiment, a plurality of reflector strips A1, A2, B1, B2are affixed to the underside of a transparent or translucent support surface, such as the glass support surface114depicted inFIG.1B. Suitable contrast sensors include, for example, a Wenglor Laser contrast sensor, part number YM24PAH2ABF. The invention is not limited to any particular type of sensor, type of contrast sensor, or wavelength of operation. However, because the contrast sensor typically sends out radiation, such as in the form of a focused laser beam at a specific wavelength or band of wavelengths, and measures the amount of reflected light, the wavelength and intensity of radiation used by the sensor are optimally selected so as not to cause curing of the photopolymer or ablation of the LAMS layer. The invention is not limited to embodiments with transparent or translucent support surfaces in some or all portions of the workflow. The use of opaque support surfaces may provide certain advantages, such as being able to use contrast sensors that can directly detect the difference between an opaque support surface and the polymer plate without using stripes marked on the support surface, as described herein. Lines A1, B1, A2, and B2inFIG.3comprise exemplary markings for the contrast measurement.
In embodiments in which the support surface is made of glass or other material that is transparent to the radiation range of the sensor, the stripes may be attached to the bottom side of the support surface. The sensor is preferably connected to the plate handler in a way that is fixed relative to the Y linear stage516so that it is moveable along the X and Y axes with the other elements of the plate handler, but does not rotate or translate along the Z axis, as schematically depicted inFIG.4B. Parallelism of the plate is measured with respect to the X-direction by scanning stripes A1and A2(e.g. if A1is greater in length than A2, then some adjustment is needed). Scanning stripes B1and B2provides information regarding the location of the beginning and end of the plate in the transportation direction (X). Stripes B1and B2may be a continuous stripe with no gap between portions B1and B2, or each of B1and B2may terminate in locations where any size plate for which the system is designed would be expected to block the unstriped portion. A1and A2may similarly terminate near the expected edge268of the plate, or in a central location where any size plate for which the system is designed would block any unstriped portions. In an exemplary embodiment, scanning may be performed in the order: A1, B1, A2, B2. In one embodiment, the plate may be placed on staging location220by an operator or by a prior step of the present plate handling system (or some other plate handling system). The contrast sensor560scans the plate relative to the four locations, and the system calculates adjustments required to orient the plate relative to the stripes, picks up the plate, and rotates and translates the plate, as required. In one embodiment, the orientation of the plate is performed by grabbing the plate in the gripping area adjacent edge260and adjusting plate location.
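The parallelism adjustment implied by the A1/A2scan reduces to simple geometry: the plate edge is detected at two known X positions, and the angle of the measured edge relative to the X axis is the rotation the handler must apply. A hedged sketch, assuming the sensor yields an (x, y) edge point at each stripe; the function name and coordinate convention are illustrative, not taken from the described system:

```python
import math

def rotation_correction(x1, y1, x2, y2):
    """Angle (radians) to rotate the plate so that the edge measured at
    points (x1, y1) and (x2, y2) becomes parallel to the X axis.
    Returns 0.0 when the two measured edge points are already level."""
    # Angle of the measured edge above the X axis; negate it to correct.
    return -math.atan2(y2 - y1, x2 - x1)
```

For a 1 mm edge offset over a 1000 mm stripe separation, the correction is about 0.001 rad (0.057°), consistent with the roughly 1 mm positioning tolerance discussed above.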
This adjustment step may be performed before moving the plate into a plate receiving position relative to the imager, to ensure the plate is engaged in a proper orientation by the clamp on the imager drum. In embodiments in which, for example, the plate is loaded by a human operator into the imager, or in which orientation in the imager is not as critical as orientation in the punching step upstream of the washing station, orientation adjustment prior to loading the plate into the imager may not be necessary. A plate orientation adjustment step may instead, or additionally, be performed when the plate is discharged from the imager, preferably before the plate is completely pulled out of the imager. Exemplary imagers are capable of unloading the plate to the support surface of staging area220automatically, but cannot fully push the plate onto the support surface, which requires the plate handling system to pull the plate a certain distance away from the imager (in the −Y direction as depicted by arrow3inFIG.2). Thus, just before picking up the plate, the plate handler can assess orientation of the plate relative to the markings, adjust the position of the suction cups accordingly, based on the orientation of the plate, pick up the plate, and align the plate precisely prior to dropping the plate from the gripping area adjacent edge260, and again picking up the plate adjacent edge262for movement in the X direction. In still another embodiment, the orientation step may additionally or instead be performed by the plate handler between the steps of pulling it from the imager adjacent edge260and picking it up again from edge262. In some embodiments, the imager loading, imaging, and discharging process may be sufficiently reliable such that a single orientation step prior to loading into the imager may be sufficient. 
However, in preferred embodiments, the plate handling system uses the sensor to detect the edge260of the plate relative to stripes A1and A2before grabbing the plate with the end effectors to pull it from the imager. This edge detection information provides sufficient information to adjust parallelism of the plate relative to A1and A2, so the system can perform any necessary rotation before dropping the plate adjacent edge260or after picking up the plate adjacent edge262. The sensor may also be used for sensing the location of edge262relative to B2in order to place the end effectors in the correct location prior to lifting the plate for transporting it to the curing station. The sensor may be used prior to each picking step to ensure the end of the plate is in the expected location, or the handling system may rely upon stored information for gripping area locations, based upon expected (or measured) size of the plate and locations previously measured in process. An exemplary orientation of the exemplary plate handler relative to an imager110having a glass support surface114that serves as staging area220for orienting plates going into and/or coming out of the imager is depicted inFIGS.1B,2and3. Markings in the form of stripes A2and B2of reflective tape are affixed to the underside of glass114. Air blade570, connected to a pressurized air source800via various conduits (not shown) and controlled by one or more valves (also not shown), is shown positioned adjacent the vacuum end effectors512and pointed so that air is blown in a direction underneath plate400from a location on the other side of the vacuum end effectors from the plate. The operation of air blades and the control thereof are well known in the art and are not detailed further herein, except with respect to one exemplary, non-limiting embodiment.
Air blade570is connected to the plate handling system in a fixed configuration relative to bar510(as shown, fixed to bar mounting plate550), so that the air blade remains in a fixed location relative to the vacuum end effectors. In a preferred embodiment, the vacuum end effectors move in the Z direction in a fixed relationship with the air blade, with the air blade disposed in a location and at an angle at which it directs air at the interface of the plate and a surface on which the plate rests, for a known plate thickness. The relationship between the air blade and the vacuum end effectors may also be adjustable for different plate thicknesses. The air blade rotates with mounting plate550and bar510so that it is always directed from “behind” the end effectors relative to the plate. Rotation stage514thus rotates hub520so that the air blade is in the desired position regardless of whether the plate handler is pushing or pulling the plate. Although systems with an air blade are preferable to reduce the friction of the plate sliding on the surface and thus the gripping force needed to be exerted by the suction cups, embodiments without an air blade may also be provided. The air blade is not shown schematically in the other figures, to reduce clutter. In some embodiments, depicted inFIGS.6A-6C, each vacuum end effector512may be connected to bar510via a fitting900, which fitting also serves as air blade570. Thus, the air blade nozzles are directly connected to the end effectors and move with them in a fixed relationship. The air blades are aligned such that they direct air into the gap between the polymer plate and the support surface, after the polymer plate has been lifted. In preferred embodiments, air is directed through the air blade before the polymer plate is moved in the x or y direction.
Air hose910connects to fitting900, such as via elbow connector912, and is connected to air supply800(depicted schematically inFIGS.4A and6C), preferably via a common manifold (not shown) to which the hoses connected to the respective air blades connect. Air blade570may comprise outlet922in fitting900, which may be the outlet of a venturi nozzle924in communication with air hose910. Air blade570is configured to direct air914toward the area between the bottom of plate400and the surface114on which the plate rests when the vacuum end effector is in contact with the plate. In the embodiment depicted, air914moving through the narrow portion950of venturi nozzle924causes the venturi effect that pulls air into the nozzle through the connected end effectors512, thus creating a vacuum at the end effectors. It should be understood that the embodiment of air blade570depicted inFIGS.6A-6Cis only one embodiment having many alternatives. In other embodiments, fitting900may be connected via a vacuum hose (not shown) to a vacuum source that is not integrated with the air blade fitting, such as a vacuum pump. Various systems for providing vacuum and air, in either integrated or separate systems, including manifolds, conduits, connectors, and the like are known, and embodiments of the invention may comprise any such systems known in the art without limitation, which systems are not detailed herein further. As depicted inFIG.4A, carriage518is preferably cantilevered from one side of the processing workflow, running on rails580that extend a necessary distance between the stations of the workflow (e.g.210,220,240and250as depicted inFIG.2). A cantilevered design obviates any need for a vertical support that would interfere with plate transit along arrows2and3(as depicted inFIG.2) in the Y direction between the imager230and the staging area220.
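The venturi action described above, in which air forced through the narrow portion950creates suction at the connected end effectors, follows from continuity and Bernoulli's principle: air speeds up in the constriction, so its static pressure drops below the inlet pressure. A simple incompressible-flow illustration; the function and all numbers are illustrative assumptions, not parameters of the described fitting:

```python
def throat_pressure(p_in, v_in, area_ratio, rho=1.2):
    """Static pressure (Pa) in a venturi throat for incompressible flow.
    area_ratio = inlet area / throat area (> 1 for a constriction);
    rho is air density in kg/m^3."""
    v_throat = v_in * area_ratio  # continuity: A_in * v_in = A_throat * v_throat
    # Bernoulli along a streamline: p + 0.5*rho*v^2 is conserved.
    return p_in + 0.5 * rho * (v_in**2 - v_throat**2)
```

With ambient inlet pressure and a 3:1 area ratio at 20 m/s, the throat pressure falls roughly 2 kPa below ambient, which is the pressure differential available to hold the plate against the suction cups in this simplified picture.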
Thus, with reference to the exemplary system ofFIG.1B, rails580are located on the side of the staging surface114opposite slit112of the imager230. The prototype600depicted inFIGS.4D-4Falso shows an exemplary mechanism for moving the Y linear stage516along the Y-axis. As shown best inFIG.4E, Y linear stage516comprises a pair of rollers610that run on rails612of carriage518, and are moved along the rails612by a tooth belt614operated by a stepper motor (not shown), as is known in the art in connection with linear positioner systems, and is configured to precisely position the Y linear stage in a desired position. Likewise, carriage518may comprise a linear positioner configured to precisely position the stage in a desired position along rails580, which in the embodiment depicted inFIGS.4D-4Fmay be in the form of a toothed rack that interfaces with a toothed gear driven by a stepper motor (not shown). The invention is not limited to any particular embodiments for causing movements in the various directions with the desired degrees of freedom, precision, and repeatability, as many alternatives for providing equivalent operability are well known in the art. As depicted inFIGS.4D and4F, the various components of the system may be connected via various wires620(or wirelessly) to a control system630, which is connected to a power source640, along with any electrically powered components of the plate handling system. Some or all of the power to the plate handling system may be run through the control system, and some portions of the system may be directly powered by the power source, with the controller causing instructions to be sent to various switches, relays, and the like that control the various functions of the moving parts.
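Belt-and-stepper positioning of the kind described above comes down to converting a commanded travel distance into whole motor steps. A minimal sketch with hypothetical drive parameters (2 mm belt pitch, 20-tooth pulley, 200 full steps per revolution, 16x microstepping); none of these values are specified in the described system:

```python
def mm_to_steps(distance_mm, belt_pitch_mm=2.0, pulley_teeth=20,
                steps_per_rev=200, microsteps=16):
    """Convert a linear travel distance along the belt into the nearest
    whole number of stepper microsteps."""
    mm_per_rev = belt_pitch_mm * pulley_teeth  # linear travel per motor revolution
    return round(distance_mm / mm_per_rev * steps_per_rev * microsteps)
```

With these assumed parameters one revolution moves the stage 40 mm, so a single microstep corresponds to 12.5 µm of travel, comfortably inside the roughly 1 mm positioning tolerance discussed earlier.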
Control system630may comprise one or more computer processors of any type known in the art, and may comprise components locally mounted at the physical location of the process workflow stations as well as components remotely located and connected via wires or wirelessly to components located locally with the workflow stations. The processor is connected to the various drivers of the system components as described herein, and is programmed with machine-readable instructions residing in a computer memory, for causing the various functions as described herein. A user interface650(such as a touchscreen like140,142depicted inFIG.1B) may permit a human operator to select various functions of the system to be performed. The process may be fully automated to minimize the need for operator attention in at least some modes of operation, although interactions between a human operator and the computer process may be desired to trigger each function in other modes of operation (such as in a mode for servicing the equipment). In an exemplary user interface, a menu of options listing the various functions of which the system is capable may be displayed on the display of the human operator, and the operator may be able to use a touch screen of the display interactively to select which function to trigger. Alternatively, the user may be able to select an automated mode by which the system interacts with the various other portions of the workflow in a coordinated fashion automatically. For example, in the system as described herein, the method steps and programmed functions for using the plate handling system may include, as depicted in the flowchart ofFIG.5:
a) moving (e.g. pulling) a plate from the plate reservoir to the imager staging area in step700;
b) checking and/or correcting orientation of a plate relative to sensed markings in the imager staging area in step710;
c) moving (e.g. pushing) a plate to a loading position of the imager in step720;
d) moving (e.g. pulling) the plate from the imager to the staging area in step730;
e) moving (e.g. pulling) the plate from the staging area to an exposure position on the exposure unit in step740; and
f) moving (e.g. pushing) the plate from the exposure position on the exposure unit to the next position in the workflow in step750.
The next position in the workflow may be any operation known in the art, but in exemplary processes, the next location may include a punching station wherein the holes for inserting the pins of the plate transport system are punched in the plate. The pins are then inserted in the holes and fastened to the plate in the manner known in the art, and the washing station transport system may then move the plates using the pins in subsequent steps. In embodiments in which the punching station and the pin loading station are not co-located, the plate handling system as described herein may be used for moving the plate from the punching station to the pin loading station, and may retain a grip on the plate until the pins have been inserted and fastened. It should be understood that the system may be capable of handling multiple plates at different portions of the workflow simultaneously, meaning that while the steps700-750as depicted inFIG.5may be sequentially performed for each plate, the system itself may perform the steps out of order for a plurality of plates. For example, the system may perform each of steps700,710,720,730, and740for a first plate, and then perform steps700,710and720for a second plate. At some point after completion of the exposure step for the first plate, the plate handling system may then perform step750for the first plate, at which time the system is available to perform step740for the second plate. Step730may be performed for the second plate at any time after performance of steps700,710and720for the second plate and performance of step740for the first plate.
After performance of step740for the second plate, the workflow has vacancy to permit the plate handler to perform steps700,710,720, and730for a third plate. The controller may be configured to store the last location of each plate and the user interface may display the schematic location of each plate in the system and the next step for selection by a human operator. The user interface may be programmed to disable performance of any steps not cleared to be performed because of a blocking plate in the workflow. For example, if a first plate is in the imager, the controller may disable performance of steps700,710, and720for a second plate, until step730has been conducted for the first plate (or until the user selects an override function indicating that the first plate has otherwise been cleared). While systems may be fully automated for operation without the involvement of a human operator, using various sensors and communications among process stations to administer performance of the method steps, typical workflows will include interactions with a human operator to trigger performance of each step in the process. The system may also be capable of performing other steps not discussed herein, such as moving a plate to a cutting station. The system may also permit the operator to initiate performance of plate orientation correction step710at any point in the workflow when the plate is in the staging area. Although described primarily with a marked staging area at a location adjacent the imager, it should be understood that marked staging locations for correcting orientation of the plate may be at other locations in a workflow. Step710reflects checking and/or changing the orientation of the plate, because the sensor may detect that the plate is perfectly oriented and no correction or change in orientation is required.
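The blocking-plate interlock described above can be modeled as a small resource check: a step is disabled for one plate while another plate occupies the resource that step needs. The step numbers follow FIG.5; the resource mapping and the `step_allowed` helper are illustrative assumptions, not the described controller:

```python
# Resource each step needs to have free (illustrative mapping):
# steps 700-720 form the load-into-imager sequence, so all of them
# are blocked while another plate occupies the imager.
STEP_RESOURCE = {
    700: "imager", 710: "imager", 720: "imager",
    730: "staging", 740: "exposure", 750: "washer",
}

def step_allowed(step, plate, plate_locations):
    """True if `plate` may perform `step`; False if another plate
    currently occupies the resource the step needs.
    plate_locations maps plate id -> current station."""
    needed = STEP_RESOURCE[step]
    return all(loc != needed or p == plate
               for p, loc in plate_locations.items())
```

This reproduces the example in the text: with a first plate still in the imager, steps 700, 710, and 720 are disabled for a second plate until step 730 clears the imager (or an operator override marks it cleared).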
This may be particularly true in embodiments in which an orientation step is performed prior to loading a plate into the imager, and in which the imager does not disturb the original plate orientation. The system may be configured to rest in a neutral resting location when not actively moving a plate, and to return to that neutral resting location immediately after depositing a plate in an instructed location, absent a command to move to some other location. The neutral resting location may comprise any location that does not interfere with any of the other moving parts of the workflow. For example, the neutral resting location is ideally located so that the plate handler does not interfere with the range of travel of the moving parts of the UV exposure system and is beyond the range of travel of a plate being discharged from the imager. The resting location may be a variable location and may be anticipated by the controller to be a location closest to the next position the plate handler expects to be instructed to go, based on preprogrammed instructions or by machine learning. Finally, although depicted in the figures in a particular embodiment comprising various linear and rotational stages to provide the minimum number of degrees of freedom required for the exemplary configuration shown, it should be understood that other configurations for providing the desired movements may be devised using any combination of components known in the art, including combinations with more degrees of freedom. For example, instead of a system comprising linear and rotational stages as shown, having geometrically limited ranges of travel and only four degrees of freedom, a robotic arm having six degrees of freedom may be provided with suitable end effectors to provide the desired range of motion. Any system providing the range of functions required may be suitable for use as a plate handling system.
Although described herein in some embodiments as pulling (lifting from a leading edge of the plate in motion) or pushing (lifting from the trailing edge of the plate in motion) the plate, it should be understood that pulling may be substituted for pushing, or vice versa, for some steps (e.g. step 740). Other steps (e.g. step 720, in the configuration as depicted with the carriage 518 cantilevered from the side of the staging area opposite the imager input) may require a specific type of movement (pushing). Furthermore, while the relative locations described herein for the imager, the exposure unit, and the plate reservoir are preferred relative locations, it should be understood that the invention is not limited to any particular configuration. The plate reservoir as described herein may be a location where individual plates are deposited by the human operator for pick up by the system, or may be a reservoir configured to hold multiple plates in a stack. In one exemplary stacked system, as described in U.S. Pat. No. 6,981,447, titled “METHOD AND APPARATUS FOR LOADING AND UNLOADING FLEXOGRAPHIC PLATES FOR COMPUTER-TO-PLATE IMAGING,” incorporated herein by reference, the system comprises a support surface that is movable in the Z direction and configured to position the next plate in the stack on the same plane as the rest of the stations in the process workflow. Such functionality minimizes the overall range of travel required in the Z direction for the Z translation component of the plate handler. It should also be noted that some workflows may not use the plate handler for performing step 700 at all, and may rely on the human operator to place the plate on the staging area.

Mobile Preparation Table or Carriage

In flexographic plate production an operator typically feeds polymer plates to the plate processing system. The plate processing system usually starts with the plate imager, followed by a UV-curing unit and a plate processor to remove non-cured polymer.
Whenever the imager has completed imaging a plate, the plate is typically moved to the next process step (such as with the transport system described elsewhere herein) and the operator supplies the next plate to be imaged. While the plate transportation system as described herein is configured to automatically move the plate from the imager to the UV-curing station, supplying the system with new ready-to-image plates is generally beyond the scope of cost-effective automation. Typically, a vast variety of different plate types and plate thicknesses are stored in a plate storage room physically separated from the plate processing room. The plate type to be imaged frequently changes from job to job, so a human operator typically is tasked with supplying the next plate from storage to the imager. The time the imager waits for the next plate reduces the overall efficiency of the equipment, and therefore, having the next plate ready exactly when the imager has delivered the prior imaged plate to the UV curing unit and is ready to image the next plate can provide optimization advantages. Prioritizing this task may reduce flexibility of the operator to perform other tasks. Sometimes operators mistakenly deliver a wrong plate to the imager. If this mistake is not recognized until the plate is already on the printing press, it reduces efficiency and adds cost. Often, such mistakes are recognized in a Quality Assurance (QA) step before moving the fully-processed plate to the press, causing additional costs in the nature of plate waste and wasted production capacity. Thus, improvements to plate supplying systems and methods provide human operators more flexibility for other tasks, minimize idle time of the processing equipment, and reduce the likelihood of operator mistakes.
One way to provide such improvements is to integrate the transport table into the plate workflow, such that the table becomes a component of the plate processing system consisting of an imager, a UV exposure unit, and a plate washing unit. Embodiments of the plate handling system as described herein may therefore benefit from interaction with a plate loading table or carriage configured for transport of flexographic polymer printing plates and for connection mechanically and electrically to the system. An exemplary mobile preparation table or carriage 1000 (“platform,” “staging area,” “table,” and “carriage” may be used interchangeably herein) for transporting printing plates to and from an interface with plate processing equipment is depicted in FIGS. 7A-7B. Various exemplary features are also depicted in FIGS. 8-10. An exemplary interface with plate processing equipment is depicted in FIG. 1C. Carriage 1000 comprises a base 1002 having a frame and a plurality of wheels 1006a, 1006b, 1006c (fourth wheel not visible in the drawings) attached to the frame and configured to roll along a floor surface. In the embodiment depicted in FIGS. 7A and 7B, the frame includes two vertical risers 1005, each attached to a horizontal wheel frame 1007, with opposite wheel frames connected to one another by a cross-brace 1009. Bracket 1008 connects each vertical riser 1005a, 1005b to a corresponding wheel frame 1007a, 1007b, and brackets 1111 strengthen the connection between each wheel frame and cross-brace 1009. Planar preparation surface 1010 for receiving plate 1050 includes a top frame 1012 pivotally attached to the vertical risers 1005 of the base frame via hinge 1014 (e.g.
a barrel hinge mechanism), which is configured to facilitate pivoting of the planar preparation surface frame 1012 within a range of angles along arrow P between a first, horizontal position in which the plate preparation surface is parallel to the floor surface (depicted in solid lines in FIG. 7B) and a second maximum tilt position (depicted in dashed lines, with top portion cut off) in which the plate preparation surface is disposed at an acute angle relative to the first position. One or more physical connection interfaces 1020 are configured to secure the base mechanically in a fixed position relative to the plate processing equipment. Physical connection interfaces 1020 may comprise locks that affix the position of the table when docked to the imager loading area. The locks may operate mechanically, electrically, magnetically, or a combination thereof. Such connection interfaces may be present on one or both sides, but in particular, are present at least on any side expected to abut the processing equipment (such as on the right side of the carriage in FIG. 1C). When not docked to the plate processing system, the table is configured to transport flexographic printing plates between a plate storage room and the plate processing system. The table is configured to tilt from horizontal to an upright or nearly upright position to fit through narrow doors. Additional functions such as integrated plate size measurement, discussed further herein, and means for cutting may also be provided. In the embodiment depicted in FIG. 1C, a side portion 1060 of the carriage tabletop is configured to overlap the loading area of the imager on the side abutting the imager 110 to allow the automatic plate handler 500 to pick the plate and pull it completely onto the loading area of the imager. In another embodiment, the tabletop does not overlap and is in proximity to the edge of the loading table, with the height of the tabletop higher (e.g. a few millimeters) than the loading table.
While the carriage as described herein may be used with processing systems with or without plate handling systems as described herein, when used in conjunction with systems having an automatic plate handler as described above, the processing systems are configured so the plate handler can reach the edge of a plate positioned in a ready position on the carriage. For example, whereas the layout in FIG. 1B shows the plate handler 500 running on a rail 580 that extends the length of fixed table 212, the layout in FIG. 1C, for use with a moveable carriage 1000, may have a rail that ends at or adjacent the edge of the imager. In another embodiment, the interface between the table and the processing system may be oriented so that a freestanding portion of the rail extends to provide a reach of the plate handler across the full range of the carriage, with the geometry of the carriage and its physical connections configured so that the carriage can be wheeled into a fixed position relative to the rail and the imager. Carriage 1000 is configured to be moved to the docked location along dashed arrow D to a docked position (dashed lines), wherein it is physically held in place by mechanical connections 1020 (disposed on the opposite side of the base than the side depicted in FIG. 7B). One or more of the plurality of wheels comprises a stop mechanism 1016 for arresting rotation of the wheel. A first pair of the plurality of wheels (e.g. wheel 1006c and the wheel (not shown) attached in the corresponding position to wheel frame 1007b) may be fixed to the frame in an orientation in which each wheel is configured to rotate about a first common horizontal axis H1 parallel to the floor. A second pair of the plurality of wheels (e.g. wheels 1006a, 1006b) may be pivotally attached to the frame in orientations in which each wheel is configured to rotate about an independent horizontal axis H2, H3 parallel to the floor and free to pivot about a second axis V1, V2 perpendicular to the floor.
The table as depicted in FIGS. 7A and 7B includes a mechanism for moving the planar preparation surface between the horizontal position and the maximum tilt position. A first mechanical stop 1022 is positioned to restrict pivoting of the planar preparation surface beyond the range of angles when the top is in the horizontal position, and a second mechanical stop 1024 is configured to restrict pivoting of the planar preparation surface beyond the range of angles when the top is in the maximum tilt position. Each stop may comprise a member attached to the preparation surface frame 1012, to the base 1002, or a combination thereof. The stop preferably comprises a robust, cushioned member (e.g. real or synthetic rubber). A spring-damper member (e.g. a gas spring) 1026 has a first end 1027 connected to the base frame and a second end 1028 connected to the planar preparation surface frame. A handle 1030 connected to the planar preparation surface frame 1012 adjacent a front edge 1031 of the surface 1010 is configured to permit a human user to manipulate the planar preparation surface between the horizontal position and tilted positions without a need for the user to contact the frame 1012. An actuator (knob 1032) has a first configuration (e.g. pushed in) for retaining the planar preparation surface from pivoting and a second configuration (e.g. pulled out) for releasing the planar preparation surface to permit pivoting. In the exemplary carriage depicted in the figures, the planar preparation surface frame has a rectangular shape with a relatively longer length dimension L than width dimension W, and has a size to accommodate the largest plate the system is configured to process. The front edge 1031 of the planar preparation surface frame has a length that defines the length dimension. The handle 1030 comprises a parallel member connected to, spaced laterally from, and centered relative to edge 1031, and has a length at least half the length of the first edge.
In one embodiment, actuator 1032 has an actuated position and a resting position, and is connected to a first end of a Bowden cable (not shown) connected to a valve (not shown) in gas spring 1026. With the actuator in the actuated position, the gas valve opens, allowing gas to enter or exit the chamber of the gas spring in accordance with movement of the table. In the resting position, the valve is closed, stabilizing the position of the gas spring and the table in the corresponding position. The general mechanical functions of gas springs, valves thereon, and Bowden cables are well known to those of skill in the art, and are therefore not discussed or illustrated in more detail herein. The carriage may comprise a sensor system including sensors for detecting one or more of presence, actual and/or intended alignment, dimensions (including thickness), and weight of the plate on the table. The table may be equipped with a microcontroller configured to process information from the sensors, calculate measurements, and communicate with connected components, including providing viewable information on the display and exchanging information with the processing system. A battery (such as a lithium ion or other rechargeable battery) may be provided for providing power to the microcontroller and the other electronic components of the carriage. The rechargeable battery may be configured for charging by a wired or wireless connection, such as with electrical power connections established when the table is docked to processing equipment. In some embodiments, the carriage may be configured as a light-table, thereby permitting a human operator to check images on the plate and the overall plate quality. In such embodiments, the tabletop is made of a transparent material, such as glass or a synthetic organic resin, such as a Plexiglas® sheet. The rear side of the plate, or at least a portion thereof, is illuminated by a light source (e.g.
an OLED or an LED matrix 1083, shown schematically in FIG. 8). The light source may cover the complete table surface or only a portion of the table surface. As depicted in FIG. 8, the table preferably includes at least one detection sensor 1085, 1086, 1087 configured to detect a plate positioned on the preparation surface and configured to provide an electrical signal output indicative of such detection. A plurality of alignment guides include at least one alignment guide 1088a, 1088b adjacent a front edge 1031 of the frame, and at least one alignment guide 1089 adjacent a second dimension of the frame perpendicular to the first dimension. Each alignment guide 1088a, 1088b, 1089 may have an extended projection relative to the planar preparation surface in a position configured to align with an edge of a printing plate disposed on the planar preparation surface. This extended projection of front alignment guides 1088a, 1088b may have a height above the preparation surface of at least a thickness of an expected printing plate, but in any event has a height sufficient to help keep the plate from slipping downward off the preparation surface when the table is in a maximum tilt position. The plate dimensions may be measured automatically using sensors 1085, 1086, 1087. Sensors 1085, 1086, 1087 may comprise arrays of photo detectors, such as solar cells or photodiodes. Sensor arrays 1085, 1086 are arranged in perpendicular directions (1085 in the width direction, and 1086 in the length direction). When disposed on the table, plate 1050 covers portions of the photodetector arrays, and the lack of ambient light received by the covered photo detectors relative to light received by the uncovered detectors provides information used by the processor to calculate the plate length and width dimensions. Accordingly, aligning the plate parallel to the photo detector array respective to the table edge may be important for obtaining a correct measurement.
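The dimension measurement described above — a plate shadowing part of a photodetector array, with the run of darkened detectors giving the plate's extent — can be sketched as below. This is an illustrative assumption-laden sketch: the detector pitch and the darkness threshold are invented values, not figures from the text.

```python
PITCH_MM = 5.0        # assumed spacing between adjacent photodetectors
DARK_THRESHOLD = 0.2  # assumed normalized light level marking a covered cell

def covered_extent_mm(readings, pitch_mm=PITCH_MM, threshold=DARK_THRESHOLD):
    """Estimate plate extent along one detector array.

    `readings` are normalized ambient-light levels, one per detector;
    cells under the plate read dark, uncovered cells read bright.
    """
    covered_cells = sum(1 for r in readings if r < threshold)
    return covered_cells * pitch_mm

# A plate aligned to the array's first cell covers the first four detectors:
width_mm = covered_extent_mm([0.05, 0.04, 0.06, 0.05, 0.90, 0.95, 0.92])
```

As the text notes, this only yields a correct dimension when the plate edge is parallel to the array; a skewed plate darkens a longer run of cells than its true width.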
Indicia on the table may include a stripe defining a line for aligning to one edge of the plate, or the alignment guides 1088a, 1088b, 1089 may provide correct alignment if the respective edges of the plate are in contact with the respective portions of those guides that project above the plate preparation surface. A third photo detector array 1087 in combination with photo detector arrays 1085, 1086 may be used for checking alignment parallel to the table edge. In another embodiment, a single photo element may be provided to detect that one edge of the plate is positioned properly in the corner of the table surface, while the detector arrays check that the plate's edge is positioned as close to the table edge as the plate corner. In still another embodiment, mechanical sensors (e.g. alignment guides 1088a and 1088b having a contact sensor) may detect the presence of a correctly aligned plate. The sensors as described above are connected to a processor 1084 for calculating characteristics of the plate based upon measurement signals from the sensors. The table is preferably configured with a communication port for data exchange with the processing system. This communication port is preferably wireless, but may be a wired connection that is engaged at the same time as the physical connection interface 1020 that affixes the carriage to the imager. One or more communication interfaces 1081 are configured to exchange information between the table and the plate processing equipment. One or more of the physical connection interfaces 1020 may also include an electrical connection to provide charging power to the table, which may have an onboard rechargeable power source 1082, such as a battery. Communication interface 1081 and power source 1082 are depicted in dashed lines in FIG. 8, indicating that they are located somewhere on the carriage, but not necessarily in any particular position or in a position visible to users of the table.
When in communication with the processing system, the communication interface 1081 is configured to provide information to the system regarding the plate disposed thereon. In one embodiment, the table provides information about the next plate ready for processing and positions that plate for processing when the loading table is docked to the imager, thus providing the operator who brings the plate to the processing system a wider time window to place the plate next to the processing system. Without interaction between table and processing system, the operator typically has to watch for the exact moment when the system is ready to load the next plate and supply the plate at this exact moment in order to keep delay times short. Exemplary information exchanged from the carriage to the processing system may include, inter alia, plate type, dimensions (length, width, thickness), weight, and alignment of the plate (e.g. relative or absolute coordinate information) on the tabletop. Information exchanged from the processing system to the table may include required plate type and dimensions. A display may provide information for the operator regarding the next plate to pick from storage for delivery to the processing system. In one embodiment, display 1080, such as an LED, LCD or TFT display, is placed beneath a transparent surface of the table top. In another embodiment (not shown), the display may be located on the operator handling side of the table adjacent the operator handle. This display may provide information to the user including dimensions (including thickness) of the plate on the table, specifications for the next plate to be supplied for imaging, time remaining until the imager needs the next plate, and whether the plate detected on the table corresponds to the plate expected next in the process. Communication to the processing system may be established by any wireless or wired technology known in the art.
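The two-way data exchange described above — plate type, dimensions, weight, and alignment going from the carriage to the system, and the required plate type and dimensions coming back — can be sketched as a pair of records plus a match check (the basis for the display's "is this the expected plate?" indication). All field names, tolerances, and the `matches` helper are assumptions for illustration, not part of the disclosed protocol.

```python
from dataclasses import dataclass

@dataclass
class PlateInfo:              # carriage -> processing system (assumed fields)
    plate_type: str
    length_mm: float
    width_mm: float
    thickness_mm: float
    weight_kg: float
    alignment: tuple          # (x offset, y offset, angle) on the tabletop

@dataclass
class PlateRequest:           # processing system -> carriage (assumed fields)
    plate_type: str
    length_mm: float
    width_mm: float
    thickness_mm: float

def matches(info: PlateInfo, req: PlateRequest, tol_mm: float = 1.0) -> bool:
    """Check whether the plate on the table is the plate the imager expects."""
    return (info.plate_type == req.plate_type
            and abs(info.length_mm - req.length_mm) <= tol_mm
            and abs(info.width_mm - req.width_mm) <= tol_mm
            and abs(info.thickness_mm - req.thickness_mm) <= 0.05)
```

A mismatch caught here, before imaging, avoids the wrong-plate waste the text describes being discovered only at the QA step.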
For example, communication may include wireless communication over a network, such as over a wireless local area network (WLAN), such as a WiFi® network, or over a short-range wireless connection, such as a Bluetooth® network. In other embodiments, wireless communications may be transmitted via infrared radiation, such as are commonly used for remote controls for audiovisual equipment. In embodiments in which the topography of the wireless network permits, the carriage may have a continuous data link to the processing system. In other embodiments, including wired connections, the data link may be only temporary, such as when the table is docked to the processing system. Regardless of communication protocol, the data exchanged preferably contains information about the plate disposed on the table and the next plate to be delivered from plate storage. In the embodiment depicted in FIG. 8, the carriage is configured to automatically determine a size of the plate disposed thereon. In another embodiment, the plate dimensions may be measured by the operator relative to indicia adjacent the table top (e.g. rulers) (not shown) and manually entered using a user interface (such as a keyboard or mobile device) connected to the processor.

Thickness Measurement Systems

The preparation table may also be configured to determine the thickness of the polymer plate and to communicate this information to the processing units and to the operator. In one embodiment, the thickness can be measured by a standard triangulation sensor 1090 located above the tabletop pointing towards the polymer plate, as illustrated in FIG. 9A. Sensor 1090 may be any type of non-contact sensor, such as a time of flight sensor that transmits a beam 1091 (or pulses) of radiation toward surface 1092 and determines distances based upon time for reflected radiation to reach the sensor. Thus, as depicted in FIG. 9A, a first distance is measured between the sensor and the surface 1092 in a first reading.
A second distance is measured between the sensor and a plate 1093 in a second reading. The difference in distance between the first and second readings corresponds to the thickness of the plate. The thickness measurement system may be disposed on a positioner to move the sensor between the first and second readings, or sensor 1090 may be located in a place where the plate moves underneath the sensor relative to the surface. In one embodiment, sensor 1090 is mounted on the plate handling system 500. In another, the thickness measurement system may be disposed in the plate storage room where the plate thickness is confirmed after loading on the table. In still another embodiment, the thickness measurement system may be disposed in a fixed location along the path of the plate, such as inside the imager in the plate loading path. Thus, such a system can be characterized as a system for measuring thickness of a printing plate relative to a surface 1092 for receiving a printing plate 1093, comprising a non-contact distance measurement sensor 1090 configured to output a measurement signal indicative of distance along an axis perpendicular to the surface. In an embodiment where the sensor is moveable, the system further comprises a sensor positioner (e.g. plate handler 500) disposed above the surface and controllable in one or more directions parallel to the surface, and a processor 1084 configured to control the positioner and to receive a measurement signal from the sensor. In some embodiments, the processor may be programmed with instructions for: (a) receiving information defining an expected location of the plate; (b) moving the positioner to a first position disposed above a point on the surface not above the expected location of the plate (e.g.
left position of sensor 1090 depicted in FIG. 9A); (c) obtaining a reference distance measurement signal from the sensor for use as a reference distance between the sensor and the surface; (d) moving the positioner to a second position disposed above a point on the surface above the expected location of the printing plate (e.g. right position of sensor 1090 depicted in FIG. 9A); and (e) obtaining a second distance measurement signal from the sensor, and processing the second distance measurement signal and the reference distance measurement signal to obtain the measured thickness of the printing plate. In another thickness measurement embodiment, depicted in FIG. 10, the triangulation sensor may be located below the transparent table top. Light source 110 sends a light beam 1102 through the transparent portion of the table top (e.g. glass plate) 1104, the dimensionally stable layer 1106, and the polymer layer 1108 towards the laser-ablatable mask 1110 of the polymer plate 1112. Each transition of the beam from one medium to the next causes a light reflection (1120, 1121, 1122). The reflected light rays hit a lateral sensor 1130, such as an array of photo detectors. While reflection 1120 from the transition between the table substrate 1104 and the dimensionally stable layer 1106, and reflection 1121 from the transition between the dimensionally stable layer 1106 and the photopolymer 1108, both hit fixed positions on the sensor 1130 (assuming a standard dimensionally stable layer thickness), reflection 1122 from the transition between photopolymer 1108 and the laser-ablatable mask 1110 changes its position depending on the thickness of the polymer layer, thus indicating the thickness of the plate. In the alternative, the distance between reflections 1120 and 1122 may be used to measure the overall thickness of the combination of the dimensionally stable layer and the photopolymer.
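The two-reading procedure in steps (a)-(e) above reduces to a single subtraction: the reference distance to the bare surface minus the distance to the top of the plate. A minimal sketch, with the function name and example readings as assumptions:

```python
def plate_thickness_mm(reference_mm, plate_mm):
    """Thickness from two sensor-to-target distances along the same axis.

    The sensor sits above the surface, so the reading over the plate
    (to its top surface) must be shorter than the bare-surface reference.
    """
    thickness = reference_mm - plate_mm
    if thickness <= 0:
        raise ValueError("plate reading must be shorter than the reference")
    return thickness

# e.g. 50.00 mm to the bare surface and 47.16 mm to the top of the plate
# yields 2.84 mm, one of the standard gauges mentioned in the text.
t = plate_thickness_mm(50.00, 47.16)
```

The sanity check on the sign catches a swapped pair of readings, or a second reading taken over a point where no plate was actually present.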
Although the dimensionally stable layer and polymer have different indices of refraction that may cause inaccuracies, the sensor has sufficient performance to distinguish between plate standard gauges (e.g. 1.14 mm, 1.7 mm, 2.84 mm, 3.19 mm), which are typically different enough to absorb a relatively high level of imprecision. In another embodiment the table may have one or more integrated scales 1095 disposed between the table frame and the table top that measure the additional weight of the photopolymer plate. This weight, in combination with the plate length and width dimensions, may be used for calculating the plate thickness. The thickness measurement systems as disclosed herein are not limited for use in connection with the plate preparation table or a plate handling system having the details as discussed herein, and may be used in any system for measuring thickness relative to a surface on which a plate is disposed or location of a surface of a component relative to the distance sensor. For example, a thickness sensor such as the system shown in FIG. 9A may be incorporated into the focusing head of the imager, such as for detecting plate thickness and the open position of the clamp on the drum imager for securing the plate. As schematically shown in FIG. 11, a drum imager includes a drum 1200 with an exemplary clamping bar 1202 that has an open position (as shown in FIG. 11) for receiving the plate, and a closed position (not shown). The bar is typically moved by compressed air, wherein the open position is controlled by the opening time of an air valve (not shown). Distance sensor 1090, positioned, e.g., on the focusing head of the imager relative to the plate 1204, may measure the height of the top surface of the plate relative to the sensor location, which is an indication of plate thickness. The measured thickness may then be used for determining how far the clamping bar should be opened.
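The scale-based estimate mentioned above — weight combined with measured length and width implies a thickness — follows from mass = density × volume. The density constant and the gauge-snapping helper below are assumptions for illustration; a real system would calibrate per plate type, relying on the text's point that standard gauges are far enough apart to absorb imprecision.

```python
ASSUMED_DENSITY_G_PER_CM3 = 1.1  # rough photopolymer density; an assumption

def thickness_from_weight_cm(weight_g, length_cm, width_cm,
                             density=ASSUMED_DENSITY_G_PER_CM3):
    """Estimate thickness (cm) as volume (mass / density) over plate area."""
    volume_cm3 = weight_g / density
    return volume_cm3 / (length_cm * width_cm)

def nearest_gauge_mm(estimate_mm, gauges=(1.14, 1.7, 2.84, 3.19)):
    """Snap a rough estimate to the closest standard gauge from the text."""
    return min(gauges, key=lambda g: abs(g - estimate_mm))
```

For example, a 100 cm × 80 cm plate weighing about 1.5 kg estimates to roughly 1.7 mm, which snaps cleanly to the 1.7 mm gauge.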
The distance sensor may also be used for calibrating the valve that controls the compressed air for opening the clamping bar, by measuring the height of the top surface of the clamping bar 1202 versus open time of the valve, to create a suitable curve or lookup table. While FIG. 11 depicts the drum in a position with the clamping bar 1202 located beneath sensor 1090, it should be understood that with the drum rotated to a different position, the plate may be located beneath the sensor, without the sensor needing to be moveable in the direction of plate feeding to read plate thickness. It should also be understood that the arrangement and clamp configuration depicted in FIG. 11 is exemplary only, and that thickness sensing may be used in conjunction with any number of different designs. Additionally, systems similar to those depicted in FIGS. 9A and 9B may have uses in other ways relevant to the plate handling system. For example, a thickness measurement system mounted to the plate handler 500 (or on a different positioner) may be used for detecting location of the plate, as well as thickness, in any of the plate locations as described herein.
Such a system may detect an actual position and angle of the plate on a surface, if the processor is programmed with instructions for: (f) detecting a change in signal received from the sensor while the positioner is in motion from the first position to the second position indicative of a location of a first edge of the plate; (g) detecting a change in signal received from the sensor while the positioner is in motion between at least two other pairs of points, each pair including one point above the expected location of the printing plate and one point not above the expected location of the printing plate, and detecting respective changes in signals received from the sensor while the positioner is in motion between each pair of points, each change indicative of locations of second and third edges of the plate; and (h) processing the signals indicative of the locations of the first, second, and third edges of the plate to obtain a detected actual position and angle of the plate on the surface. In one embodiment, the sensor may be a laser triangulation sensor and the positioner may comprise a 3-axis robotic arm. The measurement signal may be an analog signal or a digital signal. A thickness measurement system as described herein may also be used in conjunction with plate coding systems for embedding non-printing indicia on a floor of the plate using areas of presence and absence of polymer in the plate floor, such as is described in, inter alia, U.S. application Ser. No. 16/559,702, titled SYSTEM AND PROCESS FOR PERSISTENT MARKING OF FLEXO PLATES AND PLATES MARKED THEREWITH, and applications related thereto, incorporated herein by reference.
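The geometry behind steps (f)-(h) above is simple: two crossing points detected on the same plate edge determine that edge's rotation, and a corner point anchors the position. A minimal sketch, with the helper names and the point format as assumptions:

```python
import math

def edge_angle_deg(p1, p2):
    """Rotation of an edge from two detected crossing points (x, y)."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def plate_pose(corner_xy, edge_points):
    """Pose of the plate: a corner position plus the first edge's rotation.

    `edge_points` is a pair of (x, y) points where the sensor signal
    changed while crossing the same edge, per steps (f)-(g).
    """
    p1, p2 = edge_points
    return corner_xy, edge_angle_deg(p1, p2)

# Edge crossings detected at (0, 0) and (100, 5) imply the plate sits
# rotated by atan2(5, 100), roughly 2.86 degrees, on the surface.
pose = plate_pose((0.0, 0.0), ((0.0, 0.0), (100.0, 5.0)))
```

Using `atan2` rather than a plain slope keeps the angle correct for edges that run nearly parallel to either axis.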
In implementations in which the plate includes information encoded as differences in plate thickness along a predetermined path as depicted in FIG. 9B, the system may be further programmed with instructions for moving the sensor 1090 along the predetermined path while receiving measurement signals, and processing the measurement signals so received to read the encoded information. Thus, for a path in which sensor 1090 takes a reading at point A, point B, and point C as depicted in FIG. 9B, the difference between the distance D1 at point A and distance D2 at point B is indicative of the presence or absence of the plate (and transitions in signal from D1 to D2 may be used for determining location of plate edges). The difference between the distance D2 at point B and D3 at point C may be indicative of a thickness difference comprising non-printing indicia (where D3 is a distance above or below the floor of the plate that does not correspond to a distance that results in a printed feature). Of course, the thickness measurement system may also be used for detecting a distance associated with the floor of the plate. Although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the invention.
11860544

DETAILED DESCRIPTION

The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. In addition, the term “being made of” may mean either “comprising” or “consisting of.” In the present disclosure, a phrase “one of A, B and C” means “A, B and/or C” (A, B, C, A and B, A and C, B and C, or A, B and C), and does not mean one element from A, one element from B and one element from C, unless otherwise described.
The present disclosure is generally related to extreme ultraviolet (EUV) lithography systems and methods. More particularly, it is related to apparatuses and methods for improved target control to obtain increased EUV energy by controlling an excitation laser used in a laser produced plasma (LPP)-based EUV radiation source. The excitation laser heats metal (e.g., tin) target droplets in the LPP chamber to ionize the droplets to a plasma which emits the EUV radiation. For increased EUV energy, a majority of the excitation laser has to be incident on the target droplets to improve EUV output and conversion efficiency. Thus, the shape of the excitation laser, the angle of incidence of the excitation laser, and the profile of the laser beam have to be considered in order to obtain increased EUV energy. Existing methods consider the relative position between the target droplets and the excitation laser without considering the shape of the excitation laser, change in the angle of incidence of the excitation laser (also referred to as pointing error), and the profile of the laser beam. Thus, EUV energy drops due to these issues are not detected. Since these issues cannot be detected, the excitation laser control system and/or the droplet generator cannot be controlled to address these issues and thereby compensate for the reduced EUV energy. Embodiments of the present disclosure are directed to controlling the relative position (e.g., direction of travel) of the excitation laser and the position of the target droplet based on the angle of incidence of the laser beam on the target droplet and the profile of the laser beam. FIG.1is a schematic view of an EUV lithography system with a laser produced plasma (LPP)-based EUV radiation source, in accordance with some embodiments of the present disclosure. The EUV lithography system includes an EUV radiation source100to generate EUV radiation, an exposure tool200, such as a scanner, and an excitation laser source300. 
As shown inFIG.1, in some embodiments, the EUV radiation source100and the exposure tool200are installed on a main floor MF of a clean room, while the excitation laser source300is installed in a base floor BF located under the main floor. Each of the EUV radiation source100and the exposure tool200is placed over pedestal plates PP1and PP2via dampers DMP1and DMP2, respectively. The EUV radiation source100and the exposure tool200are coupled to each other by a coupling mechanism, which may include a focusing unit. The lithography system is an extreme ultraviolet (EUV) lithography system designed to expose a resist layer by EUV light (also interchangeably referred to herein as EUV radiation). The resist layer is a material sensitive to the EUV light. The EUV lithography system employs the EUV radiation source100to generate EUV light, such as EUV light having a wavelength ranging between about 1 nm and about 100 nm. In an example, the EUV radiation source100generates an EUV light with a wavelength centered at about 13.5 nm. In the present embodiment, the EUV radiation source100utilizes a mechanism of laser-produced plasma (LPP) to generate the EUV radiation. The exposure tool200includes various reflective optical components, such as convex/concave/flat mirrors, a mask holding mechanism including a mask stage, and a wafer holding mechanism. The EUV radiation generated by the EUV radiation source100is guided by the reflective optical components onto a mask secured on the mask stage. In some embodiments, the mask stage includes an electrostatic chuck (e-chuck) to secure the mask. Because gas molecules absorb EUV light, the lithography system for the EUV lithography patterning is maintained in a vacuum or low-pressure environment to avoid EUV intensity loss. In the present disclosure, the terms mask, photomask, and reticle are used interchangeably. In the present embodiment, the mask is a reflective mask. 
In an embodiment, the mask includes a substrate with a suitable material, such as a low thermal expansion material or fused quartz. In various examples, the material includes TiO2doped SiO2, or other suitable materials with low thermal expansion. The mask includes multiple reflective layers (ML) deposited on the substrate. The ML includes a plurality of film pairs, such as molybdenum-silicon (Mo/Si) film pairs (e.g., a layer of molybdenum above or below a layer of silicon in each film pair). Alternatively, the ML may include molybdenum-beryllium (Mo/Be) film pairs, or other suitable materials that are configurable to highly reflect the EUV light. The mask may further include a capping layer, such as ruthenium (Ru), disposed on the ML for protection. The mask further includes an absorption layer, such as a tantalum boron nitride (TaBN) layer, deposited over the ML. The absorption layer is patterned to define a layer of an integrated circuit (IC). Alternatively, another reflective layer may be deposited over the ML and is patterned to define a layer of an integrated circuit, thereby forming an EUV phase shift mask. The exposure tool200includes a projection optics module for imaging the pattern of the mask on to a semiconductor substrate with a resist coated thereon secured on a substrate stage of the exposure tool200. The projection optics module generally includes reflective optics. The EUV radiation (EUV light) directed from the mask, carrying the image of the pattern defined on the mask, is collected by the projection optics module, thereby forming an image on the resist. In various embodiments of the present disclosure, the semiconductor substrate is a semiconductor wafer, such as a silicon wafer or other type of wafer to be patterned. The semiconductor substrate is coated with a resist layer sensitive to the EUV light in presently disclosed embodiments. 
Various components including those described above are integrated together and are operable to perform lithography exposing processes. The lithography system may further include other modules or be integrated with (or be coupled with) other modules. As shown inFIG.1, the EUV radiation source100includes a target droplet generator115and a laser produced plasma (LPP) collector110, enclosed by a chamber105. The target droplet generator115generates a plurality of target droplets DP, which are supplied into the chamber105through a nozzle117. In some embodiments, the target droplets DP are tin (Sn), lithium (Li), or an alloy of Sn and Li. In some embodiments, the target droplets DP each have a diameter in a range from about 10 microns (μm) to about 100 μm. For example, in an embodiment, the target droplets DP are tin droplets, each having a diameter of about 10 μm, about 25 μm, about 50 μm, or any diameter between these values. In some embodiments, the target droplets DP are supplied through the nozzle117at a rate in a range from about 50 droplets per second (i.e., an ejection-frequency of about 50 Hz) to about 50,000 droplets per second (i.e., an ejection-frequency of about 50 kHz). For example, in an embodiment, target droplets DP are supplied at an ejection-frequency of about 50 Hz, about 100 Hz, about 500 Hz, about 1 kHz, about 10 kHz, about 25 kHz, about 50 kHz, or any ejection-frequency between these frequencies. The target droplets DP are ejected through the nozzle117and into a zone of excitation ZE at a speed in a range from about 10 meters per second (m/s) to about 100 m/s in various embodiments. For example, in an embodiment, the target droplets DP have a speed of about 10 m/s, about 25 m/s, about 50 m/s, about 75 m/s, about 100 m/s, or at any speed between these speeds. The remnants (residue) after the interaction of the target droplets DP with the excitation laser LR2are collected in a tin catcher TC located below the target droplet generator115. 
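For illustration only — not part of the disclosed apparatus — the spacing between successive target droplets follows directly from the droplet speed and the ejection-frequency quoted above. The helper below is a minimal sketch; the function name and units are our assumptions:

```python
def droplet_spacing_m(speed_m_s: float, ejection_freq_hz: float) -> float:
    """Distance between successive target droplets DP: one droplet is ejected
    every 1/f seconds while travelling at the given speed."""
    return speed_m_s / ejection_freq_hz

# With values from the ranges above: 50 m/s at an ejection-frequency of
# 50 kHz gives one droplet every 1 mm.
spacing = droplet_spacing_m(50.0, 50_000.0)
```

Such a spacing estimate is useful when reasoning about why the pulse-frequency of the excitation laser must be matched to the ejection-frequency.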
The excitation laser LR2generated by the excitation laser source300is a pulse laser. The laser pulses LR2are generated by the excitation laser source300. The excitation laser source300includes a laser generator310, laser guide optics320, and a focusing apparatus330. In some embodiments, the laser source310includes a carbon dioxide (CO2) or a neodymium-doped yttrium aluminum garnet (Nd:YAG) laser source with a wavelength in the infrared region of the electromagnetic spectrum. For example, the laser source310has a wavelength of 9.4 μm or 10.6 μm, in an embodiment. The laser light LR1generated by the laser generator310is guided by the laser guide optics320and focused into the excitation laser LR2by the focusing apparatus330, and then introduced into the EUV radiation source100. In some embodiments, the excitation laser LR2includes a pre-heat laser and a main laser. In such embodiments, the pre-heat laser pulse (interchangeably referred to herein as the “pre-pulse”) is used to heat (or pre-heat) a given target droplet to create a low-density target plume with multiple smaller droplets, which is subsequently heated (or reheated) by a pulse from the main laser, generating increased emission of EUV light. In various embodiments, the pre-heat laser pulses have a spot size about 100 μm or less, and the main laser pulses have a spot size in a range of about 150 μm to about 300 μm. In some embodiments, the pre-heat laser and the main laser pulses have a pulse-duration in the range from about 10 ns to about 50 ns, and a pulse-frequency in the range from about 1 kHz to about 100 kHz. In various embodiments, the pre-heat laser and the main laser have an average power in the range from about 1 kilowatt (kW) to about 50 kW. The pulse-frequency of the excitation laser LR2is matched with the ejection-frequency of the target droplets DP in an embodiment. The laser light LR2is directed through windows (or lenses) into the zone of excitation ZE. 
The windows adopt a suitable material substantially transparent to the laser beams. The generation of the laser pulses is synchronized with the ejection of the target droplets DP through the nozzle117. As the target droplets move through the excitation zone, the pre-pulses heat the target droplets and transform them into low-density target plumes. A delay between the pre-pulse and the main pulse is controlled to allow the target plume to form and to expand to an optimal size and geometry. In various embodiments, the pre-pulse and the main pulse have the same pulse-duration and peak power. When the main pulse heats the target plume, a high-temperature plasma is generated. The plasma emits EUV radiation EUV, which is collected by the collector mirror110. The collector110further reflects and focuses the EUV radiation for the lithography exposing processes performed through the exposure tool200. FIG.2Aschematically illustrates the movement of target droplet DP relative to the collector110after being irradiated by the pre-pulse PP. A target droplet DP is sequentially irradiated by the pre-pulse PP and the main pulse MP. When the target droplet DP travels along X-axis in a direction “A” from the droplet generator DG to the zone of excitation ZE, the pre-pulse PP exposing the target droplet DP causes the target droplet DP to change its shape into, for example, a pancake and introduce a Z-axis component to its direction of travel in the X-Z plane. The laser-produced plasma (LPP) generated by irradiating the target droplet DP with the laser beams PP, MP presents certain timing and control problems. The laser beams PP, MP must be timed so as to intersect the target droplet DP when it passes through the targeted point. The laser beams PP, MP must be focused on each of their focus positions, respectively, where the target droplet DP will pass. 
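The timing constraint described above — the laser pulses must intersect the droplet as it passes through the targeted point, with a controlled delay between pre-pulse and main pulse — can be sketched numerically. This is an illustration under a constant-droplet-speed assumption; the function and parameter names are ours, not the disclosure's:

```python
def pulse_arrival_times(distance_to_ze_m: float,
                        droplet_speed_m_s: float,
                        mp_pp_delay_s: float):
    """Times, relative to droplet ejection, at which the pre-pulse PP and the
    main pulse MP should intersect the droplet / target plume, assuming the
    droplet travels at constant speed to the zone of excitation ZE."""
    t_pp = distance_to_ze_m / droplet_speed_m_s   # pre-pulse meets the droplet
    t_mp = t_pp + mp_pp_delay_s                   # main pulse after the delay
    return t_pp, t_mp
```

The delay argument corresponds to the time allowed for the target plume to expand to an optimal size and geometry before the main pulse arrives.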
The position of the zone of excitation ZE and parameters such as, for example, laser power, time delay between the main pulse and the pre-pulse, focal point of the pre-pulse and/or main pulse, may be determined when an EUV radiation source100is set up. The actual position of the zone of excitation ZE and the aforementioned parameters are then adjusted during wafer exposure using a feedback mechanism in various embodiments. However, these parameters can change over time due to various factors such as, for example, separation between the main pulse MP and the pre-pulse PP, shape of the excitation laser, the profile of the laser beam, mechanical and/or electrical drift in the radiation source, instability of the droplet generator, and changes in the chamber environment. FIG.2Billustrates an exemplary optical metrology for misalignment in the x-axis OMX. OMX is defined by a distance in the x-axis between a droplet and the focal point of the pre-pulse PP. Similarly,FIG.2Cillustrates an exemplary optical metrology for misalignment in the y-axis OMY. OMY is defined by a distance in the y-axis between the droplet and the focal point of the pre-pulse PP.FIG.2Dfurther illustrates an exemplary optical metrology for misalignment in the z-axis OMZ. Similar to OMX and OMY, OMZ is defined by a distance in the z-axis between a droplet and the focal point of the pre-pulse PP.FIG.2Eillustrates an exemplary optical metrology for misalignment in radius OMR. The x-axis is in the direction of motion by the droplet from the droplet generator115. The z-axis is along the optical axis A1(FIG.1) of the collector mirror110. The y-axis is perpendicular to the x-axis and the z-axis. As shown inFIG.2F, the target droplet DP is ejected from a droplet generator115travelling in a direction toward a tin catcher TC. 
When such mechanical and/or electrical drift occurs in the radiation source, the pre-pulse laser PP causes the target droplet DP to expand in a direction with an angle with respect to a direction of incidence from the pre-pulse laser beam. This gives rise to a target droplet DP2which has expanded to form a football-like shape shown inFIG.2E. In such an embodiment, a spatial separation between the pre-pulse PP and the main-pulse MP, MPPP separation, is defined as a distance between the focal point of the pre-pulse PP and the focal point of the main-pulse MP, which is a 3-D vector with x, y, and z components. For example, as shown inFIG.2F, MPPPx is a distance along the x-axis of the MPPP separation and MPPPz is a distance along the z-axis of the MPPP separation. FIG.3schematically illustrates the laser guide optics320and the focusing apparatus330used in the EUV lithography system illustrated inFIG.1, in accordance with an embodiment. As illustrated, the laser guide optics320includes a forward beam diagnostic (FBD)302, a return beam diagnostic (RBD)304, and a plurality of mirrors M301, M303, M300, and M330. The forward beam diagnostic302and the return beam diagnostic304include a device such as a wavefront sensor for measuring the aberrations of an optical wavefront. Some non-limiting types of wavefront sensors include a Shack-Hartmann wavefront sensor, a phase-shifting Schlieren technique, a wavefront curvature sensor, a pyramid wavefront sensor, a common-path interferometer, a Foucault knife-edge test, a multilateral shearing interferometer, a Ronchi tester, and a shearing interferometer. The forward beam diagnostic302, the return beam diagnostic304, and mirrors M301and M303constitute a final focus metrology (FFM) module350. 
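Since the MPPP separation is a 3-D vector between the two focal points, its components and magnitude can be sketched as below. This is an illustration only; the function name and coordinate convention (x along droplet travel, z along the collector optical axis A1) follow the description above:

```python
import math

def mppp_separation(pp_focus, mp_focus):
    """MPPP separation: the 3-D vector from the pre-pulse PP focal point to
    the main-pulse MP focal point, plus its magnitude. The x and z components
    correspond to MPPPx and MPPPz in FIG.2F."""
    dx, dy, dz = (m - p for p, m in zip(pp_focus, mp_focus))
    return (dx, dy, dz), math.sqrt(dx * dx + dy * dy + dz * dz)
```

A drift in either focal point shows up directly as a change in this vector, which is one of the quantities the feedback described below seeks to stabilize.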
The signal from the final focus metrology (FFM) module350is used as a control signal and may be connected with an actuator to control one of the mirrors of the focusing apparatus330, such as for example, the mirror M150in the optical path before the laser hits the target droplets DP. In some embodiments, the mirror M150is the last mirror before the laser hits the target droplets DP. The mirror M150is a steerable mirror and is adjustable in three axes. The mirrors M301, M303, M300, and M330are arranged (or otherwise configured) to guide incident laser light in a desired direction. The focusing apparatus330includes a plurality of mirrors M310, M320, M130, M140, and M150. The mirrors M310, M320, M130, M140, and M150are arranged (or otherwise configured) to guide incident laser light in a desired direction. The focusing apparatus330also includes windows W10, W20, and W30. The windows W10and W20, and mirrors M310and M320are in an environment that is under atmospheric pressure conditions. The mirrors M130, M140, and M150are in vacuum. The window W10is located at an entry point into the focusing apparatus330and receives laser light from the laser guide optics320. The window W20is located between the atmospheric pressure environment and vacuum. The forward beam diagnostic302receives the laser light LR1generated by the laser generator310. The forward beam diagnostic302analyzes the laser light LR1generated by the laser generator310. Some of the laser light LR1is guided from the window W10of the focusing apparatus330onto the mirror M300. From the mirror M300, the laser light LR1is incident on the mirror M301. The laser light LR1is guided by the mirror M301to the forward beam diagnostic302. Thus, the forward beam diagnostic302receives the laser light LR1before the laser light LR1interacts with the target droplets DP. The forward beam diagnostic302analyzes the wavefront of the laser light LR1. 
As illustrated, the laser light LR1passes through the window W10and window W20of the focusing apparatus330and is incident on the mirror M130. The mirror M130is arranged (or otherwise configured) such that the laser light LR1is reflected onto the mirror M140. The mirror M140is arranged (or otherwise configured) such that the laser light LR1received from the mirror M130is reflected onto the mirror M150. The mirror M150is arranged (or otherwise configured) such that the laser light LR1obtained from the mirror M140is reflected onto the target droplets DP. The laser light thus guided by the mirrors M130, M140, and M150is focused into the excitation laser LR2by the focusing apparatus330, and then introduced into the EUV radiation source100(FIG.1). After interacting with the target droplets DP, the excitation laser LR2is dispersed and a return beam of excitation laser LR2is guided back to window W10via the mirrors M150, M140, and M130and window W20. From the window W10, the return beam of excitation laser LR2travels to mirror M310via window W30. In some embodiments, the window W10is a diamond window. The mirror M310guides the return beam to mirror M303via mirrors M320and M330. The return light is guided to the return beam diagnostic304using mirror M303. The return beam diagnostic304receives the return beam and analyzes the return beam, more specifically, the optical wavefront of the return beam. In analyzing the return beam of excitation laser LR2, the return beam diagnostic304generates a plurality of Zernike polynomials. Each Zernike polynomial describes a specific form of surface deviation that can be fit to a specific form of wavefront deviation (aberration). By including a plurality of Zernike polynomials (commonly referred to as terms), wavefront deformation can be described to a desired degree of accuracy. FIG.4illustrates the first 15 Zernike polynomials ordered vertically by radial degree and horizontally by azimuthal frequency. 
Table 1 below lists the different Zernike polynomials and the aberration type obtained from each polynomial.

TABLE 1
Zernike Polynomial   Index   Aberration Type
Z0^0                 1       Piston
Z1^1                 2       Tip (X-Tilt, horizontal tilt)
Z1^-1                3       Tilt (Y-Tilt, vertical tilt)
Z2^0                 4       Defocus
Z2^-2                5       Oblique Astigmatism
Z2^2                 6       Vertical Astigmatism
Z3^-1                7       Vertical Coma
Z3^1                 8       Horizontal Coma
Z3^-3                9       Vertical Trefoil
Z3^3                 10      Horizontal Trefoil
Z4^0                 11      Primary Spherical
Z4^2                 12      Vertical secondary astigmatism
Z4^-2                13      Oblique secondary astigmatism
Z4^4                 14      Vertical quadrafoil
Z4^-4                15      Oblique quadrafoil

The return beam diagnostic304is configured to measure aberration of the received wavefront (radiation) of the return beam of excitation laser LR2and quantify the aberration using the Zernike polynomials. The return beam diagnostic304analyzes the received wavefront to determine the Zernike coefficients of the Zernike polynomial that best fits the specific wavefront deviation. Based on the shift in the Zernike coefficients, the change in the beam profile can be determined.FIG.5shows an exemplary schematic view of the EUV lithography system illustrated inFIG.1. As illustrated, a control signal501corresponding to the change in the beam profile is generated by the final focus metrology (FFM) module350. As discussed above, the control signal501is connected with an actuator505to control one of the mirrors of the focusing apparatus330, such as for example, the mirror M150in the optical path before the laser hits the target droplets DP. Thus, by using the control signal501, the targeting control is optimized to maximize generation of EUV energy. In some embodiments, the feedback mechanism, illustrated inFIG.5, may further send a notification based on the change in the beam profile. In some embodiments, the notification includes a spatial separation between the pre-pulse and the main-pulse. In some embodiments, the notification also includes a time delay between the pre-pulse and the main-pulse. 
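The coefficient-determination step described above amounts to an ordinary least-squares fit: evaluate a basis of Zernike polynomials at the sampled pupil coordinates and solve for the coefficients that best reproduce the measured wavefront. The sketch below is illustrative only — it uses just the first six terms of Table 1, unnormalized polynomials, and assumed function names; an actual diagnostic would use many more terms:

```python
import numpy as np

def zernike_basis(rho, theta):
    """First six Zernike terms of Table 1 (piston, tip, tilt, defocus,
    oblique and vertical astigmatism) on unit-disk coordinates (rho, theta)."""
    return np.stack([
        np.ones_like(rho),            # Z0^0  piston
        rho * np.cos(theta),          # Z1^1  tip
        rho * np.sin(theta),          # Z1^-1 tilt
        2.0 * rho**2 - 1.0,           # Z2^0  defocus
        rho**2 * np.sin(2 * theta),   # Z2^-2 oblique astigmatism
        rho**2 * np.cos(2 * theta),   # Z2^2  vertical astigmatism
    ], axis=-1)

def fit_zernike_coefficients(rho, theta, wavefront):
    """Least-squares Zernike coefficients for a sampled wavefront; shifts in
    these coefficients quantify changes in the beam profile."""
    design = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(design, wavefront, rcond=None)
    return coeffs
```

Tracking these fitted coefficients over time is what allows a shift (e.g., in the defocus term) to be detected and acted upon, as described next.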
In some embodiments, the notification also includes an angle of a steerable mirror coupled to the radiation source. In some embodiments, after generating the notification, the feedback mechanism further sends the notification to a first external device associated with a steerable mirror controller and a second external device associated with a time delay controller. FIG.6is a graph600indicating variation in the Zernike coefficients of the 4thZernike polynomial as determined by measurements using the return beam diagnostic304, according to embodiments of the disclosure. In the embodiment illustrated inFIG.6, the return beam diagnostic304analyzes the wavefront of the return beam of excitation laser LR2and determines that the excitation laser LR2incident on the target droplets DP has an aberration of the type defocus, which corresponds to the 4thZernike polynomial. The return beam diagnostic304determines the variation (line602) in the Zernike coefficients611of the 4thZernike polynomial. The variation in the Zernike coefficients611within a desired range is considered acceptable and no corrective action is performed. However, for a variation beyond the desired range, as indicated by the dip605in the line602, one or more corrective actions are undertaken. A corrective action includes actuating the actuator505to change a position of the mirror M150based on the control signal501. Alternatively or additionally, one or more of the OMX, OMY and OMZ distances can be adjusted to minimize the change in the beam profile and thereby improve the interaction between the excitation laser LR2and the target droplets DP. Also illustrated are the images606and608obtained by the return beam diagnostic304before and after the dip605, respectively. As illustrated, the defocus aberration determined in image606is reduced in image608due to the corrective actions. 
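The decision logic ofFIG.6— accept variation within the desired range, act on excursions such as the dip605— reduces to a simple band check on the monitored coefficient. A minimal sketch, with assumed names and a symmetric tolerance band:

```python
def flag_excursions(coeff_history, nominal, tolerance):
    """True for each sample of a monitored Zernike coefficient that falls
    outside the acceptable band [nominal - tolerance, nominal + tolerance];
    a True entry would trigger a corrective action (e.g., actuating the
    steerable mirror or adjusting OMX/OMY/OMZ)."""
    return [abs(c - nominal) > tolerance for c in coeff_history]
```

In practice the nominal value and tolerance would be set during radiation-source setup and the history sampled continuously during exposure.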
The example inFIG.6illustrates how the 4thZernike polynomial can be used to detect the defocus aberrations in the optical wavefront, according to embodiments. However, embodiments are not limited thereto. Other Zernike polynomials can be used to detect corresponding aberrations, and one or more corrective actions can be taken to mitigate the detected aberrations. Thus, by improving the interaction between the target droplets and the excitation laser LR2, the conversion efficiency can be maximized, and variations (fluctuations) in the EUV energy can be minimized. FIG.7illustrates signal flow in the EUV lithography system illustrated inFIG.5, according to embodiments of the disclosure. The desired position of the zone of excitation ZE and parameters such as, for example, laser power, time delay between the main pulse and the pre-pulse, focal point of the pre-pulse and/or main pulse, may be determined when an EUV radiation source100is set up, and thereby define the set point702of the EUV lithography system. A control signal701corresponding to one or more of the parameters is provided to one or more components of the EUV lithography system to control the position of the zone of excitation ZE. For instance, one or more of the position of the droplet generator115and trajectory of the excitation laser LR2are adjusted to maximize the interaction between the target droplets DP and the excitation laser LR2to maximize the generation of EUV energy. In some embodiments, the control signal701is the control signal501provided to the actuator505to control one of the mirrors of the focusing apparatus330, such as for example, the mirror M150in the optical path before the excitation laser LR2hits the target droplets DP. 
However, these parameters can change over time due to various factors such as, for example, separation between the main pulse MP and the pre-pulse PP, shape of the excitation laser, the profile of the laser beam, mechanical and/or electrical drift in the radiation source, instability of the droplet generator, and changes in the chamber environment. The interaction between the excitation laser LR2and the target droplets DP is analyzed at704. The target droplets DP reflect and/or scatter the light (excitation laser LR2, in this case) incident upon them. The reflected and/or scattered light is detected, for example, at a droplet detection module420(FIG.1). In some embodiments, the droplet detection module420includes a photodiode designed to detect light having a wavelength of the light from a droplet illumination module410(FIG.1). In various embodiments, the droplet illumination module410is a continuous wave laser or a pulsed laser emitting light of a desired wavelength. It is determined whether an intensity of the detected light (i.e., light reflected and/or scattered by the target droplet) is within an acceptable range. In some embodiments, the determination is based on a value of current and/or voltage produced by the photodiode of the droplet detection module420when it receives the light reflected and/or scattered by the target droplet DP. In some embodiments, the droplet detection module420includes a logic circuit programmed to generate a prescribed signal706when the detected intensity is not within an acceptable range. For example, the prescribed signal706is generated when the detected intensity is less than a certain threshold value. The prescribed signal706is indicative of the relative position of the excitation laser LR2and the target droplets DP. 
If the intensity of the detected light is not within the acceptable range, a parameter of the droplet illumination module410is adjusted (e.g., automatically) to increase or decrease the intensity of light irradiating the target droplet so as to ultimately bring the intensity of the detected light within the acceptable range. In various embodiments, the parameter of the droplet illumination module410includes, for example, an input voltage and/or current to the light source (e.g., laser) in the droplet illumination module410, a width of a slit controlling the amount of light exiting the droplet illumination module410, an aperture of the droplet illumination module410, and a value of angle and/or tilt of the droplet illumination module410. In some embodiments, the parameter is adjusted using a controller that is programmed to control various parameters of the droplet illumination module410. For example, in an embodiment, the controller is coupled to a slit controlling the amount of light exiting the droplet illumination module410and/or a mechanism that controls the tilt/angle of the droplet illumination module410. In such embodiments, the controller is coupled to the droplet detection module420and adjusts the width of the slit and/or the tilt of the droplet illumination module410in response to the prescribed signal706generated by the droplet detection module420when the intensity of the detected light is not within the acceptable range. In other embodiments, the controller is coupled to the actuator505and provides control signal701to the actuator505to control one of the mirrors of the focusing apparatus330, such as for example, the mirror M150in the optical path before the laser hits the target droplets DP. 
In some embodiments, the controller is a logic circuit programmed to receive the prescribed signal706from the droplet detection module420, and depending on the prescribed signal706transmit control signals to one or more components (e.g., the slit and/or tilt control mechanism described elsewhere herein) of the droplet illumination module410to automatically adjust one or more parameters of the droplet illumination module410and/or to adjust one of the mirrors of the focusing apparatus330. Referring briefly toFIG.8, illustrated is the droplet illumination module410including a radiation source415, a tilt control mechanism413and a slit control mechanism417. The tilt control mechanism413(also referred to herein as “auto tilt”) controls the tilt of the radiation source415. In various embodiments, the auto tilt413is a stepper motor coupled to the radiation source415(e.g., laser) of the droplet illumination module410and moves the radiation source415to change the angle of incidence at which light (or radiation) L is incident on the target droplet DP (and in effect changing the amount of light R reflected and/or scattered by the target droplet DP into the droplet detection module420). In some embodiments, the auto tilt413includes a piezoelectric actuator. The slit control mechanism417(also referred to herein as “auto slit”) controls the amount of light exiting the radiation source415. In an embodiment a slit or an aperture414is disposed between the radiation source415and the zone of excitation ZE at which the target droplet DP is irradiated. When, for example, the controller450determines that the intensity of light detected at droplet detection module420is lower than the acceptable range, the controller450moves the slit control mechanism417such that a wider slit is provided in the path of light exiting the radiation source415, allowing more light to irradiate the target droplet DP and increasing the detected intensity. 
On the other hand, if it is determined that the intensity of light detected at the droplet detection module420is higher than the acceptable range, the controller450moves the slit control mechanism417such that a narrower slit is provided in the path of light exiting the radiation source415, thereby reducing the detected intensity. In such embodiments, the parameter of the droplet illumination module410adjusted by the controller450is the width of the aperture414in the path of light L irradiating the target droplet DP. Returning toFIG.7, generation of the prescribed signal706thus changes the set point702of the EUV lithography system. The control signal701is correspondingly changed based on the change in the set point702. The change in the control signal701thus actuates the slit control mechanism, tilt control mechanism, and/or the actuator505, causing corresponding changes in the amount of light exiting the radiation source415and the angle of incidence at which light is incident on the target droplet DP. In addition to the above techniques to maximize generation of EUV energy, embodiments of the disclosure are also directed to utilizing the Zernike shift in the wavefront of the light reflected after interaction between the excitation laser LR2and the target droplets DP. More specifically, embodiments measure aberration of the received wavefront (radiation) and quantify the aberration using the Zernike polynomials in order to obtain the Zernike shift in the reflected wavefront. Accordingly, at704, the return beam diagnostic304determines the Zernike shift in the reflected wavefront and a corresponding control signal708is generated. In some embodiments, the control signal708is the control signal501(illustrated inFIG.5) that corresponds to the change in the beam profile and is generated by the final focus metrology (FFM) module350. 
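The slit behaviour described above amounts to a simple bang-bang rule on the detected intensity: widen below the acceptable range, narrow above it, hold inside it. A sketch with assumed names; the fixed step size and zero lower bound on the width are our simplifications:

```python
def adjust_slit_width(detected_intensity, acceptable_low, acceptable_high,
                      slit_width, step):
    """Next slit width given the intensity seen at the droplet detection
    module: widen when too little light reaches the detector, narrow when
    too much does, and hold when the intensity is in the acceptable range."""
    if detected_intensity < acceptable_low:
        return slit_width + step
    if detected_intensity > acceptable_high:
        return max(0.0, slit_width - step)
    return slit_width
```

The tilt control mechanism admits an analogous rule on the angle of incidence; both act on the same prescribed-signal feedback path.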
As discussed above, the control signal501is connected with the actuator505to control one of the mirrors of the focusing apparatus330, such as for example, the mirror M150in the optical path before the laser hits the target droplets DP. Thus, the targeting control is optimized to maximize generation of EUV energy. FIGS.9A and9Bshow an EUV data analyzing apparatus according to an embodiment of the present disclosure.FIG.9Ais a schematic view of a computer system that controls an operation of the final focus metrology (FFM) module350and the return beam diagnostic304for detecting one or more aberrations in the return image and performing one or more corrective actions described above. The foregoing embodiments may be realized using computer hardware and computer programs executed thereon. InFIG.9A, a computer system900is provided with a computer901including an optical disk read only memory (e.g., CD-ROM or DVD-ROM) drive905and a magnetic disk drive906, a keyboard902, a mouse903, and a monitor904. FIG.9Bis a diagram showing an internal configuration of the computer system900. InFIG.9B, the computer901is provided with, in addition to the optical disk drive905and the magnetic disk drive906, one or more processors911, such as a micro processing unit (MPU), a ROM912in which a program such as a boot up program is stored, a random access memory (RAM)913that is connected to the MPU911and in which a command of an application program is temporarily stored and a temporary storage area is provided, a hard disk914in which an application program, a system program, and data are stored, and a bus915that connects the MPU911, the ROM912, and the like. Note that the computer901may include a network card (not shown) for providing a connection to a LAN. 
The program for causing the computer system900to execute the functions of the EUV data analyzing apparatus in the foregoing embodiments may be stored in an optical disk921or a magnetic disk922, which are inserted into the optical disk drive905or the magnetic disk drive906, and be transmitted to the hard disk914. Alternatively, the program may be transmitted via a network (not shown) to the computer901and stored in the hard disk914. At the time of execution, the program is loaded into the RAM913. The program may be loaded from the optical disk921or the magnetic disk922, or directly from a network. In the programs, the functions realized by the programs do not include functions that can be realized only by hardware in some embodiments. For example, functions that can be realized only by hardware, such as a network interface, in an acquiring unit that acquires information or an output unit that outputs information are not included in the functions realized by the above-described programs. Furthermore, a computer that executes the programs may be a single computer or may be multiple computers. Embodiments of the disclosure provide numerous advantages over the existing systems and methods. Zernike terms which correlate with the shift in the X, Y, and/or Z direction of the excitation beam and/or the target droplet direction, and the beam profile change are used to quantify the change in the shape of the beam and the aberration. By using this as a feedback, an additional feedback loop is generated to compensate for the shifting error from the return image instead of only considering relative targeting position. The angle of incidence of the excitation laser is detected and minimized so that a more stable EUV energy can be obtained. It will be understood that not all advantages have been necessarily discussed herein, no particular advantage is required for all embodiments or examples, and other embodiments or examples may offer different advantages.
An embodiment of the present disclosure is a method1000of operating an extreme ultraviolet (EUV) lithography system according to the flowchart illustrated inFIG.10. It is understood that additional operations can be provided before, during, and after processes discussed inFIG.10, and some of the operations described below can be replaced or eliminated, for additional embodiments of the method. The order of the operations/processes may be interchangeable and at least some of the operations/processes may be performed in a different sequence. At least two or more operations/processes may be performed overlapping in time, or almost simultaneously. The method includes an operation S1010of irradiating a target droplet with laser radiation. In operation S1020, laser radiation reflected by the target droplet is detected. In operation S1030, aberration of the detected laser radiation is determined. In operation S1040, a Zernike polynomial corresponding to the aberration is determined. In operation S1050, a corrective action is performed to reduce a shift in at least one of Zernike coefficients of the Zernike polynomial. Another embodiment of the present disclosure is a method1100of operating an extreme ultraviolet (EUV) lithography system according to the flowchart illustrated inFIG.11. It is understood that additional operations can be provided before, during, and after processes discussed inFIG.11, and some of the operations described below can be replaced or eliminated, for additional embodiments of the method. The order of the operations/processes may be interchangeable and at least some of the operations/processes may be performed in a different sequence. At least two or more operations/processes may be performed overlapping in time, or almost simultaneously. The method includes an operation S1110of detecting excitation radiation reflected by a target droplet generated by a droplet generator of the EUV lithography system. 
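The S1010-S1050 sequence can be sketched end to end with stub physics. All three functions below are illustrative stand-ins for the detector, the Zernike fit, and the corrective actuator, not the actual system interfaces; the loop shows the corrective action reducing the coefficient shift.

```python
def detect_reflection(droplet_offset):
    """S1010/S1020 stub: the reflected-wavefront measurement is modeled
    as directly reporting the current mis-targeting."""
    return droplet_offset

def zernike_shift(measurement, reference=0.0):
    """S1030/S1040 stub: shift of the fitted Zernike coefficient
    relative to the reference (on-target) wavefront."""
    return measurement - reference

def corrective_action(offset, shift, gain=1.0):
    """S1050 stub: re-aim the laser / move the droplet generator by an
    amount proportional to the measured shift."""
    return offset - gain * shift

offset = 0.25  # initial mis-targeting (arbitrary units)
for _ in range(3):
    shift = zernike_shift(detect_reflection(offset))
    offset = corrective_action(offset, shift)
```

With a unit-gain, perfectly linear stub the shift is removed in a single pass; a real loop would converge over many shots.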
The EUV lithography system also includes an EUV radiation source for generating EUV radiation that includes an excitation radiation source. The excitation radiation from the excitation radiation source interacts with the target droplet. In operation S1120, aberration of the detected excitation radiation is determined. In operation S1130, a plurality of Zernike polynomials are generated. In operation S1140, one or more Zernike polynomials from the plurality of Zernike polynomials that correspond to the aberration are determined. In operation S1150, a corrective action is performed to reduce a shift in at least one of Zernike coefficients of the one or more Zernike polynomials. Another embodiment of the present disclosure is a method1200of operating an extreme ultraviolet (EUV) lithography system according to the flowchart illustrated inFIG.12. It is understood that additional operations can be provided before, during, and after processes discussed inFIG.12, and some of the operations described below can be replaced or eliminated, for additional embodiments of the method. The order of the operations/processes may be interchangeable and at least some of the operations/processes may be performed in a different sequence. At least two or more operations/processes may be performed overlapping in time, or almost simultaneously. The method includes an operation S1210of detecting excitation radiation reflected by a target droplet generated by a droplet generator of the EUV lithography system. In operation S1220, aberration of the detected excitation radiation is determined. In operation S1230, a plurality of Zernike polynomials are generated. In operation S1240, one or more Zernike polynomials from the plurality of Zernike polynomials that correspond to the aberration are determined. In operation S1250, a shift in at least one of Zernike coefficients of the one or more Zernike polynomials is determined.
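Generating the plurality of Zernike polynomials rests on the standard radial part R_n^m(rho), from which each polynomial is built by multiplying with cos(m·theta) or sin(m·theta). A minimal reference evaluation (standard optics convention, shown only as an illustration):

```python
from math import factorial

def radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho), standard factorial sum."""
    m = abs(m)
    if (n - m) % 2:
        return 0.0  # R_n^m vanishes when n - |m| is odd
    total = 0.0
    for k in range((n - m) // 2 + 1):
        c = (-1) ** k * factorial(n - k) / (
            factorial(k)
            * factorial((n + m) // 2 - k)
            * factorial((n - m) // 2 - k)
        )
        total += c * rho ** (n - 2 * k)
    return total

defocus_edge = radial(2, 0, 1.0)   # R_2^0(rho) = 2*rho^2 - 1, equals 1 at the pupil edge
spherical_mid = radial(4, 0, 0.5)  # R_4^0(rho) = 6*rho^4 - 6*rho^2 + 1
```

All radial Zernike polynomials evaluate to 1 at rho = 1, which gives a quick sanity check on the implementation.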
In operation S1260, a change in a beam profile of the excitation radiation based on the shift in the Zernike coefficients is detected. According to one aspect of the present disclosure, a method of controlling an extreme ultraviolet (EUV) lithography system includes irradiating a target droplet with laser radiation and detecting laser radiation reflected by the target droplet. The method also includes determining aberration of the detected laser radiation, determining a Zernike polynomial corresponding to the aberration, and performing a corrective action to reduce a shift in at least one of Zernike coefficients of the Zernike polynomial. In one or more other embodiments, the corrective action includes generating a control signal to actuate one or more components of the EUV lithography system to adjust an interaction between the laser radiation and the target droplet. In one or more other embodiments, the interaction between the laser radiation and the target droplet is adjusted by changing a position of a droplet generator of the EUV lithography system, changing a trajectory of the laser radiation, or both. In one or more other embodiments, the one or more components include an actuator and adjusting the interaction between the laser radiation and the target droplet includes controlling a focal point of laser radiation using the actuator. In one or more other embodiments, the actuator is connected to a steerable mirror, and the corrective action includes adjusting the steerable mirror using the actuator to adjust an interaction between the laser radiation and the target droplet. In one or more other embodiments, the corrective action includes adjusting an angle of incidence of the laser radiation. In one or more other embodiments, the method further includes generating a plurality of Zernike polynomials, and selecting the Zernike polynomial from the plurality of Zernike polynomials. The selected Zernike polynomial corresponds to the aberration.
In one or more other embodiments, the method further includes detecting a change in a beam profile of the EUV radiation based on the shift in the Zernike coefficients. In one or more other embodiments, the method further includes generating a control signal corresponding to the change in the beam profile, controlling an actuator of the EUV lithography system using the control signal, and adjusting a steerable mirror of the EUV lithography system using the actuator to change an optical path of the laser radiation. In one or more other embodiments, the shift in the Zernike coefficients is reduced such that EUV energy generated by an interaction of the laser radiation and target droplet is increased. In one or more other embodiments, the laser radiation includes a CO2laser. According to yet another aspect of the present disclosure, an apparatus for extreme ultraviolet (EUV) lithography includes a droplet generator configured to generate target droplets, and an EUV radiation source for generating EUV radiation including an excitation radiation source. The excitation radiation from the excitation radiation source interacts with the target droplets. The apparatus also includes a final focus module that is configured to detect excitation radiation reflected by the target droplet, determine aberration of the detected excitation radiation, generate a plurality of Zernike polynomials, determine one or more Zernike polynomials from the plurality of Zernike polynomials that correspond to the aberration, and perform a corrective action to reduce a shift in at least one of Zernike coefficients of the one or more Zernike polynomials. In one or more other embodiments, the apparatus further includes a steerable mirror. The steerable mirror is a last mirror in an optical path of the excitation radiation before the excitation radiation interacts with the target droplet. The apparatus also includes an actuator configured to control the steerable mirror. 
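One way a shift in the defocus coefficient manifests as a beam-profile change can be sketched with Gaussian beam propagation. The waist size, the 10.6 µm CO2 wavelength, and the mapping from the Zernike defocus shift to a focal displacement dz are illustrative assumptions, not the tool's calibration:

```python
import math

def spot_radius(dz, w0=50e-6, wavelength=10.6e-6):
    """Gaussian beam radius at a distance dz from the waist.
    w0: waist radius (m); wavelength: CO2 laser, 10.6 um."""
    zr = math.pi * w0 * w0 / wavelength  # Rayleigh range
    return w0 * math.sqrt(1.0 + (dz / zr) ** 2)

r_focus = spot_radius(0.0)    # spot radius at best focus
r_shift = spot_radius(1e-3)   # radius after a 1 mm focal shift
```

A growing defocus shift thus enlarges the spot on the droplet, which is the beam-profile change the feedback loop is meant to detect and remove.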
The final focus module is further configured to adjust the steerable mirror using the actuator to adjust an interaction between the excitation radiation and the target droplet. In one or more other embodiments, the steerable mirror is adjustable in 3 axes. In one or more other embodiments, the final focus module is further configured to detect a change in a beam profile of the excitation radiation based on the shift in the Zernike coefficients. In one or more other embodiments, the final focus module is further configured to reduce the shift in the Zernike coefficients such that EUV energy generated by an interaction of the excitation radiation and target droplet is increased. According to another aspect of the present disclosure, a non-transitory, computer-readable medium includes computer readable instructions stored in a memory which, when executed by a processor of a computer, direct the computer to control a final focus module of an extreme ultraviolet (EUV) lithography apparatus to perform a method. The method includes detecting excitation radiation reflected by a target droplet, determining aberration of the detected excitation radiation, generating a plurality of Zernike polynomials, determining one or more Zernike polynomials from the plurality of Zernike polynomials that correspond to the aberration, determining a shift in at least one of Zernike coefficients of the one or more Zernike polynomials, and detecting a change in a beam profile of the excitation radiation based on the shift in the Zernike coefficients. In one or more other embodiments, the method further includes generating a control signal corresponding to the change in the beam profile, controlling an actuator of the EUV lithography system using the control signal, and adjusting a steerable mirror of the EUV lithography system using the actuator to change an optical path of the excitation radiation.
In one or more other embodiments, the steerable mirror is a last mirror in the optical path before the excitation radiation hits the target droplet. In one or more other embodiments, the steerable mirror is adjustable in 3 axes. The foregoing outlines features of several embodiments or examples so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments or examples introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
DESCRIPTION OF THE EMBODIMENTS This disclosure will now be described in more detail with reference to the accompanying drawings that show various embodiments of this disclosure. This disclosure provides an exposure device and an exposure method that use a spatial light modulator to modulate the phase and amplitude of a laser, and make use of lasers with different phases and amplitudes that interfere with and offset each other to form a more accurate exposure pattern. With reference toFIGS.2A,2B and7Afor the schematic views of an exposure device100, the structure of the exposure device100and an exposure method of this disclosure respectively, the exposure device100includes a laser source110, a beam expander112, a first spatial light modulator120, a second spatial light modulator130, a sensor140, a controller150and a projection lens160. Firstly, the Step S11is carried out to emit a laser111to a first spatial light modulator120. The laser source110as described in the Step S11is provided for emitting the laser111. The first spatial light modulator120is installed on the path of the laser111and irradiated by the laser111. The beam expander112is installed between the laser source110and the first spatial light modulator120and disposed on the path of the laser111, so that the laser111passes through the beam expander112and irradiates the first spatial light modulator120. When the laser111passes through the beam expander112, the irradiation radius of the laser111will be expanded to increase the area of the first spatial light modulator120which is irradiated by the laser111. In some embodiment, when the laser111passes through the beam expander112, the laser111will cover the entire first spatial light modulator120.
In this embodiment, the first spatial light modulator120is a liquid crystal on silicon (LCOS) device, so that the first spatial light modulator120includes a plurality of first pixels201, each being used for reflecting the laser111after the phase of the laser111irradiated on the first spatial light modulator120is modulated, and the phase of the reflected laser111acan be changed. Then, the Step S12is carried out, and the first spatial light modulator120changes the phase of the laser modulated by each first pixel201. With reference toFIG.3Afor the schematic view of a liquid crystal on silicon (LCOS) device used as the first spatial light modulator in accordance with some embodiment of this disclosure, the liquid crystal on silicon (LCOS) device used as the first spatial light modulator120includes a substrate layer210, a complementary metal oxide semiconductor (CMOS) layer220, a reflective layer230, two alignment layers240,240′, a liquid crystal layer250, a transparent electrode layer260and a cover glass270. The CMOS layer220includes a CMOS circuit layer221and a plurality of pixel electrodes222, and the reflective layer230is disposed on the CMOS layer220, and the alignment layer240is disposed above the reflective layer230, and the liquid crystal layer250is disposed between the two alignment layers240,240′. The transparent electrode layer260is disposed above the alignment layer240′, and the transparent electrode layer260has a plurality of transparent electrodes (not shown in the figure), each being configured to be corresponsive to one of the pixel electrodes222. In other words, the transparent electrode and the pixel electrode222are configured in pairs. The cover glass270covers the top of the transparent electrode layer260for protecting each component in the liquid crystal on silicon (LCOS) device, in addition to receiving external light. Further, the top of the cover glass270has an anti-reflective layer280such as an anti-reflective coating.
With reference toFIG.3Bfor the perspective view of a liquid crystal on silicon (LCOS) device used as a first spatial light modulator120of this disclosure, the viewing plane of the liquid crystal on silicon (LCOS) device is divided into a plurality of first pixels201, each being configured to be corresponsive to one of the pixel electrodes222and the transparent electrode. In this embodiment, the liquid crystal on silicon (LCOS) device receives an external control signal through the CMOS circuit layer221to control the electrical property of the transparent electrode on the pixel electrode222and the transparent electrode layer260, so that the liquid crystal251in the liquid crystal layer250corresponding to each pixel electrode can rotate, and after light enters into the liquid crystal on silicon (LCOS) device, the light is affected by the rotated liquid crystal251and the phase of the light will be changed. In addition, the direction of the liquid crystal251corresponding to each first pixel201is controlled by the pixel electrode222and the transparent electrode, so that the phase of the light emitted by each first pixel201can be controlled. InFIGS.2A and7A, the Step S13is carried out to irradiate a laser111to the second spatial light modulator130after the first spatial light modulator120modulates the phase of the laser111a. The second spatial light modulator130is installed on the path of the laser111areflected by the first spatial light modulator120. In addition, the second spatial light modulator130includes a plurality of second pixels, each being used for reflecting the laser111aafter the amplitude of the laser111ais modulated. In this embodiment, the second spatial light modulator130is a digital micromirror device (DMD) that can reflect the laser111a, and the amplitude of the reflected laser111bwill be changed. Therefore, when the Step S14is carried out, the second spatial light modulator130changes the amplitude of the laser111bmodulated by the second pixel.
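The per-pixel phase control described for the LCOS device can be sketched as a phasor model. The linear grey-level-to-phase calibration below is an illustrative assumption, not the device's actual liquid-crystal response curve:

```python
import cmath

def reflected_field(grey_level, levels=256):
    """Unit-amplitude phasor of the field reflected by one LCOS pixel,
    assuming a linear grey-level-to-phase calibration over 2*pi."""
    phase = 2.0 * cmath.pi * grey_level / levels
    return cmath.exp(1j * phase)

f0 = reflected_field(0)      # pixel driven to a 0-degree phase delay
f180 = reflected_field(128)  # pixel driven to a 180-degree phase delay
```

Two pixels programmed 180° apart produce fields that are exact opposites, which is the property exploited by the bright-area pixel pairs discussed below.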
With reference toFIG.4for the schematic view of a second spatial light modulator in accordance with some embodiment of this disclosure, the second spatial light modulator130further includes a plurality of second pixels, which are micromirrors320in this embodiment, and these micromirrors320are installed on a substrate310. In this embodiment, each micromirror320corresponds to one of the first pixels201on the first spatial light modulator120(as shown inFIG.3B). Each micromirror320can control and change an angle, so that the micromirror320has the ON or OFF effect. For example, when the angle relative to the horizontal plane is positive 12°, the micromirror320is turned on, and when it is negative 12°, the micromirror320is turned off, and the frequency of turning on/off the micromirror320can also be controlled. By adjusting the different frequency of switching on and off the micromirrors320, the laser111acan be reflected, so that the laser111areflected by the micromirror320with different switching frequencies can accumulate different energies, so as to achieve the effect of adjusting the amplitude (i.e., the intensity) of the laser111airradiated on the photoresist layer20. After being reflected by the second spatial light modulator130, the laser111bis projected onto the photoresist layer20through the projection lens160to form an exposure pattern400as shown inFIG.5. InFIGS.2A and7A, the Step S15is carried out, and the laser111bmodulated by the second spatial light modulator130is irradiated on the photoresist layer20to form an exposure pattern400. After the laser passes through the first spatial light modulator120, the phase of the laser emitted by each first pixel201can be adjusted. In addition, after the laser passes through the second spatial light modulator130, the adjusted laser is irradiated on the photoresist layer20with an intensity distribution, so as to have bright and dark distributions on the surface of the photoresist layer20and form an exposure pattern400as shown inFIG.5.
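The on/off switching-frequency control of the micromirrors described above amounts to pulse-width modulation of the accumulated exposure: the dose delivered by a pixel is proportional to the fraction of frames its mirror spends in the "on" (+12°) state. A minimal sketch (the frame patterns are illustrative):

```python
def exposure_fraction(pattern):
    """pattern: sequence of booleans, one mirror state per frame.
    Returns the fraction of the full dose accumulated at the pixel."""
    return sum(pattern) / len(pattern)

full = exposure_fraction([True] * 8)                    # mirror always on
quarter = exposure_fraction([True, False, False, False] * 2)  # on 1 frame in 4
```

Varying the on-fraction per micromirror is how the DMD grades the intensity of the laser111b across the exposure pattern.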
With reference toFIG.5for the schematic view of the exposure pattern400, the exposure pattern400can be regarded as being formed by a plurality of third pixels401arranged in an array, and the third pixel401includes a third dark-area pixel410and a third bright-area pixel420, and each third pixel401corresponds to one of the first pixels201on the first spatial light modulator120and one of the micromirrors320on the second spatial light modulator130. The first pixel201corresponding to the third pixel401includes a plurality of first dark-area pixels and a plurality of first bright-area pixels, and the third dark-area pixel410corresponds to the first dark-area pixel, and the third bright-area pixel420corresponds to the first bright-area pixel. The phases of the lasers modulated by at least two first bright-area pixels adjacent to a first dark-area pixel differ by 180° from each other. With reference toFIG.6Afor the schematic view of a first pixel, only three first pixels201are shown inFIG.6Afor simplicity, and the first pixel includes one first dark-area pixel2011and two adjacent first bright-area pixels2012a,2012b. In this embodiment, the first bright-area pixels2012a,2012bare disposed on the left and right sides of the first dark-area pixel2011respectively. After the first bright-area pixels2012a,2012bmodulate the lasers, the phases of the modulated lasers differ by 180° from each other. In other words, the phases of the lasers modulated by the first bright-area pixels2012a,2012bdiffer by 180° from each other. With reference toFIG.6Bfor the schematic view of the exposure pattern formed by the laser, the laser reflected by the first bright-area pixel2012ais projected onto the third bright-area pixel420, and the laser reflected by the first bright-area pixel2012bis projected onto the third bright-area pixel420′.
With reference toFIG.6Cfor the schematic view showing the light intensity of the third pixel at a time point, the curve601shows the light intensity of the laser reflected by the first bright-area pixel2012aand projected onto the third bright-area pixel420. The curve602shows the light intensity of the laser reflected by the first bright-area pixel2012band projected onto the third bright-area pixel420′. Due to the light diffraction, a part of the laser is projected onto the third dark-area pixel410. The lasers reflected by the first bright-area pixels2012a,2012bhave a phase difference of 180°, so that the phases of the lasers projected onto the third bright-area pixels420,420′ are in opposite states, and the lasers diffracted onto the third dark-area pixel410will interfere with and offset each other. In this way, the light projected on the third dark-area pixel410can be effectively reduced, and further, when the exposure pattern is formed, the distinction between the third bright-area pixel420and the third dark-area pixel410can be clearer. The micromirror320corresponding to the third pixel401of the third dark-area pixel410has a deflection angle, so that the reflected laser will not irradiate on the third dark-area pixel410. On the contrary, the deflection angle of the micromirror320corresponding to the third pixel401of the third bright-area pixel420directs the reflected laser onto the third bright-area pixel420. With reference toFIGS.2A and7Aagain, a sensor140of one of the embodiments is installed between the second spatial light modulator130and the photoresist layer20and provided for detecting the phase and light intensity of the laser111bemitted from the second spatial light modulator130in the Step S16.
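The cancellation at the third dark-area pixel410can be checked numerically: two diffracted contributions of equal amplitude arriving with a 180° phase difference sum to (nearly) zero intensity, whereas in-phase contributions would quadruple the single-beam intensity. The amplitude value is illustrative:

```python
import cmath

def intensity(fields):
    """Intensity of the coherent sum of complex field contributions."""
    return abs(sum(fields)) ** 2

a = 0.4  # diffracted amplitude reaching the dark pixel from each side
dark_opposed = intensity([a * cmath.exp(0j), a * cmath.exp(1j * cmath.pi)])
dark_in_phase = intensity([a, a])  # what in-phase diffraction would give
```

The opposed-phase case leaves essentially no light in the dark pixel, which is why the 180° pairing of the bright-area pixels sharpens the bright/dark boundary.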
In addition, a projection lens160is installed on the path of the laser111breflected from the second spatial light modulator130, and the laser111bpasses through the projection lens160and then projects onto the photoresist layer20to form the exposure pattern400on the photoresist layer20. In some embodiment, the sensor140includes a plurality of beam splitters143,143′, a light intensity sensor142, and a phase sensor141. The beam splitter143′ is installed between the second spatial light modulator130and the projection lens160and provided for dividing the incident laser111binto a first laser111b′ and a second laser111b″. Further, the ratio of the light intensity of the first laser111b′ to the light intensity of the second laser111b″ is 99:1, i.e. the beam splitter143′ will project 1% of the divided laser into a light intensity sensor142and a phase sensor141. In some embodiment as shown inFIG.2A, the second laser111b″ is split by a beam splitter143, and projected to light intensity sensor142and the phase sensor141. The sensor140receives the second laser111b″ through the light intensity sensor142and the phase sensor141to generate a sensing signal. With reference toFIGS.2B to7B,FIG.7Bshows a flow chart of detecting the phase and light intensity in the exposure method of this disclosure, and the controller150is electrically connected to the laser source110, the first spatial light modulator120, the second spatial light modulator130and the sensor140. The controller150is a programmable logic controller or a computer device with a control program. 
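The 99:1 pick-off by the beam splitter143′ is simple power bookkeeping: 99% of the modulated laser continues toward the projection lens and 1% is diverted to the light-intensity and phase sensors. A sketch (power units arbitrary):

```python
def split(power, tap_fraction=0.01):
    """Split incident power at a beam splitter: 'tap' goes to the
    sensors, 'through' continues toward the projection lens."""
    tap = power * tap_fraction
    through = power - tap
    return through, tap

through, tap = split(100.0)  # 100 units in -> 99 to the wafer path, 1 to sensors
```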
The controller150is provided for receiving a sensing signal, and controlling the light intensity and phase of the laser outputted by the laser source110according to the sensing signal, and the control method includes the following steps: S161: The beam splitter143′ is used to divide the laser111binto a first laser111b′ and a second laser111b″, wherein the first laser111b′ is the laser irradiated on the photoresist layer20as described in the Step S15. S162: The second laser111b″ is projected on a light intensity sensor142and a phase sensor141, and another beam splitter143is used to drive the second laser111b″ to enter into the light intensity sensor142and the phase sensor141separately. S163: The light intensity sensor142detects the light intensity of the second laser111b″. S164: The phase sensor141is used to detect the phase of the second laser111b″. S165: The first light modulator120and the second light modulator130are controlled according to the phase and light intensity of the second laser111b″, and the formed exposure pattern400is corrected. In some embodiment, the steps S15and S16(which are the steps S161˜S165) are performed synchronously. While the exposure pattern400is being formed, the exposure pattern400formed by the laser can be dynamically adjusted through the steps S161˜S165, and the exposure pattern400can be continuously corrected during the exposure process, further ensuring that the exposure pattern400matches the expected pattern. More specifically, the controller150compares the laser parameter measured by the sensor140with a predetermined laser parameter, and if the measured laser parameter does not match the predetermined laser parameter, the controller150will control the laser source110to adjust the light intensity and phase of the incident laser, until the laser parameter measured by the sensor140matches the predetermined laser parameter. 
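The compare-and-adjust behavior of the controller150in steps S161-S165 can be sketched as an iterative loop: the measured laser parameter is compared with the predetermined value and the source output is nudged until the two match within tolerance. The linear plant model, gain, and tolerance are illustrative assumptions:

```python
def correct(measured, target, gain=0.6, tol=1e-6, max_iter=100):
    """Iteratively adjust the laser parameter until it matches the
    predetermined target within tol (simple proportional control)."""
    value, iterations = measured, 0
    while abs(value - target) > tol and iterations < max_iter:
        value += gain * (target - value)  # adjust the source output
        iterations += 1
    return value, iterations

value, iterations = correct(measured=0.40, target=1.00)
```

Each pass shrinks the mismatch by the loop gain, mirroring how the exposure pattern is continuously corrected while it is being written.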
In some embodiment, the controller150further includes an input interface151provided for an operator to enter a predetermined laser parameter. In addition, the input interface151is also provided for the operator to input a desired exposure pattern, so that the controller150can further operate the first spatial light modulator120and the second spatial light modulator130to form the desired exposure pattern on the photoresist layer20according to the data of inputted exposure pattern. The exposure device and method of this disclosure have the advantages of forming a more precise or sharp exposure pattern, improving the yield of development, and overcoming the drawbacks of the related art by means of modulating the phase and amplitude of the laser and using the mutual interference and offset of the laser between pixels. Although the invention has been disclosed and illustrated with reference to particular embodiments, the principles involved are susceptible for use in numerous other embodiments that will be apparent to persons skilled in the art. This invention is, therefore, to be limited only as indicated by the scope of the appended claims.
DETAILED DESCRIPTION FIG.1schematically depicts a lithographic apparatus according to one embodiment of the invention. The apparatus comprises:
a. an illumination system (illuminator) IL configured to condition a projection beam B (e.g. UV radiation or DUV radiation);
b. a support structure (e.g. a mask table) MT constructed to support a patterning device (e.g. a mask) MA and connected to a first positioner PM configured to accurately position the patterning device MA in accordance with certain parameters;
c. a support table, e.g. a sensor table to support one or more sensors or a support table WT constructed to hold a substrate (e.g. a resist-coated substrate) W, connected to a second positioner PW configured to accurately position the surface of the table, for example of a substrate W, in accordance with certain parameters; and
d. a projection system (e.g. a refractive projection lens system) PS configured to project a pattern imparted to the projection beam B by patterning device MA onto a target portion C (e.g. comprising one or more dies) of the substrate W.
The illumination system IL may include various types of optical components, such as refractive, reflective, magnetic, electromagnetic, electrostatic or other types of optical components, or any combination thereof, for directing, shaping, or controlling radiation. The support structure MT holds the patterning device MA. It holds the patterning device MA in a manner that depends on the orientation of the patterning device MA, the design of the lithographic apparatus, and other conditions, such as for example whether or not the patterning device MA is held in a vacuum environment. The support structure MT can use mechanical, vacuum, electrostatic or other clamping techniques to hold the patterning device MA. The support structure MT may be a frame or a table, for example, which may be fixed or movable as required.
The support structure MT may ensure that the patterning device MA is at a desired position, for example with respect to the projection system PS. Any use of the terms “reticle” or “mask” herein may be considered synonymous with the more general term “patterning device.” The term “patterning device” used herein should be broadly interpreted as referring to any device that can be used to impart a radiation beam with a pattern in its cross-section such as to create a pattern in a target portion of the substrate. It should be noted that the pattern imparted to the radiation beam may not exactly correspond to the desired pattern in the target portion of the substrate, for example if the pattern includes phase-shifting features or so called assist features. Generally, the pattern imparted to the radiation beam will correspond to a particular functional layer in a device being created in the target portion, such as an integrated circuit. The term “projection system” used herein should be broadly interpreted as encompassing any type of projection system, including refractive, reflective, catadioptric, magnetic, electromagnetic and electrostatic optical systems, or any combination thereof, as appropriate for the exposure radiation being used, or for other factors such as the use of an immersion liquid or the use of a vacuum. Any use of the term “projection lens” herein may be considered as synonymous with the more general term “projection system”. As here depicted, the apparatus is of a transmissive type (e.g. employing a transmissive mask). Alternatively, the apparatus may be of a reflective type (e.g. employing a programmable mirror array of a type as referred to above, or employing a reflective mask). The lithographic apparatus may be of a type having two or more tables (or stage or support), e.g., two or more support tables or a combination of one or more support tables and one or more cleaning, sensor or measurement tables. 
For example, in an embodiment, the lithographic apparatus is a multi-stage apparatus comprising two or more tables located at the exposure side of the projection system, each table comprising and/or holding one or more objects. In an embodiment, one or more of the tables may hold a radiation-sensitive substrate. In an embodiment, one or more of the tables may hold a sensor to measure radiation from the projection system. In an embodiment, the multi-stage apparatus comprises a first table configured to hold a radiation-sensitive substrate (i.e., a support table) and a second table not configured to hold a radiation-sensitive substrate (referred to hereinafter generally, and without limitation, as a measurement, sensor and/or cleaning table). The second table may comprise and/or may hold one or more objects, other than a radiation-sensitive substrate. Such one or more objects may include one or more selected from the following: a sensor to measure radiation from the projection system, one or more alignment marks, and/or a cleaning device (to clean, e.g., the liquid confinement structure). In such “multiple stage” (or “multi-stage”) machines the multiple tables may be used in parallel, or preparatory steps may be carried out on one or more tables while one or more other tables are being used for exposure. The lithographic apparatus may have two or more patterning device tables (or stages or supports) which may be used in parallel in a similar manner to substrate, cleaning, sensor and/or measurement tables. Referring toFIG.1, the illumination system IL receives a radiation beam from a source SO of radiation. The source SO and the lithographic apparatus may be separate entities, for example when the source SO is an excimer laser. 
In such cases, the source SO is not considered to form part of the lithographic apparatus and the radiation beam is passed from the source SO to the illumination system IL with the aid of a beam delivery system BD comprising, for example, suitable directing mirrors and/or a beam expander. In other cases the source SO may be an integral part of the lithographic apparatus, for example when the source SO is a mercury lamp. The source SO and the illumination system IL, together with the beam delivery system BD if required, may be referred to as a radiation system. The illumination system IL may comprise an adjuster AD for adjusting the angular intensity distribution of the radiation beam. Generally, at least the outer and/or inner radial extent (commonly referred to as σ-outer and σ-inner, respectively) of the intensity distribution in a pupil plane of the illumination system IL can be adjusted. In addition, the illumination system IL may comprise various other components, such as an integrator IN and a condenser CO. The illumination system IL may be used to condition the radiation beam, to have a desired uniformity and intensity distribution in its cross-section. Similar to the source SO, the illumination system IL may or may not be considered to form part of the lithographic apparatus. For example, the illumination system IL may be an integral part of the lithographic apparatus or may be a separate entity from the lithographic apparatus. In the latter case, the lithographic apparatus may be configured to allow the illumination system IL to be mounted thereon. Optionally, the illumination system IL is detachable and may be separately provided (for example, by the lithographic apparatus manufacturer or another supplier). The projection beam is incident on the patterning device MA, which is held on the support structure MT, and is patterned by the patterning device MA. 
Having traversed the patterning device MA, the projection beam passes through the projection system PS, which focuses the beam onto a target portion C of the substrate W. With the aid of the second positioner PW and position sensor IF (e.g. an interferometric device, linear encoder or capacitive sensor), the support table WT can be moved accurately, e.g. so as to position different target portions C in the path of the radiation beam B. Similarly, the first positioner PM and another position sensor (which is not explicitly depicted inFIG.1) can be used to accurately position the patterning device MA with respect to the path of the projection beam, e.g. after mechanical retrieval from a mask library, or during a scan. In general, movement of the support structure MT may be realized with the aid of a long-stroke module (coarse positioning) and a short-stroke module (fine positioning), which form part of the first positioner PM. Similarly, movement of the support table WT may be realized using a long-stroke module and a short-stroke module, which form part of the second positioner PW. In the case of a stepper (as opposed to a scanner) the support structure MT may be connected to a short-stroke actuator only, or may be fixed. Patterning device MA and substrate W may be aligned using patterning device alignment marks M1, M2and substrate alignment marks P1, P2. Although the substrate alignment marks P1, P2as illustrated occupy dedicated target portions, they may be located in spaces between target portions C (these are known as scribe-lane alignment marks). Similarly, in situations in which more than one die is provided on the patterning device MA, the patterning device alignment marks may be located between the dies. 
Although specific reference may be made in this text to the use of lithographic apparatus in the manufacture of ICs, it should be understood that the lithographic apparatus described herein may have other applications in manufacturing components with microscale, or even nanoscale, features, such as the manufacture of integrated optical systems, guidance and detection patterns for magnetic domain memories, flat-panel displays, liquid-crystal displays (LCDs), thin-film magnetic heads, etc. Arrangements for providing immersion liquid between a final element of the projection system PS and the substrate can be classed into three general categories. These are the bath type arrangement, the so-called localized immersion system and the all-wet immersion system. In a bath type arrangement substantially the whole of the substrate W and optionally part of the support table WT is submersed in a bath of immersion liquid. In the all-wet immersion system the whole top surface of the substrate is covered in immersion liquid. A localized immersion system uses a liquid supply system in which immersion liquid is only provided to a localized area of the substrate. The region filled by immersion liquid is smaller in plan than the top surface of the substrate and the region filled with immersion liquid remains substantially stationary relative to the projection system PS while the substrate W moves underneath that area.FIGS.2and3show different fluid handling structures which can be used in such a system. A sealing feature is present in a bottom surface20to seal immersion liquid to the localized area. One way which has been proposed to arrange for this is disclosed in PCT patent application publication no. WO 99/49504. An arrangement which has been proposed is to provide the fluid handling structure with a structure which extends along at least a part of a boundary of a space between the final element of the projection system PS and the support table WT. 
Such an arrangement is illustrated inFIG.2. FIG.2schematically depicts a localized fluid handling structure12. The fluid handling structure12extends along at least a part of a boundary of the space11between the final element of the projection system PS and the support table WT or substrate W. (Please note that reference in the following text to surface of the substrate W also refers in addition or in the alternative to a surface of the support table WT or an object, such as a sensor, on the support table WT, unless expressly stated otherwise.) The fluid handling structure12is substantially stationary relative to the projection system PS in the XY plane though there may be some relative movement in the Z direction (in the direction of the optical axis). In an embodiment, a seal is formed between the fluid handling structure12and the surface of the substrate W and may be a contactless seal such as a gas seal (such a system with a gas seal is disclosed in European patent application publication no. EP 1,420,298 A) or liquid seal. The fluid handling structure12at least partly confines immersion liquid in the space11between the final element of the projection system PS and the substrate W. A contactless seal to the substrate W may be formed around the image field of the projection system PS so that immersion liquid is confined within the space11between the substrate W surface and the final element of the projection system PS and more generally to a region including between the fluid handling structure12and the substrate W adjacent the space11. The space11is at least partly formed by the fluid handling structure12positioned below and surrounding the final element of the projection system PS. Immersion liquid is brought into the space11below the projection system PS and within the fluid handling structure12by one of liquid openings3. The immersion liquid may be removed by another of liquid openings3. 
The immersion liquid may be brought into the space11through at least two liquid openings3. Which of liquid openings3is used to supply the immersion liquid and optionally which is used to remove the immersion liquid may depend on the direction of motion of the support table WT. The fluid handling structure12may extend a little above the final element of the projection system PS. The liquid level rises above the final element so that a buffer of immersion liquid is provided. In an embodiment, the fluid handling structure12has an inner periphery that at the upper end closely conforms to the shape of the projection system PS or the final element thereof and may, e.g., be round. At the bottom, the inner periphery closely conforms to the shape of the image field, e.g., rectangular, though this need not be the case. The immersion liquid may be confined in the space11by a gas seal16which, during use, is formed between the bottom surface20of the fluid handling structure12and the surface of the substrate W. The surface20faces the substrate W and a seal is formed between that surface20and the substrate W. An aperture15is formed in the fluid handling structure12for the passage there through of the projection beam through immersion liquid in the space11. The gas seal16is formed by gas. The gas in the gas seal16is provided under pressure via inlet25to the gap between the fluid handling structure12and substrate W. The gas is extracted via outlet14. The overpressure on the gas inlet25, vacuum level on the outlet14and geometry of the gap are arranged so that there is a high-velocity gas flow inwardly that confines the immersion liquid. The force of the gas on the immersion liquid between the fluid handling structure12and the substrate W confines the immersion liquid in the space11. The inlets/outlets may be annular grooves which surround the space11. The annular grooves may be continuous or discontinuous. The flow of gas is effective to confine the immersion liquid in the space11. 
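The pressure balance described above can be illustrated with a rough estimate. In the sketch below (all numbers are illustrative assumptions, not taken from the disclosure), a simple Bernoulli relation is used to estimate the inward gas velocity in the seal gap from the inlet overpressure, neglecting viscous losses in the narrow gap:

```python
import math

def gas_seal_velocity(overpressure_pa, gas_density_kg_m3=1.2):
    """Estimate the gas velocity in the seal gap from the inlet
    overpressure via Bernoulli: dp = 0.5 * rho * v**2.
    Viscous losses in the narrow gap are neglected."""
    return math.sqrt(2.0 * overpressure_pa / gas_density_kg_m3)

# Illustrative numbers only: a 500 Pa overpressure in air.
print(f"estimated gap velocity: {gas_seal_velocity(500.0):.1f} m/s")  # ~28.9 m/s
```

Even this crude estimate shows that a modest overpressure can sustain a gas flow of tens of metres per second in the gap, consistent with a "high-velocity gas flow" confining the immersion liquid.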
Such a system is disclosed in United States patent application publication no. US 2004-0207824, which is hereby incorporated by reference in its entirety. In an embodiment, the fluid handling structure12does not have the gas seal16. FIG.3illustrates schematically features formed in the surface20of an alternative fluid handling structure12. The surface20includes features to adapt the surface20for the extraction of the immersion liquid from the region.FIG.3illustrates schematically and in plan meniscus controlling features of a fluid handling structure12which may have outlets using the gas drag principle and to which an embodiment of the present invention may relate. The features of a meniscus controlling feature are illustrated which may, for example, replace the meniscus controlling features depicted by the gas seal16, provided by the inlet25and the outlet14inFIG.2. The meniscus controlling feature ofFIG.3is a form of extractor, for example a dual phase extractor. The meniscus controlling feature comprises a plurality of discrete openings50in the surface20of the fluid handling structure12. Thus, the surface20is adapted for the extraction of immersion fluid from the region. Each discrete opening50is illustrated as being circular, though this is not necessarily the case. Indeed, the shape is not essential and one or more of the discrete openings50may be one or more selected from: circular, elliptical, rectilinear (e.g. square, or rectangular), triangular, etc., and one or more openings may be elongate. Radially inwardly of the discrete openings50and also in the surface20of the fluid handling structure12are a plurality of inlet openings13. Immersion liquid is provided through inlet openings13to the region to which immersion liquid is provided. Inlet openings13surround the space11which is bounded by the aperture15formed in the fluid handling structure12. There may be no meniscus controlling features radially inwardly of the openings50. 
The meniscus320is pinned between the discrete openings50with drag forces induced by gas flow into the discrete openings50. A gas drag velocity of greater than about 15 m/s, desirably about 20 m/s is sufficient. The amount of splashing or leaking of fluid from the substrate W may be reduced, thereby reducing evaporation of fluid and thereby reducing thermal expansion/contraction effects. Various geometries of the bottom of the fluid handling structure are possible. For example, any of the structures disclosed in U.S. patent application publication no. US 2004-0207824 or U.S. patent application publication no. US 2010-0313974 could be used in an embodiment of the present invention. An embodiment of the invention may be applied to a fluid handling structure12which has any shape in plan, or has a component such as the outlets arranged in any shape. Such a shape in a non-limiting list may include an ellipse such as a circle, a rectilinear shape such as a rectangle, e.g. a square, or a parallelogram such as a rhombus or a cornered shape with more than four corners such as a four or more pointed star, for example, as depicted inFIG.3. Known lithographic apparatus may comprise a fluid handling structure12comprising a gas knife. The gas knife can be used to help confine immersion fluid to the space11. Therefore, the gas knife can be useful in preventing immersion fluid from escaping from the space11, which could later lead to defects. Providing a strong gas knife is useful in preventing film pulling. This is because a strong gas knife will reduce or prevent the amount of immersion fluid which is dragged behind the fluid handling structure12. Additionally a strong gas knife may break up the film faster to reduce the amount of immersion fluid left behind the fluid handling structure12. The fluid handling structure12is configured to confine immersion fluid to a region and comprises a gas knife system. The gas knife system may be configured to generate a gas knife in use. 
The gas knife may be radially outward of the space11and may contribute to confining the immersion fluid. The gas knife system comprises passages each having an exit60. The gas knife may be formed by gas exiting the exits60in use. The exits60form at least one side of a shape in plan view. The exits60may form at least one, multiple or all the sides of the shape in plan view. For example, the exits60may form the sides of a four pointed star as shown inFIG.3. The shape may have a plurality of sides, for example any appropriate number of sides may be provided, e.g. 3, 4, 5, 6, 7, 8, 9, 10 or more. As described above, the exits60may form the sides of any shape and this is not limiting.FIG.3depicts the scanning direction110as in-line with two of the points of the four point star but this may not be the case. The shape formed by the gas knife may be aligned with the scanning direction110in any selected orientation. In the embodiment ofFIG.3it can be seen that there is fluid flow out of the surface20through liquid openings13(immersion liquid) and through exits60(a flow of gas to form a gas knife). There is also a flow of fluid into the surface20(a mixture of gas and immersion liquid) into discrete openings50. In an embodiment, the fluid handling structure12is adapted to provide a fluid flow into or out of the surface20(e.g. flow out of fluid openings13, flow into discrete openings50, flow out of exits60). The fluid handling structure12is adapted to change position of the fluid flow into or out of the surface20relative to the aperture15, for example in the radial direction. The faster the relative movement between the fluid handling structure12and the substrate W, the more likely leaking of immersion liquid is to occur. However, it is desirable to maximise the relative speed between the fluid handling structure12and the substrate W in order to maximise throughput of the apparatus. 
Therefore, it is desirable to increase the maximum relative speed between the fluid handling structure12and the substrate W without allowing any immersion liquid or only small amounts of immersion liquid to be left behind on the substrate W. By changing the position of a fluid flow relative to the aperture15, it is possible to increase the relative speed of the substrate W to the projection system PS which is achievable before immersion liquid is left behind on the surface of the substrate W. In an embodiment, the fluid flow(s) associated with a meniscus controlling feature can be moved in the same direction as the substrate W, thereby reducing the relative velocity between the flows associated with the meniscus controlling feature and the substrate W. Thus, in an embodiment the change in position of the fluid flow means that the position on which the fluid flow impinges on the surface of the substrate W moves with a reduced speed relative to the substrate W, compared to the case where there is no relative movement of the position of the fluid flow into or out of the surface relative to the aperture15. This reduction in relative velocity means that liquid loss is less likely and/or that a higher speed of the substrate W relative to the projection system PS is usable before immersion liquid is left behind on the substrate W. In an embodiment allowing the change of position of the fluid flow into or out of the surface20relative to the aperture allows for the alignment of the shape, in plan, of fluid flows associated with a meniscus controlling feature in a particular direction. For example, the shape, in plan, may be a cornered shape and a corner of the shape may be turned always to face the oncoming substrate which is optimal in terms of confining immersion liquid. 
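The velocity argument above reduces to simple arithmetic. A minimal sketch, with assumed speeds, of the relative speed between the substrate W and the impingement position of a fluid flow that is moved in the same direction as the substrate:

```python
def impingement_relative_speed(substrate_speed_m_s, flow_position_speed_m_s):
    """Relative speed between the substrate and the point where a fluid
    flow impinges on it, when the flow position is moved in the same
    direction as the substrate."""
    return abs(substrate_speed_m_s - flow_position_speed_m_s)

# Illustrative (assumed) speeds, in m/s:
print(impingement_relative_speed(1.0, 0.0))  # 1.0 (flow position fixed)
print(impingement_relative_speed(1.0, 0.4))  # flow position moved with the substrate
```

With the flow position co-moving at 0.4 m/s, the effective speed seen by the meniscus drops from 1.0 m/s to 0.6 m/s, so a correspondingly higher substrate speed can be used before liquid is lost.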
Specifically, such a system might allow the alignment of the corners of the overall shape, in plan, made by the openings50ofFIG.3with the actual movement direction of the substrate W relative to the projection system PS, rather than the fixed alignment with the scanning direction110as illustrated inFIG.3. This would allow a greater speed of non-scanning movements of the substrate W without liquid loss than is possible with the fixed fluid handling structure12ofFIG.3. This thus allows alignment of features for confining the immersion liquid to the region to be optimally aligned relative to the direction of travel of the substrate W under the fluid handling structure12. In an embodiment the change of position allows several passes of a fluid flow over a particular region of the substrate W such that any immersion liquid which does manage to pass a meniscus controlling feature closer to the aperture15can be contained by the fluid flow. A change in position of the fluid flow is achieved by allowing a first part of the fluid handling structure12to move relative to a second part of the fluid handling structure12. The movement may be controlled by controller500. The change in position is achieved by arranging for one of the first part and second part to comprise at least one through-hole for the fluid flow there through. The other of the first part and second part comprises at least one opening for the fluid flow there through. The at least one through-hole and at least one opening are in fluid communication when they are aligned. By moving the first part and second part relative to one another, the at least one opening will align with a different one of the at least one through hole. In this way the fluid flow can be selected to be out of different of the at least one through-holes, thereby to change the position of the fluid flow into or out of the surface relative to the aperture. 
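The through-hole/opening selection described above can be modelled as an angular alignment test. The toy sketch below (the hole layout and the arc-shaped opening geometry are assumed purely for illustration) returns which through-holes of the moving part are in fluid communication with a stationary opening for a given rotation:

```python
def active_through_holes(hole_angles_deg, opening_start_deg,
                         opening_span_deg, rotation_deg):
    """Return the through-hole angles (in the moving part's own frame)
    that line up with a stationary arc-shaped opening after the moving
    part has rotated by rotation_deg."""
    active = []
    for a in hole_angles_deg:
        pos = (a + rotation_deg) % 360.0          # hole angle in the fixed frame
        rel = (pos - opening_start_deg) % 360.0   # angle past the opening's start
        if rel <= opening_span_deg:
            active.append(a)
    return active

holes = [0, 90, 180, 270]  # assumed hole layout, degrees
print(active_through_holes(holes, 80, 30, 0))   # [90]
print(active_through_holes(holes, 80, 30, 90))  # [0]
```

Rotating the part thus selects a different through-hole for the same stationary opening, changing where the fluid flow enters or leaves the surface.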
Such an arrangement is advantageous because, without substantially increasing the volume of the fluid handling structure12, it is possible efficiently to adapt the fluid handling structure12to allow a fluid flow into or out of the surface20to change its position relative to the aperture15. Additionally any support for the fluid handling structure12for holding the fluid handling structure12in position relative to the projection system PS can be attached to the first part of the fluid handling structure12which can be arranged not to move relative to the projection system PS. This greatly simplifies supporting of the fluid handling structure12compared to the case where the whole fluid handling structure12rotates or moves in order to change the position of a fluid flow relative to the projection system PS. Additionally instead of moving the whole of the fluid handling structure12, by just moving a part of the fluid handling structure12, the overall mass of the moving part is lower than the case where the whole fluid handling structure12is moved. Generally a chamber with a larger cross-sectional area than the opening is provided in the first or second part which comprises the at least one opening. The chamber smooths the flow through the opening by absorbing pressure variations. In preferred embodiments the chamber is provided in the non-moving part so that, advantageously, the part which defines the chamber does not need to move, thereby reducing moving mass. The part with the at least one through-hole in it may be kept relatively thin and thereby keep its mass low, meaning that actuation for the movement is simpler. In an embodiment the movement of the first part relative to the second part includes a rotational movement of the first part relative to the second part. In an embodiment the movement of the first part relative to the second part includes a translational movement of the first part relative to the second part. 
An embodiment of the present invention will now be described by way of example only with reference toFIG.4. Various openings, inlets, exits etc. for the flow of fluid into/out of the fluid handling structure12are described in this and other embodiments. The openings, inlets, exits etc. may be in the form of one or more openings, inlets, exits etc. with any cross-sectional shape and in any number. For example the openings, inlets, exits etc. may be in the form of a single slit instead of separate holes as illustrated.FIG.4is a schematic perspective diagram of the surface20of the fluid handling structure12. The fluid handling structure ofFIG.4is the same as that ofFIG.3except as described below. The fluid handling structure12comprises discrete openings50and exits60a, b, cwith the same function as the discrete openings50and exits60ofFIG.3. The discrete openings50in the surface20, which are for the extraction of the immersion liquid, are closer to the aperture15than the exits60. The fluid handling structure12comprises a first part100and a second part200. The first part100is mounted substantially stationary relative to the projection system PS. The second part200sits in a groove formed in the bottom of the first part100such that the surface20(which is the under surface of the fluid handling structure12) is formed partly by the first part100and partly by the second part200. Bottom surfaces of the first part100and second part200are substantially co-planar surfaces and together define the surface20through which immersion liquid is extracted. The fluid handling structure12is adapted such that the second part200may rotate relative to the first part100. The rotational movement is around a central axis of the fluid handling structure12. In an embodiment the central axis of the fluid handling structure12is coincident with a central axis of the projection system PS of the lithographic apparatus. The exits60are in the form of outlet openings in the surface20. 
The fluid flow out of the exits60a, b, cpasses out of the surface20along at least one line, in plan, because the exits60a, b, care arranged along a line, in plan. The fluid (e.g. gas) flow out of exits60a, b, cis in the form of a gas knife. In the embodiment illustrated inFIG.4, the exits60a, b, cform three lines, in plan. However, the present embodiment is not limited to three lines and any number of lines may be formed by the exits60a, b, c. The lines of exits60a, b, care formed such that the fluid flow out of them is effective to manipulate droplets of the immersion liquid towards the aperture15due to the change in position of the fluid flow. This is illustrated schematically inFIG.5which is a plan view of the surface20illustrated inFIG.4except that the exits60are formed into five blade shaped lines, in plan (60a,60b,60c,60d,60e). A droplet of immersion liquid62is illustrated being pushed along by the gas flow out of the line of exits60eand swept towards the discrete openings50. The rotation of the second part200acts like a diode comprising a plurality of gas knives. The gas knives bulldoze all immersion liquid which escapes radially inner confinement features back radially inwardly into the region where immersion liquid is supposed to be. The immersion liquid can then be extracted by the extraction features such as the discrete openings50. The lines of exits60a-eextend in the circumferential direction and also get progressively further from the aperture15along their length. The second part200is rotated such that, at a given fixed angular position relative to the projection system PS or aperture15, the radial position at which a given line crosses that angular position moves closer to the projection system PS and aperture15as the line passes. The direction of rotation of the second part200relative to the first part100is illustrated by arrow63. 
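The geometry of such a line, extending circumferentially while moving radially outward along its length, can be pictured as an Archimedean-style spiral. In the toy model below (the inner radius r0 and pitch k are assumed values, not from the disclosure), rotating the second part makes the radius at which the line crosses a fixed angular position shrink, which is what sweeps droplets inward:

```python
def blade_radius(theta_rad, r0=0.05, k=0.01):
    """Radial position along a blade line: the line moves outward by
    k metres per radian of angle along its length (assumed spiral)."""
    return r0 + k * theta_rad

def radius_at_fixed_azimuth(azimuth_rad, rotation_rad, r0=0.05, k=0.01):
    """Radius at which the blade crosses a fixed azimuth after the
    moving part has rotated by rotation_rad; rotation carries outer
    portions of the line past the azimuth first, so the crossing
    radius decreases with rotation."""
    return blade_radius(azimuth_rad - rotation_rad, r0, k)

# As the part rotates, the crossing radius at azimuth 1 rad shrinks:
for rot in (0.0, 0.2, 0.4):
    print(round(radius_at_fixed_azimuth(1.0, rot), 4))
# 0.06, 0.058, 0.056 -> the flow position moves toward the aperture,
# pushing any stray droplet radially inward.
```

The inward sweep speed of the flow position at a fixed azimuth is simply k times the angular speed of the second part.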
The movement of the line relative to the droplet62(which would be on the substrate W under the fluid handling structure12) would result in the droplet62moving in direction64towards the aperture15. The movement of the droplet62is due to the movement of the position of the fluid flow exiting the exits60erelative to the aperture15. As the droplet62moves towards the aperture15, it will be extracted through one of the discrete openings50. Even if the droplet62moves past the line of exits60e, it will then be caught by the next line of exits60dwhich will exert a force on the droplet62in the same way as described above in relation to the force exerted on the droplet62by the line of exits60e. In this way any droplets of immersion liquid which move past the discrete openings50in the direction away from the aperture15are swept by the fluid flow out of the line of exits60a, b, c, d, eback towards the openings50. As illustrated inFIGS.4and5, the lines of exits60a-emay not be straight and may be for example curved. The lines of exits60a-emay or may not overlap in the circumferential direction. Although not illustrated, openings13may be provided in the surface20in the first part100adjacent to the aperture15. As illustrated inFIG.5, liquid openings13are present in the first part100radially inwardly of the second part200. The liquid openings13are optional in this embodiment and have the same function as the liquid openings13in the fluid handling structure12ofFIG.3. The second part200is frictionlessly suspended by the first part100. The second part200may rotate relative to the first part100at a speed of, for example, between 0.1 and 100 Hz, preferably between 1 and 10 Hz, say a few Hertz. If the rotation speed is kept constant, there are advantageously no acceleration or deceleration forces introduced to the apparatus. 
The movement of the second part200relative to the first part100may be driven pneumatically or hydraulically.FIG.6is a cross section of the fluid handling structure12ofFIGS.4and5. Vanes120are illustrated for pneumatic or hydraulic actuation. Pneumatic actuation may be using the same gas as the gas used in the exit60, for example carbon dioxide. A hydraulic actuation may use the same liquid as the immersion liquid, for example by pumping immersion liquid past vanes120. Other ways of moving the second part200relative to the first part100include using a (servo/stepper) motor, optionally in combination with gears, a belt or a rod. Using a belt or a rod to transmit motion from a motor to the second part200may be advantageous because it is then possible to place the motor at a location distant from the second part200. This may be particularly advantageous in cases where the fluid handling structure12has a limited volume. A rotation sensor (not depicted in the figure) detecting the speed of rotation of the first part100relative to the second part200may be provided. A flow sensor (not depicted in the figure) measuring the speed of immersion liquid/gas during the rotation may be provided. The sensors may be used by a controller500to control the rotational speed of the second part200relative to the first part100in a closed loop. As illustrated inFIG.6, in an embodiment an outer bearing130and an inner bearing140are provided between the first part100and the second part200. The inner bearing140may be a liquid bearing, for example using the same liquid as is in the space11. The outer bearing130may be a gas bearing, for example using the same gas as the gas exiting exit60. Only one bearing130,140may be provided and the bearing may be any type of contactless and/or frictionless bearing including pneumatic and hydraulic as described above. 
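The closed loop mentioned above could, for example, be a simple PI controller acting on the measured rotation rate. A minimal sketch follows; the gains, time step and the PI structure itself are assumptions for illustration, not taken from the disclosure:

```python
def pi_speed_controller(target_hz, measured_hz, integral_state,
                        kp=0.5, ki=0.1, dt=0.001):
    """One step of a PI loop driving the second part's rotation rate
    toward target_hz. Returns (actuator_command, new_integral_state)."""
    error = target_hz - measured_hz
    integral = integral_state + error * dt
    return kp * error + ki * integral, integral

# Running 1 Hz below a 5 Hz setpoint: the command is positive,
# i.e. the actuator speeds the part up.
cmd, integ = pi_speed_controller(5.0, 4.0, 0.0)
print(round(cmd, 4))  # 0.5001
```

Holding the commanded speed constant once the loop settles matches the remark above that a constant rotation speed introduces no acceleration or deceleration forces into the apparatus.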
Removing the inner bearing140and only having an outer bearing130may be advantageous because more space is available at the outer side of the second part200than on the inner side. As a result the size of the rotating second part200may be reduced as no space is required for the inner bearing140. This would lead to a smaller shielding gas (e.g. carbon dioxide) footprint because of the smaller radius of the rotating second part200. Additionally this would mean that fewer exits60are required. However, the stiffness of the second part200might need to be increased compared to the case where an inner bearing140is provided. The embodiments ofFIGS.4-6illustrate the discrete openings50being provided in the second part200. However, this need not be the case and in an embodiment the discrete openings50are provided in the first part100. The arrangement of the embodiments ofFIGS.4-6may be advantageous as the relative speed between the discrete openings50and the substrate W under the fluid handling structure12is lower meaning that lower forces act on the meniscus due to the relative motion between the openings50and the substrate W. As a result liquid loss from the region past the discrete openings50can be reduced. Additionally the shape the discrete openings50make, in plan, could be a shape other than a circle. For instance, it might be desirable to have the discrete openings in a cornered shape with corners aligned with the scan direction, such as is illustrated inFIG.3. However, from the view point of minimising the size of the fluid handling structure12, the shape made by the discrete openings50in plan is desirably a circle, as illustrated, whether or not the discrete openings50are in the first part100or second part200. As illustrated inFIG.6, an annular chamber55,65is provided for each of the fluid flows out of the second part200. 
For instance, in the embodiment of FIG. 6, an annular chamber 55 is provided which is in fluid communication with the discrete openings 50 via through holes 51 in the second part 200. An under pressure is provided in the chamber 55. This under pressure induces a flow of fluid (immersion liquid and gas) through the discrete openings 50. Similarly, an annular chamber 65 is provided above the exits 60 and in fluid communication with the exits 60 via through holes 61 in the second part 200. An over pressure of gas is supplied to the annular chamber 65. Because the annular chamber 65 is in fluid communication with all of the exits 60a-e, this provides an even flow of gas out of all of the exits 60a-e. The chambers 55 and 65 can be seen as openings in the first part 100 which are in fluid communication with the through holes 51, 61 in the second part 200. In this way the annular chambers 55, 65 are in fluid communication with the discrete openings 50 and exits 60 in the surface 20. The annular chamber 55 is a relatively large chamber compared to the combined volume of the through holes 51. This allows pressure fluctuations, present due to immersion liquid passing through the through holes 51, to be evened out. A similar principle applies to annular chamber 65. Additionally, however, the large surface area of the opening in the first part 100 corresponding to the annular chamber 65 means that each of the exits 60a-e in the surface 20, which are at different radial positions relative to the aperture 15, is in fluid communication with the one annular chamber 65. In an embodiment, centrifugal forces can be used to separate the immersion liquid and gas in the annular chamber 55. This is beneficial because it reduces disturbance forces and evaporation of immersion liquid in the annular chamber 55. The second part 200 may be made by turning instead of milling. This is advantageous because turning is less expensive, faster and more accurate than milling.
Therefore, the manufacturing cost is reduced compared to the fluid handling structure 12 of FIG. 3, which is typically made by milling. In an embodiment the second part 200 is dismountable from the first part 100 in order to aid in cleaning and/or upgrading and/or to change the pattern of the discrete openings 50 and/or exits 60 and/or any other features which may be on the surface 20 of the second part 200. Although not illustrated, outer extraction holes may be formed in the first part 100 radially outward of the second part 200. Outer extraction holes may be connected to an under pressure via a chamber in the body of the first part 100. Outer extraction holes may be used to remove gas from radially inward of the outer extraction holes. In the case of use of a shielding gas other than air (e.g. carbon dioxide) in the gas flow exiting the exits 60a-e, outer extraction holes are desirable. Use of outer extraction holes can ensure that the environment of the lithographic apparatus is not contaminated with dangerous levels of shielding gas. Additionally, allowing shielding gas to escape into the remainder of the lithographic apparatus may be undesirable as it may then reach optical paths of beams of radiation of imaging sensors and/or alignment sensors, which can lead to errors in measurements made. In another embodiment, outer extraction holes may be provided in the second part 200 and connected to an under pressure source in a similar way to discrete openings 50. FIGS. 7-9 show alternative embodiments which work on similar principles to those of the embodiments of FIGS. 4-6. The embodiments of FIGS. 7-9 are the same as the embodiments of FIGS. 4-6 except as described below. In the embodiments of FIGS. 7-9, the inlet openings 13 and exits 60 are formed in the first part 100. The discrete openings 50 are formed in the second part 200. As with all other embodiments, the exact openings which are provided in the first part 100 and which are provided in the second part 200 can vary from the arrangement illustrated.
For example, one or more of the inlet openings 13 and exits 60 may be provided in the rotating second part 200 instead of in the substantially stationary first part 100. Additionally, as with the embodiments of FIGS. 4-6, outer extraction openings may be provided radially outwardly of the exits 60, either in the stationary first part 100 or the rotating second part 200. Although the embodiments of FIGS. 7-9 are explained in relation to the liquid confinement features being in the form of discrete openings 50, the same principle can be used with any type of liquid confinement feature which produces a flow of fluid into or out of the surface 20. As can be seen in FIGS. 7-9, the discrete openings 50 are formed in a non-circular shape, in plan, surrounding the aperture 15. The shape is designed to reduce the relative speed of the substrate W relative to a line normal to the direction between adjacent discrete openings 50. This is effective to reduce the force on the meniscus 320 extending between the discrete openings 50 (rather like the shape of FIG. 3 does). Therefore the second part 200 can be rotated such that the shape is aligned in an optimised direction for the confinement of immersion liquid relative to the direction of travel of the substrate W under the fluid handling structure 12. This arrangement has the benefit over the embodiment of FIG. 3 in that the relative position of the first part 100 to the second part 200 can be adjusted continuously to optimise the orientation of the shape made by the discrete openings 50 relative to the changing direction of travel of the substrate W under the fluid handling structure 12. As in other embodiments, a controller 500 controls the movement of the first part 100 relative to the second part 200. This may be on the basis of a known route of the substrate W under the fluid handling structure 12, or may alternatively or additionally be on the basis of a sensed direction of motion of the substrate W under the fluid handling structure 12.
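The orientation control described above can be sketched numerically. This is an editor's illustration, not part of the disclosure: the convention that the shape's optimal axis initially points along +x is an assumption, and the function simply computes the shortest rotation that aligns that axis with the sensed direction of substrate travel.

```python
import math

# Illustrative sketch: compute the rotation needed to keep the opening
# shape aligned with the substrate's direction of travel, as a
# controller like controller 500 might. The convention that the shape's
# optimal axis starts along +x is an assumption for demonstration.

def required_rotation(current_angle_rad, substrate_velocity):
    """Angle increment (rad) aligning the shape axis with the motion."""
    vx, vy = substrate_velocity
    desired = math.atan2(vy, vx)
    delta = desired - current_angle_rad
    # wrap to (-pi, pi] so the part takes the shortest rotation
    return math.atan2(math.sin(delta), math.cos(delta))
```

For a substrate moving in +y with the shape axis at 0 rad, the sketch returns a quarter-turn; a continuous controller would apply such increments as the sensed direction changes.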
The overall shape the discrete openings 50 of the embodiment of FIGS. 7 and 8 make in plan is an off-axis ellipse or another (routing-)optimized elongate shape. The shape the discrete openings 50 of the embodiment of FIG. 9 make in plan is one with a rounded leading edge which is elongated towards a corner with substantially straight lines at the trailing edge. In both embodiments the number of discrete openings 50 per unit area or length may increase in the region of the trailing edge. These shapes work at the trailing edge on the same principles as the shape of the fluid handling structure 12 of FIG. 3. That is, the relative velocity of the substrate W moving under the fluid handling structure 12 is reduced relative to the direction perpendicular to a direction between adjacent discrete openings 50. As a result, the speed at which immersion liquid is no longer contained by the discrete openings 50 is increased. FIG. 10 shows a further embodiment which is the same as the foregoing embodiments except as described below. In the FIG. 10 embodiment, the aperture 15 is defined by the first part 100. In the embodiment of FIG. 10, the second part 200 is in the form of a plate. The second part 200 forms the entire surface 20. Any flows of fluid into or out of the surface 20 occur through the second part 200. For this purpose the second part 200 has a plurality of through holes 51, 131 in it. The through holes include through holes 131 for the liquid flow through liquid openings 13 and through holes 51 for the discrete openings 50 for the extraction of immersion liquid and gas, thereby to pin the meniscus 320. An annular chamber 55 is provided above the discrete openings 50, as in the foregoing embodiments. Another annular chamber 135 is provided above the liquid openings 13 for the provision of immersion liquid through the liquid openings 13. The annular chamber 135 works on the same principles as the other annular chambers described herein. The second part 200 is translatable in a plane relative to the first part 100.
The plane in which the second part 200 is movable is a plane substantially parallel to the substrate W. The plane is also parallel to the plane of the surface 20. The second part 200 has an aperture 27 in it, radially inwardly of the liquid openings 13. The aperture 27 in the second part 200 is larger than the aperture 15 in the first part 100. This allows movement of the second part 200 relative to the first part 100 without the second part 200 blocking the aperture 15 through which the projection beam passes to image the substrate W. This can most clearly be seen in the embodiment of FIG. 11, which is similar in this respect. The aperture 27 in the second part 200 may be of any shape. In the embodiment of FIG. 11 the shape is, in plan, a rounded square. This shape is chosen as it allows the desired extent of movement of the second part 200 relative to the first part 100 without the second part 200 blocking the aperture 15 of the first part 100. The annular chambers 55, 135 may be seen as having an opening in the first part 100, as in the FIG. 6 embodiment. The openings of the annular chambers 55, 135 and the through holes 51, 131 are in fluid communication when aligned. Relative movement of the second part 200 to the first part 100 results in movement of the through holes 51, 131 relative to the aperture 15. In this way the position of the fluid flow into or out of the surface 20 (i.e. out of liquid openings 13 and into discrete openings 50) changes relative to the aperture 15. In the embodiment of FIG. 10, the cross-sectional area of the at least one through hole 131, 51 is smaller than the cross-sectional area of the openings of the annular chambers 135, 55. This allows relative movement of the second part 200 to the first part 100, thereby moving the position of the through holes 131, 51 relative to the aperture 15 of the first part 100. This movement is achieved whilst maintaining fluid communication between the annular chambers 135, 55 and their associated through holes 131, 51.
In this way a constant flow of fluid into or out of the through holes 131, 51 between the fluid handling structure 12 and the substrate W is possible. In the embodiment of FIG. 10, a bearing is provided between the first part 100 and the second part 200. The bearing may be a frictionless bearing. In one embodiment the bearing is a fluid bearing, for example a liquid bearing or a gas bearing. In a preferred embodiment a radially inward bearing 140 is a liquid bearing. The liquid bearing may use the same liquid as is provided to the immersion space 11. This has the advantage that any leaking of immersion liquid from the bearing 140 into the space 11 or the annular chamber 135 is not deleterious to the immersion liquid in the space 11. A radially outward bearing 130 may be a gas or a liquid bearing. In a preferred embodiment the radially outward bearing 130 is a gas bearing. This is particularly preferred if a further through hole is provided in the second part 200 radially outwardly of the discrete openings 50 for the provision of a gas knife flow of gas onto the substrate W (similar to the embodiment of FIG. 3). The same gas as provided to that gas flow could then be used in the radially outward bearing 130. In the embodiment of FIG. 10, the controller 500 moves the second part 200 in the same direction relative to the first part 100 as the substrate W moves relative to the first part 100. This results in a reduction in the relative velocity between the second part 200 and the substrate W compared to the case where no relative movement between the first part 100 and second part 200 occurs. This reduction in relative velocity between the second part 200 and the substrate W means that the meniscus 320 is more stable and loss of immersion liquid is less likely. The relative movement of the second part 200 relative to the first part 100 may be achieved in any way. In the embodiment of FIG. 10, movement in one direction is achieved by rolling up the second part 200 at one edge.
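The velocity-matching benefit described above reduces to simple arithmetic. The sketch below is illustrative only; the speed values are arbitrary assumptions, not taken from the disclosure.

```python
# Illustrative sketch: moving the second part in the same direction as
# the substrate reduces the relative velocity seen by the meniscus.
# The speed values are arbitrary assumptions for demonstration.

def relative_speed(substrate_v, second_part_v):
    """Relative speed (m/s) between substrate and second part (1-D model)."""
    return abs(substrate_v - second_part_v)

v_substrate = 0.5   # m/s, substrate under the fluid handling structure
v_tracking = 0.3    # m/s, second part driven in the same direction

without_tracking = relative_speed(v_substrate, 0.0)        # stationary second part
with_tracking = relative_speed(v_substrate, v_tracking)    # tracking second part
```

In this toy model the meniscus sees 0.2 m/s instead of 0.5 m/s, which is the sense in which tracking makes the meniscus more stable.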
This is achieved by providing an axle 300 around which the second part 200 is wound. By driving the axle 300 (and a corresponding axle on the opposite side of the fluid handling structure 12 in the opposite direction), movement in one direction is achievable. Movement in the direction perpendicular to the axis of the axle 300 can be achieved by moving the axle 300 along the axle's longitudinal axis. In the embodiment of FIG. 11, a different mechanism to achieve relative movement between the second part 200 and the first part 100 is utilized. This can be used in any other embodiment, including the embodiment of FIG. 10, in place of the rolling-up feature. The actuation may be by use of (linearly) actuated rods or wires, by use of a linear motor (actuator), or in any other way. The use of linear actuators gives more design freedom, lower risk of generation of unwanted particles, better control and more accurate movements. In the case of two or three linear actuators being connected to the second part 200 by rods, the moving mass of the second part 200 is minimised (compared to the rolling-up embodiment illustrated in FIG. 10). In the embodiment of FIG. 11, exits 60 for the flow of gas there through may be provided in the bottom surface of the first part 100 and so are fixed in relation to the aperture 15. Thus, the surface 20 is defined by both the first part 100 and the second part 200, as in the embodiment of FIG. 4. The embodiments of FIGS. 10 and 11 are advantageous because of the low mass of the moving part. That is, the plate which forms the second part 200 can be made relatively thin, meaning that low forces are required to accelerate and decelerate it. Although the embodiments of FIGS. 10 and 11 are illustrated with liquid openings 13 for the provision of immersion liquid there through and discrete openings 50 for the extraction of immersion liquid and gas, other openings and exits may be provided in the second part 200 (with a corresponding chamber). For example, outer extraction openings can be provided.
Another embodiment is illustrated schematically in FIG. 12. The embodiment of FIG. 12 is the same as the FIGS. 10 and 11 embodiments except as described below. In the embodiment of FIG. 12, instead of forming the annular chambers 135, 55 in the first part 100, these are formed in the second part 200. As in the foregoing embodiments, any number of openings 13, 50 and exits 60 for a flow of fluid into and/or out of the surface 20 is possible, but only openings 13 and 50 are illustrated in FIG. 12 for simplicity. In the embodiment illustrated in FIG. 12, a liquid opening 13 for the provision of immersion fluid there out is illustrated, as well as a discrete opening 50 for the extraction of immersion liquid and gas from under the fluid handling structure 12. In the embodiment of FIG. 12, fluid may be provided to and extracted from the annular chambers 135, 55 through hoses extending between the first part 100 and the second part 200. As a result, any bearing between the first part 100 and second part 200 has lower requirements than in the foregoing embodiments, as it does not need to be fluid tight. Therefore the design is simplified compared to the embodiments of FIGS. 4-9, at the expense of increased mass of the second part 200 which moves relative to the substantially stationary first part 100. The embodiment of FIG. 12 has the advantage that there are no moving parts, except for the inner bearing 140 (which is optional), in fluid communication with immersion fluid in the region. Therefore, particles which might be generated by two surfaces moving relative to each other cannot find their way into the immersion space (on the left hand side of FIG. 12). The embodiment of FIG. 13 is the same as the foregoing embodiment except as described below. In the embodiment of FIG. 13, the liquid openings 13 and discrete openings 50 (and exits 60, not illustrated) are provided in the first part 100. The liquid openings 13 and discrete openings 50 (and exits 60) are connected to through holes 131, 51 through the first part 100.
The movable second part 200 is in a cavity in the first part 100 and moves relative to the through holes 131, 51, which are thereby brought into selected fluid communication with annular chambers 135, 55 in the second part 200. In an embodiment, the annular chambers 135, 55 have openings with a cross-sectional area similar to that of the through holes 131, 51 in the first part 100. In another embodiment, the openings of the annular chambers 135, 55 may be in the form of a slit. Thereby, by relative movement of the second part 200 to the first part 100, it is possible to select which of the through holes 131, 51 in the first part 100 are in fluid communication with the annular chambers 135, 55 and so have a fluid flow through them. As a result it is possible to change the position of fluid flow into or out of the surface 20 relative to the aperture 15 by changing which of the liquid openings 13 and discrete openings 50 are in fluid communication with the annular chambers 135, 55. For instance, if the second part 200 is moved relative to the first part 100 to the left as illustrated, the left-hand-most through hole 131 with corresponding opening 13 will be in fluid communication with the inner annular chamber 135, and the innermost discrete opening 50 will be in fluid communication with the outer annular chamber 55. However, the outer liquid opening 13 and outer discrete opening 50 will not be in fluid communication with their respective annular chambers 135, 55 and will therefore have no fluid flow through them.
If the second part 200 is now moved to the right relative to the first part 100, the outer through hole 131 with corresponding liquid opening 13 and outer through hole 51 with corresponding discrete opening 50 will be in fluid communication with the respective annular chamber 135, 55, such that the position of the flow (of immersion liquid) out of the fluid handling structure 12 will have moved radially outwardly with respect to the aperture 15, and the position at which extraction of immersion liquid and gas occurs through the openings 50 will also have moved radially outwardly with respect to the aperture 15. In one embodiment an array of one or both sets of through holes 131, 51 may be provided through the first part 100. The through holes 131, 51 may be in the form, for example, of a microsieve. In a microsieve, through holes are, for example, of a size of the order of 20 μm with a pitch of 30 μm. Therefore, by relative movement of the second part 200 to the first part 100, an almost infinite number of different positions of the fluid flow into or out of the surface 20 can be achieved. This is because the passage leading from the annular chambers 135, 55 can be aligned with any of the through holes in the microsieve, so that it is possible to choose which of the through holes of the microsieve have a fluid flow through them. Such an arrangement is advantageous as it is possible to vary the distances of extraction of fluid and/or provision of fluid relative to the aperture 15 smoothly, depending upon the direction and speed of movement of the substrate W under the fluid handling structure 12. FIG. 14 illustrates an embodiment which is the same as FIG. 13 except as described below. In the embodiment of FIG. 14, the liquid openings 13, exits 60 and outer extraction openings 800 are formed in the first part 100 and are therefore fixed in position relative to the aperture 15. The extraction openings 800 are illustrated as being in a surface which is in a plane further from the substrate W than the plane of surface 20.
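The microsieve hole selection described above can be sketched as a simple 1-D overlap test. The hole size (20 μm) and pitch (30 μm) come from the text; the chamber-opening (slit) width and positions are assumed values, and the model ignores the second dimension for clarity.

```python
# Illustrative sketch: selecting which microsieve through-holes are in
# fluid communication with a chamber opening, given a relative offset
# between the second part and the first part. 1-D model; hole size
# (20 um) and pitch (30 um) are from the text, slit width is assumed.

HOLE_DIAMETER = 20e-6   # m
PITCH = 30e-6           # m

def open_holes(slit_start, slit_width, n_holes):
    """Indices of holes whose extent overlaps the chamber opening."""
    selected = []
    for i in range(n_holes):
        hole_start = i * PITCH
        hole_end = hole_start + HOLE_DIAMETER
        if hole_end > slit_start and hole_start < slit_start + slit_width:
            selected.append(i)
    return selected
```

Shifting `slit_start` models moving the second part relative to the first part: a different subset of holes overlaps the opening, so the radial position of the fluid flow changes smoothly.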
This need not be the case. In an embodiment, extraction openings 800 could be formed in surface 20. The discrete openings 50 are formed by a microsieve with a plurality of through holes 51. The second part 200 sits in a cavity in the first part 100 and moves relative to the through holes 51. The through holes 51 are brought into selected fluid communication with the annular chamber 55 in the second part 200. In this way the radial position at which immersion liquid and/or gas is/are extracted from between the fluid handling structure 12 and the substrate W can be moved. In all embodiments, a controller 500 may be used to control the relative positions of the first part 100 and the second part 200 so as to optimise the positions of the fluid flows into and/or out of the surface 20, thereby to increase the maximum relative speed between the projection system PS and the substrate W before immersion liquid leaks from the region. Other arrangements which are a combination of the above mentioned embodiments are possible. For example, the fluid handling structure 12 may comprise a third part (not illustrated), for example as the bottom of the first part 100 in the embodiment of FIG. 13, which is then movable relative to the second part 200. In this way a wider range of motion is achievable, allowing the relative speed between the projection system PS and the substrate W and/or support table WT to be increased, and thereby even higher throughput to be achieved.
In an embodiment, there is provided a fluid handling structure configured to confine immersion fluid to a region of a lithographic apparatus, the fluid handling structure comprising: an aperture formed therein for the passage there through of a projection beam through the immersion fluid; a first part; and a second part, wherein at least one of the first part and the second part define a surface adapted for the extraction of immersion fluid from the region, wherein the fluid handling structure is adapted to provide a fluid flow into or out of the surface of the fluid handling structure, wherein movement of the first part relative to the second part is effective to change a position of the fluid flow into or out of the surface relative to the aperture, and wherein one of the first part and second part comprises at least one through-hole for the fluid flow there through and the other of the first part and second part comprises at least one opening for the fluid flow there through, the at least one through-hole and at least one opening being in fluid communication when aligned, the movement allowing alignment of the at least one opening with one of the at least one through-hole thereby to change the position of the fluid flow into or out of the surface relative to the aperture. In an embodiment, the movement includes a rotational movement of the first part relative to the second part. In an embodiment, the fluid flow is a fluid flow of gas out of at least one outlet opening in the surface, the at least one outlet opening being an outlet opening of the at least one through-hole. In an embodiment, the at least one outlet opening is arranged such that the fluid flow passes out of the surface along a line, in plan, and the fluid flow is effective to manipulate droplets of the immersion fluid towards the aperture due to the change in position of the fluid flow. 
In an embodiment, the fluid handling structure further comprises at least one inlet opening in the surface for the extraction of the immersion fluid, the at least one inlet opening being closer to the aperture than the at least one outlet opening. In an embodiment, the at least one inlet opening is in the first part and the at least one outlet opening is in the second part and the surface is defined by substantially co-planar surfaces of the first part and the second part. In an embodiment, the fluid flow is a fluid flow of fluid into at least one inlet opening in the surface for the extraction of the immersion liquid, the at least one inlet opening being an inlet opening of the at least one through-hole. In an embodiment, the fluid flow into or out of the surface forms an area thereon in plan and has a shape thereon surrounding the aperture. In an embodiment, the shape is asymmetrical with respect to an axis of the projection beam passing through the aperture. In an embodiment, the second part defines the surface and the first part is positioned on a side of the second part opposite the surface. In an embodiment, the first part defines the surface and the second part is positioned in a cavity in the first part. In an embodiment, a cross-sectional area of the at least one through-hole is smaller than a cross-sectional area of the at least one opening and the at least one through-hole is in the surface such that by moving the second part relative to the aperture, the position of the through-hole relative to the aperture is changed, whilst the at least one opening and at least one through-hole remain in fluid communication. In an embodiment, the first part comprises a plate and a liquid bearing exists between the first part and the second part. In an embodiment, the first part is adapted to be moved relative to the second part by being rolled up at one edge. 
In an embodiment, the first part comprises a plurality of said through-holes and movement of the first part relative to the second part allows the at least one opening to align with different through-holes such that the through-hole through which the fluid flow passes can be changed thereby to change the position of the fluid flow into or out of the surface relative to the aperture. In an embodiment, the plurality of through-holes comprises a two-dimensional array of through-holes and the at least one opening is elongate such that the opening can be aligned and in fluid communication with a plurality of adjacent through-holes of the two-dimensional array. In an embodiment, the fluid handling structure further comprises a third part, wherein both the first part and the second part are moveable relative to the third part. In an embodiment, there is provided an immersion lithographic apparatus comprising the fluid handling system described herein. In an embodiment, the immersion lithographic apparatus further comprises a projection system for projecting the projection beam through the immersion fluid confined to the region by the fluid handling structure onto a substrate supported on a support table. In an embodiment, the immersion lithographic apparatus further comprises a controller for controlling movement of the first part relative to the second part during movement of the support table relative to the projection system such that a relative speed between the fluid flow and the substrate is lower than would be the case with no movement of the first part relative to the second part. As will be appreciated, any of the above-described features can be used with any other feature and it is not only those combinations explicitly described which are covered in this application.
For example, an embodiment of the invention could be applied to the embodiment of FIG. 3. The skilled artisan will appreciate that, in the context of such alternative applications, any use of the terms "wafer" or "die" herein may be considered as synonymous with the more general terms "substrate" or "target portion", respectively. The substrate referred to herein may be processed, before or after exposure, in for example a track (a tool that typically applies a layer of resist to a substrate and develops the exposed resist), a metrology tool and/or an inspection tool. Where applicable, the disclosure herein may be applied to such and other substrate processing tools. Further, the substrate may be processed more than once, for example in order to create a multi-layer IC, so that the term substrate used herein may also refer to a substrate that already contains multiple processed layers. The terms "radiation" and "beam" used herein encompass all types of electromagnetic radiation, including ultraviolet (UV) radiation (e.g. having a wavelength of or about 365, 248, 193, 157 or 126 nm). The term "lens", where the context allows, may refer to any one or combination of various types of optical components, including refractive and reflective optical components. While specific embodiments of the invention have been described above, it will be appreciated that the invention may be practiced otherwise than as described. The descriptions above are intended to be illustrative, not limiting. Thus, it will be apparent to one skilled in the art that modifications may be made to the invention as described without departing from the scope of the claims set out below.
DETAILED DESCRIPTION The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as "beneath," "below," "lower," "above," "upper" and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. Instead of using air as in dry photolithography, immersion lithography uses a liquid between a bottommost lens and a photosensitive layer to increase the resolution of a solubility pattern transferred from a photomask to the photosensitive layer.
The resolution of the solubility pattern transferred onto the photosensitive layer is related to the refractive index of the liquid used in the immersion lithography process. Oftentimes, the liquid used in immersion lithography has a refractive index greater than 1, such as water, for example. The photosensitive layer may be arranged over a semiconductor substrate, and various features are formed on or within the semiconductor substrate based on the solubility pattern transferred from the photomask to the photosensitive layer. Thus, the higher the resolution achieved by the immersion lithography process, the smaller the size of the features formed on or within the semiconductor substrate may be. By reducing the size of features and/or the space between features formed on or within a semiconductor substrate, the size of the overall device is reduced. In immersion lithography, an immersion hood apparatus may comprise input piping used to inject water and output piping used to contain water on an immersion area of the semiconductor substrate. A lithography apparatus comprising a series of lenses, a photomask, and a light source may be arranged over the immersion hood apparatus and are configured to direct light onto the immersion area of the semiconductor substrate. The immersion hood apparatus and the lithography apparatus may stay stationary as the semiconductor substrate is moved around such that the pattern is transferred to the total desired area of the photosensitive layer on the semiconductor substrate. However, in some embodiments, as the semiconductor substrate is moved around, residual liquid (e.g., water) may be left behind on the semiconductor substrate and does not get removed from the semiconductor substrate by the output piping of the immersion hood apparatus. The residual liquid (e.g., water) may damage features on the semiconductor substrate and/or negatively interfere with future processing steps. 
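The resolution benefit of a high-index liquid described above follows the standard Rayleigh criterion, R = k1 * λ / (n * sin θ). This formula and the numerical values below (ArF 193 nm light, k1 = 0.25, sin θ = 0.93, water n ≈ 1.44) are typical textbook assumptions supplied for illustration; they are not taken from the text.

```python
# Illustrative sketch of the standard Rayleigh resolution criterion,
# R = k1 * wavelength / (n * sin(theta)), showing why an immersion
# liquid with refractive index n > 1 improves resolution. All values
# (193 nm, k1 = 0.25, sin(theta) = 0.93, water n = 1.44) are typical
# assumptions, not taken from the text.

def resolution_nm(wavelength_nm, k1, n, sin_theta):
    """Minimum printable half-pitch in nm for the given optics."""
    return k1 * wavelength_nm / (n * sin_theta)

dry = resolution_nm(193.0, 0.25, 1.0, 0.93)    # air between lens and resist
wet = resolution_nm(193.0, 0.25, 1.44, 0.93)   # water immersion
```

With these assumed values the immersion case resolves a finer half-pitch than the dry case by the factor n, which is the sense in which the liquid "increases the resolution".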
Various embodiments of the present disclosure relate to an immersion hood apparatus comprising the input and output piping, as well as extractor piping on outermost sidewalls of the immersion hood apparatus. The extractor piping is configured to remove residual liquid left behind on areas of the semiconductor substrate. In some embodiments, the extractor piping is arranged at an angle favorable to remove the residual liquid up and away from the semiconductor substrate. By reducing or even eliminating residual liquid left behind by immersion lithography, the extractor piping on the outermost sidewalls of the immersion hood apparatus mitigates damage to and increases reliability of features on the semiconductor substrate. FIG. 1 illustrates a cross-sectional view 100 of some embodiments of an immersion hood apparatus comprising extractor piping. The cross-sectional view 100 of FIG. 1 illustrates an immersion hood apparatus 106 arranged over a wafer chuck 102. In some embodiments, the immersion hood apparatus 106 comprises an opening 116, and a lithography apparatus 118 is arranged over the immersion hood apparatus 106 and the opening 116. In some embodiments, the lithography apparatus 118 comprises one or more lenses, a light source, and a photomask between the light source and the one or more lenses. In some embodiments, the wafer chuck 102 is configured to hold a semiconductor substrate 104, which may be or comprise a semiconductor wafer. Further, in some embodiments, a photosensitive layer 105 is arranged over the semiconductor substrate 104. In some embodiments, the immersion hood apparatus 106 comprises input piping 108 configured to inject a liquid 114 between a bottommost surface 118b of the lithography apparatus 118 and portions of the wafer chuck 102 and the photosensitive layer 105 arranged directly below the opening 116 of the immersion hood apparatus 106. In some embodiments, the liquid 114 naturally spreads outwards, away from the opening 116 and the input piping 108 of the immersion hood apparatus 106.
Thus, in some embodiments, the immersion hood apparatus106further comprises output piping110configured to remove the liquid114that travels away from the opening116and the input piping108of the immersion hood apparatus106. In other words, the output piping110is configured to contain the liquid114to an immersion area over the wafer chuck102defined by a perimeter of the output piping110. In some embodiments, the input piping108and the output piping110are arranged on a lower surface of the immersion hood apparatus106. During operation, an immersion lithography process is performed, wherein light from the light source in the lithography apparatus118is directed through the photomask and the one or more lenses to change a solubility of portions of the photosensitive layer105according to the photomask. Because of the liquid114arranged between the lithography apparatus118and the photosensitive layer105, the light also travels through the liquid114. In some embodiments, the liquid114has a refractive index greater than 1. In some embodiments, the liquid114may comprise, for example, water. The liquid114increases the resolution of the light that reaches the photosensitive layer105according to the photomask, thereby reducing the sizes of features formed on the photosensitive layer105and/or the semiconductor substrate104based on the immersion lithography process. In some embodiments, the wafer chuck102is configured to move and to hold onto the semiconductor substrate104while moving such that the immersion lithography process may be conducted on the entire area of the photosensitive layer105. Then, in some embodiments, the photosensitive layer105may be developed, or exposed to a wet etchant, to remove soluble portions of the photosensitive layer105as defined by the photomask and immersion lithography process. In some embodiments, the immersion hood apparatus106further comprises extractor piping112arranged on outermost sidewalls106sof the immersion hood apparatus106. 
The extractor piping112is configured to remove residual liquid that escapes from between the immersion hood apparatus106and the wafer chuck102as the wafer chuck102moves during the immersion lithography process. In some embodiments, the extractor piping112comprises a lower segment112L that extends from the outermost sidewalls106sand towards the opening116of the immersion hood apparatus and comprises an upper segment112U that extends from the lower segment112L and vertically away from the wafer chuck102. The lower segment112L of the extractor piping112is continuously connected to the upper segment112U of the extractor piping112. In some embodiments, the lower segment112L meets the upper segment112U at a first angle A1measured on a side of the extractor piping112that is closest to the outermost sidewalls106sof the immersion hood apparatus106. In some embodiments, the first angle A1is an obtuse angle, and thus, is greater than 90 degrees. In some embodiments, the lower segment112L of the extractor piping112is arranged at a second angle A2with respect to the lower surface of the immersion hood apparatus106. In some embodiments, the second angle A2is an acute angle, and thus, is less than 90 degrees. In such embodiments, the somewhat horizontal and somewhat vertical lower segment112L of the extractor piping112increases the effectiveness of the extractor piping112of removing residual liquid up and away from the photosensitive layer105. In some embodiments, the upper segment112U is arranged at a substantially right angle with respect to the lower surface of the immersion hood apparatus106. Further, in some embodiments, the lower segment112L of the extractor piping112has a first diameter d1, and the upper segment112U of the extractor piping112has a second diameter d2. In some embodiments, the first and second diameter d1, d2are each in a range of between, for example, approximately 0.05 millimeters and approximately 100 millimeters. 
In some embodiments, if the first or second diameters d1, d2are less than 0.05 millimeters, the extractor piping112may be too small to remove the liquid114. In some embodiments, the second diameter d2is greater than or equal to the first diameter d1. Accordingly, in some embodiments, pressure or exhaust velocity in the lower segment112L of the extractor piping112is equal to or greater than pressure or exhaust velocity in the upper segment112U of the extractor piping112. This way, residual liquid that is not arranged directly between the immersion hood apparatus106and the wafer chuck102may be effectively removed by the extractor piping112to reduce defects caused by residual liquid over the photosensitive layer105as the immersion lithography process is conducted. FIG.2illustrates a top-view200of some embodiments of an immersion hood apparatus arranged over a semiconductor substrate. In some embodiments, the immersion hood apparatus106has an overall square-like shape from the top-view200. In some other embodiments, the immersion hood apparatus106may have an overall rectangular-like shape, circular-like shape, or some other shape from the top-view200. In some embodiments, the input piping108and the output piping110of the immersion hood apparatus106may each comprise multiple pipes having openings spaced apart from one another. In some embodiments, the extractor piping112may comprise a continuous opening around the perimeter of the immersion hood apparatus106. In some other embodiments, the extractor piping112may comprise multiple pipes having openings202spaced apart from one another. In some embodiments, the openings202are actually arranged on the outermost sidewall106sof the immersion hood apparatus106, and thus, are illustrated with dotted lines in the top-view200ofFIG.2. In some embodiments, the openings202of each pipe of the extractor piping112may have the first diameter d1. 
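The statement above that exhaust velocity in the lower segment112L is equal to or greater than that in the upper segment112U follows from flow continuity for an incompressible liquid: the volumetric flow rate is the same in both segments, so the narrower bore carries the higher velocity. The flow rate and diameters below are assumed illustrative values (the diameters sit inside the disclosed 0.05 to 100 millimeter range), not figures from the disclosure.

```python
# Sketch of the continuity argument behind d2 >= d1: with volumetric flow Q
# conserved along the pipe, mean velocity v = Q / (cross-sectional area), so
# the narrower lower segment carries the higher velocity.
import math

def segment_velocity(flow_m3_s, diameter_m):
    """Mean flow velocity (m/s) in a circular pipe segment."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return flow_m3_s / area

q = 1.0e-6                 # assumed: 1 cm^3/s of extracted liquid
d1, d2 = 0.5e-3, 1.0e-3    # assumed lower/upper bores, within 0.05-100 mm

v_lower = segment_velocity(q, d1)
v_upper = segment_velocity(q, d2)
assert d2 >= d1 and v_lower >= v_upper
print(f"lower: {v_lower:.2f} m/s, upper: {v_upper:.2f} m/s")
```

Because velocity scales with the inverse square of diameter, doubling the upper bore relative to the lower one makes the lower-segment velocity four times higher at the same flow.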
Further, in some embodiments, an inner perimeter of the immersion hood apparatus106may be defined by the opening116in the immersion hood apparatus106. In some embodiments, the opening116may have an overall square-like shape from the top-view200, whereas in other embodiments, the opening116may have an overall rectangular-like shape, circular-like shape, or some other shape from the top-view200. In some embodiments, during parts of the immersion lithography process, the immersion hood apparatus106may completely and directly overlie the semiconductor substrate104, whereas during other parts of the immersion lithography process, the immersion hood apparatus106may only partially and directly overlie the semiconductor substrate104. Thus, in some embodiments and/or during some times of the immersion lithography process, the extractor piping112completely overlies the semiconductor substrate104, whereas in other embodiments and/or during other times of the immersion lithography process, only some of the extractor piping112directly overlies the semiconductor substrate104. Nevertheless, in some embodiments, the extractor piping112is arranged at the outer perimeter of the immersion hood apparatus106on the outermost sidewalls106sof the immersion hood apparatus106to remove any residual liquid arranged on portions of the photosensitive layer105that are not arranged directly between the immersion hood apparatus106and the semiconductor substrate104or that are not arranged directly between the opening116of the immersion hood apparatus106and the semiconductor substrate104. FIG.3illustrates a perspective view300of some embodiments of an outer sidewall of a portion of an immersion hood apparatus comprising extractor piping. In some embodiments, as in the perspective view300ofFIG.3, the extractor piping112comprises a continuous pipe and opening302on the outermost sidewalls106sof the immersion hood apparatus106. 
FIG.4illustrates a perspective view400of some other embodiments of an outer sidewall of a portion of an immersion hood apparatus comprising extractor piping. In some embodiments, as in the perspective view400ofFIG.4, the extractor piping112comprises multiple pipes with openings202on the outermost sidewalls106sof the immersion hood apparatus106. In such embodiments, the exhaust force or the exhaust velocity of each pipe of the extractor piping112is greater than the exhaust force or the exhaust velocity of the continuous pipe and opening (302ofFIG.3) as illustrated in the embodiment ofFIG.3. Thus, in some embodiments, the extractor piping112comprising multiple pipes with openings202as illustrated inFIG.4has a higher exhaust velocity and thus, is more effective in removing residual water from the photosensitive layer (105ofFIG.1) during the immersion lithography process than extractor piping112that comprises a continuous pipe as illustrated inFIG.3. FIG.5illustrates a cross-sectional view500of some embodiments of an immersion hood apparatus and a lithography apparatus configured to pattern a first area of an underlying photosensitive layer. In some embodiments, the lithography apparatus118comprises a light source510configured to apply light/electromagnetic radiation towards the photosensitive layer105beneath the opening116in the immersion hood apparatus106. In some embodiments, the light source510is configured to apply light having a wavelength in a range of between, for example, approximately 175 nanometers and approximately 200 nanometers. In some embodiments, the lithography apparatus118further comprises a photomask508arranged below the light source510. The photomask508may comprise a solubility pattern having portions that allow light to pass through and other portions that do not allow light to pass through. The light that passes through the photomask508changes the solubility of the photosensitive layer105. 
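The comparison above between a continuous opening (FIG.3) and multiple discrete openings (FIG.4) can be sketched the same way: at the same total extraction flow, the configuration with the smaller total open area yields the higher exhaust velocity. All dimensions below are assumptions chosen for illustration only.

```python
# Sketch comparing a continuous slot opening with multiple discrete circular
# openings at the same total extraction flow: exhaust velocity is total flow
# divided by total open area, so the smaller open area wins on velocity.
import math

def exhaust_velocity(total_flow_m3_s, open_area_m2):
    return total_flow_m3_s / open_area_m2

q = 2.0e-6                                   # assumed total extraction flow
slot_area = 0.10 * 0.5e-3                    # assumed 100 mm x 0.5 mm slot
holes_area = 20 * math.pi * (0.25e-3) ** 2   # assumed 20 holes of 0.5 mm bore

v_slot = exhaust_velocity(q, slot_area)
v_holes = exhaust_velocity(q, holes_area)
assert holes_area < slot_area and v_holes > v_slot
```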
Further, in some embodiments, a series of lenses, such as, a first lens502, a second lens504arranged over the first lens502, and a third lens506arranged over the second lens504, are arranged between the bottommost surface118bof the lithography apparatus118and the photomask508. In some embodiments, the first, second, and third lenses502,504,506focus the light from the light source510that travels through the photomask508towards a first area512on the photosensitive layer105. In some embodiments, the first, second, and third lenses502,504,506are used because the photomask508is larger than the desired solubility pattern to be transferred onto the first area512of the photosensitive layer105. Thus, between the first lens502, the second lens504, the third lens506, and/or the liquid114, the solubility pattern defined by the photomask508is reduced in size and transferred onto the first area512of the photosensitive layer105with a high resolution. In some embodiments, the liquid114is arranged between the bottommost lens (i.e., the first lens502) and the first area512of the photosensitive layer105. In some embodiments, when light travels through the liquid114instead of, for example, air, the resolution of the light and thus, the solubility pattern transferred from the photomask508to the first area512of the photosensitive layer105is increased. FIGS.6-15illustrate various views600-1500of some embodiments of a method of conducting an immersion lithography process, wherein extractor piping on outermost sidewalls of an immersion hood apparatus removes residual water as the immersion lithography process is conducted on different areas over a semiconductor substrate. AlthoughFIGS.6-15are described in relation to a method, it will be appreciated that the structures disclosed inFIGS.6-15are not limited to such a method, but instead may stand alone as structures independent of the method. As shown in cross-sectional view600ofFIG.6, a semiconductor substrate104is provided. 
In some embodiments, the semiconductor substrate104may comprise any type of semiconductor body (e.g., silicon/CMOS bulk, SiGe, SOI, etc.) such as a semiconductor wafer or one or more die on a wafer, as well as any other type of semiconductor and/or epitaxial layers formed thereon and/or otherwise associated therewith. In some embodiments, a photosensitive layer105is formed on the semiconductor substrate104. In some embodiments, the photosensitive layer105comprises a material that may change in solubility when exposed to light having a certain wavelength or within a certain range of wavelengths. In some embodiments, the photosensitive layer105may be formed by way of, for example, a spin-on process or some other deposition process (e.g., physical vapor deposition (PVD), chemical vapor deposition (CVD), atomic layer deposition (ALD), etc.). Further, in some embodiments, other layers and/or features may be arranged between the photosensitive layer105and the semiconductor substrate104, such as, for example, dielectric layers, conductive features (e.g., wires, vias), transistors, or the like. As shown in cross-sectional view700ofFIG.7, the semiconductor substrate104is transported onto a wafer chuck102and is arranged below an immersion hood apparatus106and a lithography apparatus118. In some embodiments, the lithography apparatus118is laterally surrounded by the immersion hood apparatus106and is arranged directly over an opening116of the immersion hood apparatus106. In some embodiments, the lithography apparatus118comprises a light source510, a photomask508, and a first lens502arranged directly over the opening116of the immersion hood apparatus106. In some embodiments, the lithography apparatus118comprises more than one lens, such as, for example, a second lens504and a third lens506in addition to the first lens502. 
In some embodiments, the immersion hood apparatus106comprises input piping108and output piping110arranged on a lower surface of the immersion hood apparatus106and comprises extractor piping112arranged on outermost sidewalls106sof the immersion hood apparatus106. As shown in cross-sectional view800ofFIG.8, in some embodiments, the immersion hood apparatus106is turned “ON” such that a liquid114is injected802onto an immersion area of the photosensitive layer105via the input piping108of the immersion hood apparatus106. In some embodiments, the liquid114has a refractive index greater than 1. In some embodiments, the liquid114comprises, for example, water. In some embodiments, the input piping108injects802the liquid114at a rate of between, for example, approximately 1000 milliliters per minute to approximately 1350 milliliters per minute. In some embodiments, the liquid114completely fills a space directly between the photosensitive layer105and a bottommost surface118bof the lithography apparatus118and/or directly between a lowermost lens (i.e., the first lens502) and the photosensitive layer105. Meanwhile, in some embodiments, as the input piping108injects802the liquid over the photosensitive layer105, the output piping110removes804any of the liquid114that moves away from the input piping108towards an edge of the semiconductor substrate104. In such embodiments, the output piping110may confine the liquid to the immersion area on the photosensitive layer105, as defined by a perimeter of the output piping110. In some embodiments, the output piping110removes804the liquid114at a rate between, for example, approximately 50 milliliters per minute and approximately 100 milliliters per minute. Further, in some embodiments, the input piping108and the output piping110each have a diameter in a range of between, for example, approximately 0.05 millimeters and approximately 100 millimeters. 
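The operating ranges disclosed above (injection rate of the input piping108, removal rate of the output piping110, and piping bore diameters) lend themselves to a simple set-point check. The specific set-points in the sketch are illustrative choices inside the disclosed ranges, not values taken from the text.

```python
# Minimal parameter check against the disclosed operating ranges. The range
# endpoints come from the text; the set-points below are assumed examples.

RANGES = {
    "inject_ml_min": (1000.0, 1350.0),   # input piping 108 injection rate
    "remove_ml_min": (50.0, 100.0),      # output piping 110 removal rate
    "pipe_diameter_mm": (0.05, 100.0),   # input/output piping bore
}

def in_range(name, value):
    """True if the set-point lies within the disclosed range."""
    lo, hi = RANGES[name]
    return lo <= value <= hi

setpoints = {"inject_ml_min": 1200.0,
             "remove_ml_min": 80.0,
             "pipe_diameter_mm": 2.0}
assert all(in_range(k, v) for k, v in setpoints.items())
```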
As shown in cross-sectional view900ofFIG.9, in some embodiments, after the liquid114is arranged between the first lens502and the photosensitive layer105and while the input piping108and the output piping110are continuously injecting802and removing804, respectively, the liquid114, the light source510is turned “ON.” In some embodiments, when the light source510is turned “ON,” light902is directed towards the photomask508. In some embodiments, the light902has a wavelength in a range of between, for example, approximately 175 nanometers and approximately 200 nanometers. It will be appreciated that other wavelengths are also within the scope of this disclosure. In some embodiments, only some of the light902emitted from the light source510is able to pass through the photomask508based on the solubility pattern of the photomask508. Then, in some embodiments, the light902travels through the first, second, and/or third lenses502,504,506and also travels through the liquid114to transfer the solubility pattern of the photomask508to a first area512on the photosensitive layer105. Because the light902travels through the liquid114, the solubility pattern is transferred onto the photosensitive layer105at a higher resolution than if the light902traveled through air before reaching the first area512. FIG.10illustrates a top-view1000of some embodiments of the semiconductor substrate104after the first area512is exposed to the light (902ofFIG.9) according to the photomask (508ofFIG.9). As shown in the top-view1000ofFIG.10, in some embodiments, the first area512is only a small portion of the total area of the photosensitive layer105. In some such embodiments, the steps of the immersion lithography process illustrated inFIGS.8and/or9may be repeated many times to transfer the solubility pattern of the photomask (508ofFIG.9) to the total area of the photosensitive layer105. 
As shown in cross-sectional view1100A ofFIG.11A, in some embodiments, the wafer chuck102is moved1102such that a new area of the photosensitive layer105may be exposed to the solubility pattern of the photomask508by the immersion lithography process. In some embodiments, the wafer chuck102is configured to securely hold onto the semiconductor substrate104and thus, the photosensitive layer105as the wafer chuck102is moved1102. In some embodiments, the immersion hood apparatus106and the lithography apparatus118remain stationary while the wafer chuck102moves. As the wafer chuck102moves1102, the input and output piping108,110continue to inject802and remove804, respectively, the liquid114to confine the liquid114within the immersion area defined by the perimeter of the output piping110. However, in some embodiments, some of the liquid114inevitably escapes from the immersion area, and thus, residual liquid1104escapes to a portion of the photosensitive layer105that is not directly between the immersion hood apparatus106and the wafer chuck102. In such embodiments, the residual liquid1104could damage the photosensitive layer105and/or negatively interfere with future processing steps on the semiconductor substrate104. Thus, in some embodiments, the extractor piping112is arranged on the outermost sidewalls106sof the immersion hood apparatus106to remove residual liquid1104arranged outside of the outer perimeter of the immersion hood apparatus106. In some embodiments, the extractor piping112comprises a lower segment112L and an upper segment112U that meet at a first angle A1that is greater than 90 degrees. Further, in some embodiments, the lower segment112L is arranged at a second angle A2with respect to a lower surface of the immersion hood apparatus106. In some embodiments, the second angle A2is between 0 and 90 degrees. In some embodiments, the lower segment112L of the extractor piping112has a first diameter d1, and the upper segment112U of the extractor piping112has a second diameter d2. 
In some embodiments, the second diameter d2is greater than or equal to the first diameter d1. In some embodiments, the first and second angles A1, A2as well as the relationship between the first and second diameters d1, d2help increase the exhaust velocity of the extractor piping112to increase the rate and effectiveness of removal of the residual liquid1104. As shown in cross-sectional view1100B ofFIG.11B, as or after the wafer chuck102is moving1102, the extractor piping112may remove1106the residual liquid1104from the photosensitive layer105. Thus, in some embodiments, the cross-sectional view1100A ofFIG.11Aillustrates a first time period, and the cross-sectional view1100B ofFIG.11Billustrates a second time period after the first time period. The previous location1104pof the residual liquid1104is illustrated inFIG.11Bwith dotted lines for convenience. In some embodiments, the extractor piping112is continuously running or is always “ON” while the input and output piping108,110are “ON” such that the extractor piping112may remove any residual liquid1104outside of immersion hood apparatus106at any time during the immersion lithography process. In other embodiments, the extractor piping112may be turned “ON” only while the wafer chuck102moves1102. When the extractor piping112is continuously “ON,” even if, in some embodiments, residual liquid1104is left behind on the photosensitive layer105during various steps of the immersion lithography process, the extractor piping112is arranged on outermost sidewalls106sof the immersion hood apparatus106to remove the residual liquid1104, thereby mitigating damage to the photosensitive layer105and/or overall devices formed on or within the semiconductor substrate104. As shown in cross-sectional view1200ofFIG.12, in some embodiments, after moving (1102ofFIG.11B) the wafer chuck102, the light source510is again turned “ON” to transfer the solubility pattern from the photomask508to a second area1202on the photosensitive layer105. 
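The two angle constraints on the extractor piping112described above are linked. Assuming the upper segment112U is exactly perpendicular to the hood's lower surface (the text says "substantially right angle"), the joint angle A1 measured on the sidewall side equals 90 degrees plus the tilt A2 of the lower segment, so an acute A2 automatically gives an obtuse A1. The sketch below expresses that derived relation; it is a geometric consequence under the stated assumption, not a formula recited in the disclosure.

```python
# Geometry sketch (assumption: upper segment 112U is exactly vertical, i.e.
# at a right angle to the hood's lower surface). The lower segment rises at
# acute angle A2 from the lower surface toward the opening, so the angle at
# the joint, measured on the side nearest the outermost sidewalls 106s, is
#   A1 = 90 + A2 degrees,
# which is obtuse for any A2 in (0, 90), matching the text's constraints.

def joint_angle_a1(a2_degrees):
    """Joint angle A1 (degrees) on the sidewall side, for acute tilt A2."""
    if not 0.0 < a2_degrees < 90.0:
        raise ValueError("A2 must be acute")
    return 90.0 + a2_degrees

for a2 in (15.0, 45.0, 75.0):
    assert 90.0 < joint_angle_a1(a2) < 180.0   # A1 obtuse in every case
```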
FIG.13illustrates a top-view1300of some embodiments of the semiconductor substrate104after the second area1202is exposed to the light (902ofFIG.12) according to the photomask (508ofFIG.12). In some embodiments, the second area1202may be arranged next to, but not overlapping, the first area512. It will be appreciated that the size and shape of the first and second areas512,1202from the top-view1300are an example, and that in other embodiments, the size and shape of the first and second areas512,1202may be different than what is illustrated in the top-view1300ofFIG.13. FIG.14illustrates a top-view1400of some embodiments of the photosensitive layer105after the immersion lithography process ofFIGS.8-12, for example, is repeated over the total area of the photosensitive layer105. In some embodiments, the total area of the photosensitive layer105may be defined as the surface area of the top surface of the photosensitive layer105. In some such embodiments, the solubility pattern of the photomask (508ofFIG.12) may be transferred to multiple areas (e.g., the first area512, the second area1202) of the photosensitive layer105such that the total area of the photosensitive layer105comprises soluble regions and insoluble regions. In some other embodiments, the immersion lithography process may only be repeated over part of the total area of the photosensitive layer105. As shown in cross-sectional view1500ofFIG.15, in some embodiments, soluble regions of the photosensitive layer (105ofFIG.14) are removed by a developer or wet etchant such that a patterned photosensitive layer1502is arranged over the semiconductor substrate104. In some embodiments, features of the patterned photosensitive layer1502may have a first width w1that is in a range of between, for example, approximately 1 nanometer and approximately 45 nanometers. Similarly, in some embodiments, the features of the patterned photosensitive layer1502may be spaced apart by a first space distance s1. 
In some embodiments, the first space distance s1 may also be in a range of between, for example, approximately 1 nanometer and approximately 45 nanometers. Accordingly, in some embodiments, the features of the patterned photosensitive layer1502may have a first pitch p1 in a range of between, for example, approximately 2 nanometers and approximately 90 nanometers. It will be appreciated that other values for the first width w1, the first space distance s1, and the first pitch p1 are also within the scope of this disclosure. Further, it will be appreciated that in some embodiments, the method may continue with portions of the semiconductor substrate104and/or layers between the semiconductor substrate104and the patterned photosensitive layer1502being removed according to the patterned photosensitive layer1502and with deposition processes performed to form one or more semiconductor devices on or within the semiconductor substrate104. Because of the extractor piping utilized in the immersion lithography process, features and/or spacing between the features of the semiconductor devices may be relatively small (e.g., less than about 45 nanometers) and damage to such small semiconductor devices is mitigated. FIG.16illustrates a flow diagram of some embodiments of a method1600of performing an immersion lithography process using extractor piping to reduce damage by liquid left behind on a semiconductor substrate from the immersion lithography process. While method1600is illustrated and described below as a series of acts or events, it will be appreciated that the illustrated ordering of such acts or events is not to be interpreted in a limiting sense. For example, some acts may occur in different orders and/or concurrently with other acts or events apart from those illustrated and/or described herein. In addition, not all illustrated acts may be required to implement one or more aspects or embodiments of the description herein. 
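The width, space, and pitch ranges given above for the patterned photosensitive layer1502are arithmetically consistent: for a regular line-and-space pattern (an assumption for this sketch), pitch is feature width plus the space to the next feature, so widths and spaces of 1 to 45 nanometers yield pitches of 2 to 90 nanometers.

```python
# Arithmetic behind the disclosed ranges: for a regular line-and-space
# pattern, pitch p1 = width w1 + space s1, so w1, s1 in [1, 45] nm gives
# p1 in [2, 90] nm, matching the stated pitch range.

def pitch_nm(width_nm, space_nm):
    return width_nm + space_nm

assert pitch_nm(1.0, 1.0) == 2.0     # lower endpoints of w1 and s1
assert pitch_nm(45.0, 45.0) == 90.0  # upper endpoints of w1 and s1
```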
Further, one or more of the acts depicted herein may be carried out in one or more separate acts and/or phases. At act1602, a photosensitive layer is formed over a semiconductor substrate.FIG.6illustrates cross-sectional view600of some embodiments that may correspond to act1602. At act1604, the semiconductor substrate is loaded onto a wafer chuck, wherein an immersion hood apparatus directly overlies the semiconductor substrate, and wherein a light source, photomask, and lens are arranged directly over the semiconductor substrate and within an opening of the immersion hood apparatus.FIG.7illustrates cross-sectional view700of some embodiments that may correspond to act1604. At act1606, a liquid is applied over the semiconductor substrate and directly under the opening in the immersion hood apparatus and lens by using input piping of the immersion hood apparatus. At act1608, output piping of the immersion hood apparatus is used to confine the liquid to an immersion area arranged directly between the opening and the semiconductor substrate.FIG.8illustrates cross-sectional view800of some embodiments that may correspond to acts1606and1608. At act1610, a light source is used to apply light through the photomask, the lens, the liquid, and the photosensitive layer to change the solubility of portions of a first area of the photosensitive layer.FIG.9illustrates cross-sectional view900of some embodiments that may correspond to act1610. At act1612, the wafer chuck is moved, wherein exhaust piping on outer sidewalls of the immersion hood apparatus removes any residual liquid that escapes from the immersion area during the moving of the wafer chuck.FIGS.11A and11Billustrate cross-sectional views1100A and1100B, respectively, of some embodiments that may correspond to act1612. 
At act1614, the light source is used to apply light through the photomask, the lens, the liquid, and the photosensitive layer to change the solubility of portions of a second area of the photosensitive layer.FIG.12illustrates cross-sectional view1200of some embodiments that may correspond to act1614. Therefore, the present disclosure relates to a method of performing an immersion lithography process over a semiconductor substrate, wherein an immersion hood apparatus comprises extractor piping on outer sidewalls of the immersion hood apparatus to remove any residual liquid that escapes from the immersion hood apparatus and thus, to mitigate damage to devices formed on or within the semiconductor substrate. Accordingly, in some embodiments, the present disclosure relates to a process tool, comprising: a lithography apparatus arranged over a wafer chuck and comprising: a photomask arranged over the wafer chuck, a light source arranged over the photomask, and a lens arranged between the photomask and the wafer chuck; and an immersion hood apparatus arranged over the wafer chuck and laterally around the lithography apparatus, wherein the immersion hood apparatus comprises: input piping arranged on a lower surface of the immersion hood apparatus and configured to distribute a liquid between the lens and the wafer chuck, output piping arranged on the lower surface of the immersion hood apparatus and configured to contain the liquid arranged between the lens and the wafer chuck, and extractor piping arranged on an outer sidewall of the immersion hood apparatus and configured to remove any liquid above the wafer chuck that is outside of the immersion hood apparatus. 
In other embodiments, the present disclosure relates to a process tool comprising: a wafer chuck configured to hold a semiconductor substrate; a lens arranged over the wafer chuck; a light source arranged over the lens; and an immersion hood apparatus arranged over the wafer chuck, wherein the lens is laterally surrounded by the immersion hood apparatus, and wherein the immersion hood apparatus comprises: input piping arranged on a lower surface of the immersion hood apparatus and configured to distribute a liquid between the lower surface of the immersion hood apparatus and the wafer chuck and between the lens and the wafer chuck, output piping arranged on the lower surface of the immersion hood and configured to contain the liquid arranged between the immersion hood apparatus and the wafer chuck, wherein the output piping laterally surrounds the input piping, and extractor piping arranged on a sidewall of the immersion hood apparatus and configured to remove residual liquid outside of an area of the semiconductor substrate that underlies the immersion hood apparatus and the lens, wherein the extractor piping is farther from the lens than the output piping. 
In yet other embodiments, the present disclosure relates to a method comprising: forming a photosensitive layer over a semiconductor substrate; loading the semiconductor substrate onto a wafer chuck, wherein an immersion hood apparatus overlies the semiconductor substrate, and wherein a light source, photomask, and lens are arranged over the semiconductor substrate and an opening in the immersion hood apparatus; using input piping of the immersion hood apparatus to apply a liquid over the semiconductor substrate and directly underlying the opening in the immersion hood apparatus and the lens; using output piping of the immersion hood such that the liquid is confined to an immersion area directly between the opening and the semiconductor substrate; using the light source to apply light through the photomask, the lens, the liquid, and the photosensitive layer to change the solubility of portions of a first area of the photosensitive layer according to the photomask; moving the wafer chuck; and using the light source to apply light through the photomask, the lens, the liquid, and the photosensitive layer to change the solubility of portions of a second area of the photosensitive layer according to the photomask, wherein exhaust piping arranged on outer sidewalls of the immersion hood apparatus removes any residual liquid that escaped from the immersion area when the wafer chuck moved. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. 
Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
11860548 | DETAILED DESCRIPTION Although specific reference may be made in this text to the manufacture of ICs, it should be explicitly understood that the description herein has many other possible applications. For example, it may be employed in the manufacture of integrated optical systems, guidance and detection patterns for magnetic domain memories, liquid-crystal display panels, thin-film magnetic heads, etc. The skilled artisan will appreciate that, in the context of such alternative applications, any use of the terms “reticle”, “wafer” or “die” in this text should be considered as interchangeable with the more general terms “mask”, “substrate” and “target portion”, respectively. In the present document, the terms “radiation” and “beam” are used to encompass all types of electromagnetic radiation, including ultraviolet radiation (e.g. with a wavelength of 365, 248, 193, 157 or 126 nm) and EUV (extreme ultra-violet radiation, e.g. having a wavelength in the range 5-20 nm). The terms “optimizing” and “optimization” as used herein refer to or mean adjusting a lithographic projection apparatus, a lithographic process, etc. such that results and/or processes of lithography have one or more characteristics that are more desirable, such as higher accuracy of projection of a design layout on a substrate, a larger process window, etc. Thus, the terms “optimizing” and “optimization” as used herein refer to or mean a process that identifies one or more values for one or more parameters that provide an improvement, e.g. a local optimum, in at least one relevant metric, compared to an initial set of one or more values for those one or more parameters. “Optimum” and other related terms should be construed accordingly. In an embodiment, optimization steps can be applied iteratively to provide further improvements in one or more metrics.
Further, the lithographic projection apparatus may be of a type having two or more tables (e.g., two or more substrate tables, a substrate table and a measurement table, two or more patterning device tables, etc.). In such “multiple stage” devices a plurality of the multiple tables may be used in parallel, or preparatory steps may be carried out on one or more tables while one or more other tables are being used for exposures. Twin stage lithographic projection apparatuses are described, for example, in U.S. Pat. No. 5,969,441, incorporated herein by reference. The patterning device referred to above comprises, or can form, one or more design layouts. The design layout can be generated utilizing CAD (computer-aided design) programs, this process often being referred to as EDA (electronic design automation). Most CAD programs follow a set of predetermined design rules in order to create functional design layouts/patterning devices. These rules are set by processing and design limitations. For example, design rules define the space tolerance between circuit devices (such as gates, capacitors, etc.) or interconnect lines, so as to ensure that the circuit devices or lines do not interact with one another in an undesirable way. One or more of the design rule limitations may be referred to as “critical dimensions” (CD). A critical dimension can be defined as the smallest width of a line or hole or the smallest space between two lines or two holes. Thus, the CD determines the overall size and density of the designed device. Of course, one of the goals in device fabrication is to faithfully reproduce the original device design on the substrate (via the patterning device).
The term “mask” or “patterning device” as employed in this text may be broadly interpreted as referring to a generic patterning device that can be used to endow an incoming radiation beam with a patterned cross-section, corresponding to a pattern that is to be created in a target portion of the substrate; the term “light valve” can also be used in this context. Besides the classic mask (transmissive or reflective; binary, phase-shifting, hybrid, etc.), examples of other such patterning devices include:

a programmable mirror array. An example of such a device is a matrix-addressable surface having a viscoelastic control layer and a reflective surface. The basic principle behind such an apparatus is that (for example) addressed areas of the reflective surface reflect incident radiation as diffracted radiation, whereas unaddressed areas reflect incident radiation as undiffracted radiation. Using an appropriate filter, the undiffracted radiation can be filtered out of the reflected beam, leaving only the diffracted radiation behind; in this manner, the beam becomes patterned according to the addressing pattern of the matrix-addressable surface. The required matrix addressing can be performed using suitable electronic means. More information on such mirror arrays can be gleaned, for example, from U.S. Pat. Nos. 5,296,891 and 5,523,193, which are incorporated herein in their entireties by reference.

a programmable LCD array. An example of such a construction is given in U.S. Pat. No. 5,229,872, which is incorporated herein in its entirety by reference.

As a brief introduction, FIG. 1 illustrates an exemplary lithographic projection apparatus 10A.
Major components are a radiation illumination system 12A, which may include a deep-ultraviolet excimer laser source or another type of source including an extreme ultra-violet (EUV) source (as discussed above, the lithographic projection apparatus itself need not have the radiation source); illumination optics which define the partial coherence (denoted as sigma) and which may include optics 14A, 16Aa and 16Ab that shape radiation from the illumination system 12A; a patterning device 18A; and transmission optics 16Ac that project an image of the patterning device pattern onto a substrate plane 22A. An adjustable filter or aperture 20A at the pupil plane of the projection optics may restrict the range of beam angles that impinge on the substrate plane 22A, where the largest possible angle defines the numerical aperture of the projection optics, NA = n sin(θmax), where n is the index of refraction of the medium between the last element of the projection optics and the substrate. In an optimization process of a system, a figure of merit of the system can be represented as a cost function. The optimization process boils down to a process of finding a set of parameters (design variables) of the system that optimizes (e.g., minimizes or maximizes) the cost function. The cost function can have any suitable form depending on the goal of the optimization. For example, the cost function can be the weighted root mean square (RMS) of deviations of certain characteristics (evaluation points) of the system with respect to the intended values (e.g., ideal values) of these characteristics; the cost function can also be the maximum of these deviations (i.e., the worst deviation). The term “evaluation points” herein should be interpreted broadly to include any characteristics of the system. The design variables of the system can be confined to finite ranges and/or be interdependent due to practicalities of implementations of the system.
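As a quick numerical illustration of the NA relation above (the refractive indices and half-angle here are illustrative assumptions, not values from the text), a higher-index medium between the final optical element and the substrate raises the achievable numerical aperture:

```python
import math

def numerical_aperture(n: float, theta_max_deg: float) -> float:
    """NA = n * sin(theta_max), where n is the refractive index of the
    medium between the last projection-optics element and the substrate."""
    return n * math.sin(math.radians(theta_max_deg))

# Dry system (air, n ~ 1.0) versus water immersion (n ~ 1.44) at the
# same 70-degree half-angle (both numbers chosen only for illustration).
na_dry = numerical_aperture(1.0, 70.0)
na_wet = numerical_aperture(1.44, 70.0)
```

Immersion lithography exploits exactly this: with the same maximum beam angle, the higher-index liquid yields a proportionally larger NA.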
In the case of a lithographic projection apparatus, the constraints are often associated with physical properties and characteristics of the hardware such as tunable ranges, and/or patterning device manufacturability design rules, and the evaluation points can include physical points on a resist image on a substrate, as well as non-physical characteristics such as dose and focus. In a lithographic projection apparatus, an illumination system provides illumination (i.e. radiation) to a patterning device and projection optics direct and shape the illumination, via the patterning device, onto a substrate. The term “projection optics” is broadly defined here to include any optical component that may alter the wavefront of the radiation beam. An aerial image (AI) is the radiation intensity distribution at substrate level. A resist layer on the substrate is exposed and the aerial image is transferred to the resist layer as a latent “resist image” (RI) therein. The resist image (RI) can be defined as a spatial distribution of solubility of the resist in the resist layer. A resist model can be used to calculate the resist image from the aerial image, an example of which can be found in U.S. Patent Application Publication No. US 2009-0157360, which is incorporated herein in its entirety by reference. The resist model is related only to properties of the resist layer (e.g., effects of chemical processes which occur during exposure, PEB and development). One or more optical properties of the lithographic projection apparatus (e.g., one or more properties of the illumination system, the patterning device and the projection optics) dictate the aerial image. Since the patterning device used in the lithographic projection apparatus can be changed, it is desirable to separate the optical properties of the patterning device from the optical properties of the rest of the lithographic projection apparatus including at least the illumination system and the projection optics.
An exemplary flow chart for simulating lithography in a lithographic projection apparatus is illustrated in FIG. 2. An illumination model 31 represents one or more optical characteristics (including radiation intensity distribution and/or phase distribution) of the illumination system. A projection optics model 32 represents one or more optical characteristics (including change to the radiation intensity distribution and/or the phase distribution caused by the projection optics) of the projection optics. A design layout model 35 represents one or more optical characteristics (including change to the radiation intensity distribution and/or the phase distribution caused by a given design layout) of a design layout, which is the representation of an arrangement of features on or formed by a patterning device. An aerial image 36 can be simulated from the illumination model 31, the projection optics model 32 and the design layout model 35. A resist image 38 can be simulated from the aerial image 36 using a resist model 37. Simulation of lithography can, for example, predict contours and CDs in the resist image. More specifically, it is noted that the illumination model 31 can represent the one or more optical characteristics of the illumination system that include, but are not limited to, one or more numerical aperture (NA) settings, one or more sigma (σ) settings and/or a particular illumination shape (e.g. off-axis radiation illumination such as annular, quadrupole, dipole, etc.). The projection optics model 32 can represent the one or more optical characteristics of the projection optics, including aberration, distortion, one or more refractive indexes, one or more physical sizes, one or more physical dimensions, etc. The design layout model 35 can represent one or more physical properties of a physical patterning device, as described, for example, in U.S. Pat. No. 7,587,704, which is incorporated herein in its entirety by reference.
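The model chain of FIG. 2 can be sketched as a simple pipeline. The stand-in models below are toy placeholders (scalar intensities and a thresholding resist model), and every function name here is an assumption for illustration, not an actual simulator API:

```python
# Toy pipeline mirroring FIG. 2: illumination model -> design layout
# model -> projection optics model -> aerial image -> resist image.

def illumination_model(shape):
    # Stands in for model 31: intensity/phase from the illumination system.
    return {"intensity": 1.0, "shape": shape}

def design_layout_model(field, transmission):
    # Stands in for model 35: change imposed by the patterning-device layout.
    return {**field, "intensity": field["intensity"] * transmission}

def projection_optics_model(field, aberration=0.0):
    # Stands in for model 32: change imposed by the projection optics.
    return {**field, "intensity": field["intensity"] * (1.0 - aberration)}

def aerial_image(illum_shape, optics_kwargs, layout_kwargs):
    field = illumination_model(illum_shape)
    field = design_layout_model(field, **layout_kwargs)
    return projection_optics_model(field, **optics_kwargs)

def resist_image(ai, threshold=0.3):
    # Toy resist model 37: thresholding the aerial-image intensity stands
    # in for the chemistry of exposure, PEB and development.
    return ai["intensity"] >= threshold

ai = aerial_image("annular", {"aberration": 0.1}, {"transmission": 0.5})
printed = resist_image(ai)
```

The point of the structure, as in the text, is that the design-layout model can be swapped out while the illumination and projection-optics models stay fixed.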
The objective of the simulation is to accurately predict, for example, edge placement, aerial image intensity slope and/or CD, which can then be compared against an intended design. The intended design is generally defined as a pre-OPC design layout which can be provided in a standardized digital file format such as GDSII or OASIS or another file format. From this design layout, one or more portions may be identified, which are referred to as “clips”. In an example, a set of clips is extracted, which represents the complicated patterns in the design layout (typically about 50 to 1000 clips, although any number of clips may be used). These patterns or clips represent small portions (e.g., circuits, cells or patterns) of the design and more specifically, the clips typically represent small portions for which particular attention and/or verification is needed. In other words, clips may be the portions of the design layout, or may be similar to, or have similar behavior as, portions of the design layout, where one or more critical features are identified either by experience (including clips provided by a customer), by trial and error, or by running a full-chip simulation. Clips may contain one or more test patterns or gauge patterns. An initial larger set of clips may be provided a priori by a customer based on one or more known critical feature areas in a design layout which require particular image optimization. Alternatively, in another example, an initial larger set of clips may be extracted from the entire design layout by using some kind of automated (such as machine vision) or manual algorithm that identifies the one or more critical feature areas. In a lithographic projection apparatus, for example, using an EUV (extreme ultra-violet radiation, e.g.
having a wavelength in the range 5-20 nm) source or a non-EUV source, reduced radiation intensity may lead to stronger stochastic variation, such as pronounced line width roughness and/or local CD variation in small two-dimensional features such as holes. In a lithographic projection apparatus using EUV radiation, reduced radiation intensity may be attributed to low total radiation output from the EUV radiation source, radiation loss from optics that shape the radiation from the source, transmission loss through the projection optics, high photon energy that leads to fewer photons under a constant dose, etc. The stochastic variation may be attributed to factors such as photon shot noise, photon-generated secondary electrons, photon absorption variation, and/or photon-generated acids in the resist. The small size of features further compounds this stochastic variation. The stochastic variation in smaller features is a significant factor in production yield and justifies inclusion in a variety of optimization processes of the lithographic process and/or lithographic projection apparatus. Under the same radiation intensity, a lower exposure time of each substrate leads to higher throughput of a lithographic projection apparatus but stronger stochastic variation. The photon shot noise in a given feature under a given radiation intensity is proportional to the square root of the exposure time. The desire to lower exposure time for the purpose of increasing throughput exists in lithography using EUV and other radiation sources. Therefore, the methods and apparatuses described herein that consider the stochastic variation are not limited to EUV lithography. The throughput can also be affected by the total amount of radiation directed to the substrate. In some lithographic projection apparatuses, a portion of the radiation from the source is sacrificed in order to achieve a desired shape of the illumination. FIG. 3A schematically depicts line edge roughness (LER).
Assuming all conditions are identical in three exposures or simulations of exposure of an edge 903 of a feature on a design layout, the resist images 903A, 903B and 903C of the edge 903 may have slightly different shapes and locations. Locations 904A, 904B and 904C of the resist images 903A, 903B and 903C may be measured by averaging the resist images 903A, 903B and 903C, respectively, to averages 902A, 902B and 902C, respectively. A stochastic variation such as line edge roughness is usually represented by a parameter of the distribution of the underlying characteristic. In this example, LER of the edge 903 may be represented by 3σ of the spatial distribution of the edge 903, assuming the distribution is a normal distribution. The 3σ may be derived from the locations of the edge 903 (e.g., the locations 904A, 904B and 904C) in many exposures or simulations of the edge 903. LER represents the range in which the edge 903 probably will fall due to the stochastic effect. For this reason, the LER can also be called stochastic edge placement error (SEPE). LER may be greater than the changes of the edge 903 position caused by non-stochastic effects. FIG. 3B schematically depicts line width roughness (LWR). Assuming all conditions are identical in three exposures or simulations of exposure of a long rectangle feature 910 with a width 911 on a design layout, the resist images 910A, 910B and 910C of the rectangle feature 910 may have slightly different widths 911A, 911B and 911C, respectively. LWR of the rectangle feature 910 may be a measure of the distribution of the widths 911A, 911B and 911C. For example, the LWR may be a 3σ of the distribution of the width 911, assuming the distribution is a normal distribution. The LWR may be derived from many exposures or simulations of the width 911 of the rectangle feature 910 (e.g., the widths 911A, 911B and 911C). In the context of a short feature (e.g., a contact hole), the widths of its images are not well defined because long edges are not available for averaging their locations.
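The 3σ representation described above can be computed directly from repeated measurements; a minimal sketch, with illustrative numbers rather than real measurement data:

```python
import statistics

def three_sigma(samples):
    """3-sigma of a set of measurements: the representation of a stochastic
    variation used in the text (a normal distribution is assumed)."""
    return 3.0 * statistics.pstdev(samples)

# Illustrative numbers only: edge locations (nm) of the same edge over five
# simulated exposures, and widths (nm) of the same line over five exposures.
ler = three_sigma([10.0, 10.4, 9.7, 10.1, 9.8])    # line edge roughness
lwr = three_sigma([20.1, 19.8, 20.3, 19.9, 19.9])  # line width roughness
```

The same helper applies to LCDU below: it is the 3σ of measured CDs of a short feature over many exposures.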
A similar quantity, LCDU, may be used to characterize the stochastic variation. The LCDU is a 3σ of the distribution (assuming the distribution is a normal distribution) of measured CDs of images of the short feature. FIG. 3C schematically illustrates how a stochastic variation may affect lithography. In the example in FIG. 3C, an intended position of an edge of a feature in an aerial image or resist image is indicated as the dotted line 982. The actual edge is indicated as the curve 995, which comprises both a stochastic variation (LER in this example) and an error (e.g., caused by other factors such as dose variation, focus variation, illumination shape, patterning device (e.g., mask) error, etc.) unrelated to the stochastic effect. The average location of the actual edge is indicated as the solid line 981. The difference 980 between the average location (the solid line 981) and the intended location (the dotted line 982) is the error unrelated to the stochastic effect, which may be referred to as an edge placement error (EPE). The variation of the actual edge relative to the average location is the stochastic variation. The band 990 around the average location (the solid line 981) that encloses the stochastic variation may be called a stochastic variation band, which represents the extent the actual local edge placement may reach due to a stochastic effect. The width of the stochastic variation band may be greater than the EPE. Therefore, the total probabilistic deviation from the intended location (the dotted line 982) of the edge may be a sum of the EPE and the stochastic variation band. If there were no stochastic variation, the actual location of the edge in this example would be at the location indicated by the solid line 981, which does not merge with a neighboring feature 983 and thus does not produce a defect.
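A toy version of this defect mechanism: treating the worst-case excursion toward a neighbor as the EPE plus half the stochastic variation band (a simplifying assumption, since the band straddles the average edge location), merging becomes a simple threshold test. All numbers are illustrative:

```python
def may_merge(gap_to_neighbor, epe, band_width):
    # Worst-case excursion toward the neighboring feature: the systematic
    # EPE plus half the stochastic variation band (assumed symmetric).
    return epe + band_width / 2.0 >= gap_to_neighbor

safe = may_merge(gap_to_neighbor=8.0, epe=2.0, band_width=6.0)   # 2 + 3 < 8
risky = may_merge(gap_to_neighbor=4.0, epe=2.0, band_width=6.0)  # 2 + 3 >= 4
```

With no stochastic variation (band width 0), only the EPE matters, matching the no-defect case described in the text.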
However, when a stochastic variation is present and the stochastic variation band is large enough (e.g., the band 990), the actual edge may merge (where marked by the dotted circle) with the neighboring feature 983 and thus produce a defect. Therefore, it is desirable to evaluate, simulate or reduce a stochastic variation. A method of determining a relationship between a stochastic variation of a characteristic of an aerial image or a resist image and one or more design variables is depicted in a flow chart in FIG. 4A and a schematic in FIG. 4B. In step 1301, values 1503 of the characteristic are measured from a plurality of aerial images or resist images 1502 formed (by actual exposure or simulation) for each of a plurality of sets 1501 of values of the one or more design variables. In step 1302, a value 1505 of the stochastic variation is determined for each set 1501 of values of the one or more design variables from a distribution 1504 of the values 1503 of the characteristic measured from the aerial images or resist images formed for that set 1501 of values of the one or more design variables. In step 1303, a relationship 1506 is determined by fitting one or more parameters of a model from the values 1505 of the stochastic variation and the sets 1501 of values of the one or more design variables. In an example, the stochastic variation is the LER and the one or more design variables are the blurred image ILS (bl_ILS), dose and image intensity. The model may be: LER = a × bl_ILS^b × (dose × image intensity)^c (Eq. 30). The parameters a, b and c may be determined by fitting. The blurred image ILS (bl_ILS) is the image log slope (ILS) with a spatial blur applied thereto. The spatial blur may represent blur of a resist image due to diffusion of a chemical species generated in a resist layer by exposure to radiation. FIG. 5A shows a result of fitting using the model in Eq. 30.
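Fitting the Eq. 30 parameters can be sketched by linearizing the model with logarithms, log LER = log a + b·log(bl_ILS) + c·log(dose × image intensity), and solving ordinary least squares via the normal equations. The data below are synthetic, generated from assumed "true" parameters; this illustrates the fitting step, not the patent's actual procedure:

```python
import math

def solve3(m, v):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
    m = [row[:] + [v[i]] for i, row in enumerate(m)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_eq30(samples):
    # samples: list of (bl_ILS, dose*image_intensity, LER); fit
    # log LER = log a + b*log(bl_ILS) + c*log(dose*image_intensity).
    rows = [(1.0, math.log(s[0]), math.log(s[1])) for s in samples]
    ys = [math.log(s[2]) for s in samples]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    log_a, b, c = solve3(ata, aty)
    return math.exp(log_a), b, c

# Assumed "true" parameters (illustrative; LER falls as image slope rises).
true_a, true_b, true_c = 2.0, -0.8, -0.5
data = [(ils, di, true_a * ils**true_b * di**true_c)
        for ils in (0.5, 1.0, 2.0, 4.0) for di in (0.25, 1.0, 3.0)]
a, b, c = fit_eq30(data)
```

With noise-free synthetic data the fit recovers the generating parameters; real measured LER values would of course leave residuals.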
Values of LER 1400 (as an example of the stochastic variation) of more than 900 different features including long trenches 1401, long lines 1402, short lines 1403, short trenches 1404, short line ends 1405, and short trench ends 1406, at a constant image intensity and a constant dose, are determined following the method in FIG. 4A and FIG. 4B. The parameters a and b in Eq. 30 (parameter c is rolled into parameter a because the dose weighted blurred image intensity is constant) are determined by fitting the values of LER with values of the design variable, bl_ILS. The fitting result is shown in curve 1410. FIG. 5B shows a result of fitting 1510 using the model in Eq. 30. Values of LCDU 1500 (as an example of the stochastic variation) of CD in the width direction and of CD in the length direction of a 20 by 40 nm trench 1505 at a variety of doses and a variety of image intensities are determined using the method in FIG. 4A and FIG. 4B. The parameters a, b and c in Eq. 30 are determined by fitting the values of LCDU with values of the design variables, bl_ILS, dose and image intensity. Once the relationship between a stochastic variation of a characteristic of an aerial image or a resist image and one or more design variables is determined by a method such as the method in FIG. 4A and FIG. 4B, a value of the stochastic variation may be calculated for that characteristic using the relationship. FIG. 6 shows an exemplary flow chart for this calculation. In step 1610, a set of conditions (e.g., NA, σ, dose, focus, resist chemistry, one or more projection optics parameters, one or more illumination parameters, etc.) is selected. In step 1620, the values of the one or more design variables are calculated under these conditions, for example, values of the edge position of a resist image and of bl_ILS along the edges. In step 1630, values of the stochastic variation are calculated from the relationship between the stochastic variation and the one or more design variables.
For example, in an example, the stochastic variation is the LER of the edges. In optional step 1640, a noise vector may be defined, whose frequency distribution approximately matches real substrate measurements. In optional step 1650, the noise vector is overlaid on the results (e.g., the stochastic edge of the aerial image or resist image). The relationship between a stochastic variation of a characteristic of an aerial image or a resist image and one or more design variables may also be used to identify one or more “hot spots” 1700 of the aerial image or resist image, as shown in FIG. 7. A “hot spot” can be defined as a location on the image where the stochastic variation is beyond a certain magnitude. For example, if two positions on two nearby edges have large values of LER, these two positions have a high chance of joining each other. In an example, values of a stochastic variation (and/or a function thereof) at a plurality of conditions and at a plurality of values of the one or more design variables may be calculated and compiled in a non-transitory computer-readable medium 1800, as shown in FIG. 8, such as a database stored on a hard drive. A computer may query the medium 1800 and calculate a value of the stochastic variation from the content of the medium 1800. Determination of a stochastic variation of a characteristic of an aerial/resist image may be useful in many ways in the lithographic process. In one example, the stochastic variation may be taken into account in optical proximity correction (OPC). As an example, OPC addresses the fact that the final size and placement of an image of the design layout projected on the substrate will not be identical to, or simply depend only on the size and placement of, the design layout on the patterning device. It is noted that the terms “mask”, “reticle” and “patterning device” are utilized interchangeably herein.
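Hot-spot flagging with a fitted Eq. 30 relationship can be sketched as follows; the parameter values, sample points and threshold are all illustrative assumptions:

```python
# Assumed fitted Eq. 30 parameters (illustrative values only).
A, B, C = 2.0, -0.8, -0.5

def predicted_ler(bl_ils, dose_times_intensity):
    # Eq. 30: LER = a * bl_ILS**b * (dose * image intensity)**c.
    return A * bl_ils**B * dose_times_intensity**C

def hot_spots(edge_samples, threshold):
    # edge_samples: list of (position, bl_ILS, dose*image_intensity).
    # A hot spot is a position whose predicted stochastic variation
    # exceeds the chosen magnitude.
    return [pos for pos, ils, di in edge_samples
            if predicted_ler(ils, di) > threshold]

samples = [((0, 0), 4.0, 1.0),   # steep image slope -> low predicted LER
           ((5, 0), 0.5, 0.5)]   # shallow slope, low dose -> high LER
spots = hot_spots(samples, threshold=1.5)
```

In practice the lookup could equally query a precomputed table (the medium 1800 described above) instead of evaluating the model directly.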
Also, a person skilled in the art will recognize that, especially in the context of lithography simulation/optimization, the terms “mask”/“patterning device” and “design layout” can be used interchangeably, as in lithography simulation/optimization a physical patterning device is not necessarily used but a design layout can be used to represent a physical patterning device. For the small feature sizes and high feature densities present on some design layouts, the position of a particular edge of a given feature will be influenced to a certain extent by the presence or absence of other adjacent features. These proximity effects arise from minute amounts of radiation coupled from one feature to another and/or non-geometrical optical effects such as diffraction and interference. Similarly, proximity effects may arise from diffusion and other chemical effects during, e.g., post-exposure bake (PEB), resist development, and etching that generally follow lithography. To help ensure that the projected image of the design layout is in accordance with requirements of a given target device design, proximity effects should be predicted and compensated for, using a sophisticated numerical model, correction or pre-distortion of the design layout. The article “Full-Chip Lithography Simulation and Design Analysis—How OPC Is Changing IC Design”, C. Spence, Proc. SPIE, Vol. 5751, pp. 1-14 (2005), which is incorporated herein in its entirety by reference, provides an overview of “model-based” optical proximity correction processes. In a typical high-end design almost every feature of the design layout has some modification in order to achieve high fidelity of the projected image to the target design. These modifications may include shifting or biasing of edge positions or line widths as well as application of “assist” features that are intended to assist projection of other features.
Application of model-based OPC to a target design involves good process models and considerable computational resources, given the many millions of features typically present in a chip design. However, applying OPC is generally not an “exact science”, but an empirical, iterative process that does not always compensate for all possible proximity effects. Therefore, the effect of OPC, e.g., a design layout after application of OPC and/or any other RET, should be verified by design inspection, i.e. intensive full-chip simulation using a calibrated numerical process model, in order to reduce or minimize the possibility of design flaws being built into the patterning device pattern. This is driven by the enormous cost of making high-end patterning devices, which run in the multi-million dollar range, as well as by the impact on turn-around time of reworking or repairing actual patterning devices once they have been manufactured. Both OPC and full-chip RET verification may be based on numerical modeling systems and methods as described, for example, in U.S. Patent Application Publication No. US 2005-0076322 and an article titled “Optimized Hardware and Software For Fast, Full Chip Simulation”, by Y. Cao et al., Proc. SPIE, Vol. 5754, 405 (2005), which are incorporated herein in their entireties by reference. One RET is related to adjustment of the global bias (also referred to as “mask bias”) of the design layout. The global bias is the difference between the patterns in the design layout and the patterns intended to print on the substrate. For example, ignoring (de-)magnification by the projection optics, a circular pattern of 25 nm diameter may be printed on the substrate by a 50 nm diameter pattern in the design layout or by a 20 nm diameter pattern in the design layout but with high dose.
In addition to optimization of design layouts or patterning devices (e.g., OPC), the illumination can also be optimized, either jointly with patterning device optimization or separately, in an effort to improve the overall lithography fidelity. Many off-axis illuminations, such as annular, quadrupole, and dipole, have been introduced, and have provided more freedom for OPC design, thereby improving the imaging results. Off-axis illumination is a way to resolve fine structures (i.e., target features) contained in the patterning device. However, when compared to a traditional illumination, an off-axis illumination usually provides less radiation intensity for the aerial image (AI). Thus, it becomes desirable to attempt to optimize the illumination to achieve the optimal balance between finer resolution and reduced radiation intensity. Numerous illumination optimization approaches can be found, for example, in an article by Rosenbluth et al., titled “Optimum Mask and Source Patterns to Print a Given Shape”, Journal of Microlithography, Microfabrication, Microsystems 1(1), pp. 13-20 (2002), which is incorporated herein in its entirety by reference. The illumination shape (sometimes referred to as an illumination source) is partitioned into several regions, each of which corresponds to a certain region of the pupil spectrum. Then, the distribution is assumed to be uniform in each illumination shape region and the brightness of each region is optimized for the process window. However, such an assumption that the distribution is uniform in each region is not always valid, and as a result the effectiveness of this approach suffers. In another example set forth in an article by Granik, titled “Source Optimization for Image Fidelity and Throughput”, Journal of Microlithography, Microfabrication, Microsystems 3(4), pp.
509-522 (2004), which is incorporated herein in its entirety by reference, several existing illumination optimization approaches are overviewed and a method based on illuminator pixels is proposed that converts the optimization problem into a series of non-negative least square optimizations. Though these methods demonstrate some success, they typically require multiple complicated iterations to converge. In addition, it may be difficult to determine the appropriate/optimal values for some extra parameters, such as γ in Granik's method, which dictates the trade-off between optimizing the illumination for substrate image fidelity and the smoothness requirement of the illumination. For low k1 photolithography, optimization of both the illumination and patterning device (sometimes referred to as source mask optimization (SMO)) is useful to help ensure a viable process window for projection of critical patterns. Some algorithms (e.g., Socha et al., Proc. SPIE vol. 5853, 2005, p. 180, which is incorporated herein in its entirety by reference) discretize illumination into independent illumination points and the patterning device into diffraction orders in the spatial frequency domain, and separately formulate a cost function (which is defined as a function of one or more selected design variables) based on a process window metric, such as exposure latitude, which could be predicted by an optical imaging model from illumination point intensities and patterning device diffraction orders. The term “design variables” as used herein comprises a set of parameters of an apparatus or process of a device manufacturing process such as of a lithographic projection apparatus or of a lithographic process, for example, parameters a user of the lithographic projection apparatus can adjust, or image characteristics a user can adjust by adjusting those parameters.
It should be appreciated that any one or more characteristics of a device manufacturing process or apparatus, including one or more characteristics of the illumination, the patterning device, the projection optics, and/or resist, can be represented by the design variables in the optimization. The cost function is often a non-linear function of the design variables. Standard optimization techniques are then used to optimize the cost function. Relatedly, the pressure of ever decreasing design rules has driven semiconductor chipmakers to move deeper into the low k1 lithography era with existing 193 nm ArF lithography. Lithography towards lower k1 puts heavy demands on RET, exposure tools, and the need for litho-friendly design. 1.35 ArF hyper numerical aperture (NA) exposure tools may be used in the future. To help ensure that a device design can be produced onto the substrate with a workable process window, illumination-patterning device optimization (referred to herein as source-mask optimization or SMO) is becoming a significant RET for the 2× nm node. An illumination and patterning device (design layout) optimization method and system that allows for simultaneous optimization of the illumination and patterning device using a cost function without constraints and within a practicable amount of time is described in U.S. Patent Application Publication No. US 2011-0230999, which is hereby incorporated in its entirety by reference. Another SMO method and system that involves optimizing the illumination by adjusting pixels of the illumination is described in U.S. Patent Application Publication No. 2010/0315614, which is hereby incorporated in its entirety by reference. In a lithographic projection apparatus, as an example, a cost function may be expressed as CF(z1, z2, . . . , zN) = Σp wp fp²(z1, z2, . . . , zN), where the sum runs over the evaluation points p = 1, . . . , P (Eq. 1), wherein (z1, z2, . . . , zN) are N design variables or values thereof. fp(z1, z2, . . . , zN) can be a function of the design variables (z1, z2, . . .
, zN) such as a difference between an actual value and an intended value of a characteristic at an evaluation point for a set of values of the design variables of (z1, z2, . . . , zN). wp is a weight constant associated with fp(z1, z2, . . . , zN). An evaluation point or pattern more critical than others can be assigned a higher wp value. Patterns and/or evaluation points with a larger number of occurrences may be assigned a higher wp value, too. Examples of the evaluation points can be any physical point or pattern on the substrate, any point on a virtual design layout, or resist image, or aerial image, or a combination thereof. fp(z1, z2, . . . , zN) can also be a function of one or more stochastic variations such as the LWR, LER, and/or LCDU, which are in turn functions of the design variables (z1, z2, . . . , zN). fp(z1, z2, . . . , zN) may be an explicit function of a stochastic variation, such as fp(LER)=LER2(z1, z2, . . . , zN). fp(z1, z2, . . . , zN) may be an explicit function of a variable that is a function of a stochastic variation such as LER. For example, bl_ILS may be a function of LER as indicated by Eq. 30, and

f_p(\mathrm{bl\_ILS}(\mathrm{LER})) = f_p\left( \left( \frac{\mathrm{LER}}{a \times (\text{dose} \times \text{image intensity})^c} \right)^{1/b} \right)

fp(z1, z2, . . . , zN) may be a variable that affects a stochastic variation such as LER. So, optimization using a cost function that includes fp(z1, z2, . . . , zN) that represents a stochastic variation may lead to values of the one or more design variables that reduce or minimize the stochastic variation. The cost function may represent any one or more suitable characteristics of the lithographic projection apparatus, lithographic process or the substrate, for instance, focus, CD, image shift, image distortion, image rotation, stochastic variation, throughput, LCDU, or a combination thereof. LCDU is local CD variation (e.g., three times the standard deviation of the local CD distribution).
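As a hedged sketch of the weighted-sum cost function of Eq. 1, the fragment below evaluates CF(z) = Σ_p wp·fp²(z) for a set of evaluation-point functions fp. The two toy fp functions and the weight values are illustrative assumptions, not the patent's lithographic model; the structure (per-point deviation, squared, weighted, summed) is what Eq. 1 describes.

```python
# Eq. 1 sketch: CF(z) = sum_p w_p * f_p(z)^2. Each f_p here is a toy
# stand-in for the deviation of a characteristic (e.g., EPE) at
# evaluation point p from its intended value.

def cost_function(z, fps, weights):
    """Evaluate the weighted sum of squared deviations over all points."""
    return sum(w * fp(z) ** 2 for w, fp in zip(weights, fps))

# Illustrative evaluation-point deviations (assumed forms):
fps = [
    lambda z: z[0] - 1.0,         # deviation at evaluation point 1
    lambda z: z[0] + z[1] - 0.5,  # deviation at evaluation point 2
]
weights = [1.0, 2.0]  # the more critical point gets the higher w_p

print(cost_function([1.0, -0.5], fps, weights))  # 0.0 at the ideal z
```

A more critical evaluation point simply contributes through a larger wp, which is how the text's "assigned a higher wp value" enters the optimization.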
In one example, the cost function represents (i.e., is a function of) LCDU, throughput, and the stochastic variations. In one example, the cost function represents (e.g., includes a fp(z1, z2, . . . , zN) that is a function of) EPE, throughput, and the stochastic variations. In one example, the cost function includes a fp(z1, z2, . . . , zN) that is a function of EPE and a fp(z1, z2, . . . , zN) that is a function of a stochastic variation such as LER. In one example, the design variables (z1, z2, . . . , zN) comprise one or more selected from dose, global bias of the patterning device, shape of illumination, or a combination thereof. Since it is the resist image that often dictates the pattern on a substrate, the cost function may include a function that represents one or more characteristics of the resist image. For example, fp(z1, z2, . . . , zN) of such an evaluation point can be simply a distance from a point in the resist image to an intended position of that point (i.e., edge placement error EPEp(z1, z2, . . . , zN)). The design variables can include any adjustable parameter such as an adjustable parameter of the illumination, the patterning device, the projection optics, dose, focus, etc. The lithographic apparatus may include components collectively called a "wavefront manipulator" that can be used to adjust the shape of a wavefront and intensity distribution and/or phase shift of a radiation beam. In an example, the lithographic apparatus can adjust a wavefront and intensity distribution at any location along an optical path of the lithographic projection apparatus, such as before the patterning device, near a pupil plane, near an image plane, and/or near a focal plane.
The wavefront manipulator can be used to correct or compensate for certain distortions of the wavefront and intensity distribution and/or phase shift caused by, for example, the illumination system, the patterning device, temperature variation in the lithographic projection apparatus, thermal expansion of components of the lithographic projection apparatus, etc. Adjusting the wavefront and intensity distribution and/or phase shift can change values of the evaluation points and the cost function. Such changes can be simulated from a model or actually measured. Of course, CF(z1, z2, . . . , zN) is not limited to the form in Eq. 1. CF(z1, z2, . . . , zN) can be in any other suitable form. According to an example, a cost function representing both EPE and LER may have the form:

CF(z_1, z_2, \ldots, z_N) = \sum_{p=1}^{P} \left( w_p EPE_p^2(z_1, z_2, \ldots, z_N) + s_p LER_p^2(z_1, z_2, \ldots, z_N) \right)

This is because EPE and LER both have a dimension of length. Therefore, they can be directly added. Alternative cost functions may be used, including cost functions in which LER is included in EPE. Eq. 30 links bl_ILS to LER. Therefore, optimization using a cost function representing bl_ILS is similar to optimization using a cost function representing LER. Greater bl_ILS leads to lesser LER and vice versa. According to an example, a cost function may represent both EPE and bl_ILS (or normalized ILS (NILS)). However, EPE and bl_ILS (or NILS) might not be added directly because bl_ILS does not measure a length while EPE does, and NILS is dimensionless while EPE has a dimension of length. Therefore, representing bl_ILS (or NILS) by a function that represents a length makes directly adding that representation to EPE possible. ILS is defined as ILS = ∂ln I/∂x. bl_ILS is spatially blurred ILS. NILS is defined as NILS = CD × ILS.
These definitions suggest a function that can represent ILS, bl_ILS or NILS and represents a length, and thus allows directly adding to EPE. FIG. 9A and FIG. 9B each show intensity of an image (aerial or resist) across an edge of a pattern in a direction (x) perpendicular to that edge. A higher slope of the intensity with respect to x means higher ILS, bl_ILS and NILS. The example of FIG. 9A thus has a higher ILS, bl_ILS and NILS than the example of FIG. 9B. The edge location Xe shifts with the intensity I sufficient to expose the resist. The intensity sufficient to expose the resist changes with the dose, when the duration of exposure is fixed. Therefore, the amount of shift ("EPE_ILS" hereafter, e.g., 2911 and 2912) of the edge location Xe caused by a given amount of change in the dose (e.g., ±δ relative to nominal dose, which may be a parameter a user chooses) is determined by ILS, bl_ILS or NILS. The EPE_ILS in the example of FIG. 9A is smaller than the EPE_ILS in the example of FIG. 9B because the example of FIG. 9A has a higher ILS, bl_ILS and NILS than the example of FIG. 9B. The EPE_ILS is thus an example of a function that can represent ILS, bl_ILS or NILS and represents a length, allowing directly adding to EPE in a cost function. EPE_ILS can be written as

EPE_{ILS} = \frac{1}{ILS(x_e(0))} \left( \frac{1}{1+\delta} - 1 \right) \approx \frac{1}{ILS(x_e(0))} (-\delta)

where ILS(x_e(0)) is a function of the design variables (z1, z2, . . . , zN). A cost function that represents both EPE and ILS, bl_ILS or NILS, according to an example, may have the form:

CF(z_1, z_2, \ldots, z_N) = \sum_{p=1}^{P} \left( w_p \left. EPE_p^2(z_1, z_2, \ldots, z_N) \right|_{\delta=0} + s_p (EPE_{ILS})^2 \right) = \sum_{p=1}^{P} \left( w_p \left. EPE_p^2(z_1, z_2, \ldots, z_N) \right|_{\delta=0} + s_p \left( \frac{\delta}{ILS(x_e(0))} \right)^2 \right)

where EPEp(z1, z2, . . . , zN)|δ=0 is the EPE value at the nominal dose, p is the p-th evaluation point, and sp is the weight for the EPE_ILS term. So, for example, optimization by minimizing this cost function maximizes ILS(x_e(0)), and thus minimizes LER.
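The EPE_ILS relation described above (the edge shift caused by a relative dose change δ is approximately -δ/ILS at the edge) can be illustrated with a minimal sketch. The ILS values below are made up; the point is only the scaling: a steeper intensity profile (higher ILS, as in FIG. 9A) yields a smaller dose-induced edge shift than a shallow one (FIG. 9B).

```python
# EPE_ILS ≈ -delta / ILS(x_e(0)): edge shift per relative dose change.
# ILS values here are illustrative, not measured.

def epe_ils(ils_at_edge, delta):
    """Approximate edge-placement shift for a relative dose change delta."""
    return -delta / ils_at_edge

steep = epe_ils(ils_at_edge=20.0, delta=0.02)   # high-ILS edge (FIG. 9A-like)
shallow = epe_ils(ils_at_edge=5.0, delta=0.02)  # low-ILS edge (FIG. 9B-like)
print(abs(steep) < abs(shallow))  # True: higher ILS -> smaller EPE_ILS
```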
According to an example, the weight of the EPE_ILS term, (δ/ILS(x_e(0)))^2, can be reduced relative to the weight of the EPE terms (e.g., EPE_p^2) when the EPE terms increase, so that the EPE_ILS term does not dominate the EPE terms. If the EPE_ILS term dominates, the EPE terms will not be reduced sufficiently by the optimization. For example, sp = 0 when |EPEp| ≥ OF (thereby the optimization ignores the EPE_ILS term and only reduces the EPE terms) and sp ≠ 0 when |EPEp| < OF, where OF is a user-selected offset. For example,

w_p = \begin{cases} w_{default}, & \text{when } |EPE_p| \le OF \\ w_{default} + w_{offset}, & \text{when } |EPE_p| > OF \end{cases}

A higher weight of the EPE terms will make the optimization favor reduction of the EPE terms in the optimization using the cost function. FIG. 10 schematically shows the curves of the cost function as a function of EPEp, where the weight wp is given by the piecewise expression above. As FIG. 10 shows, the EPE terms account for a greater proportion of the cost function when |EPEp| > OF because the weight wp has a greater value. The design variables may have constraints, which can be expressed as (z1, z2, . . . , zN) ∈ Z, where Z is a set of possible values of the design variables. One possible constraint on the design variables may be imposed by a desired throughput of the lithographic projection apparatus. A lower bound of desired throughput leads to an upper bound on the dose and thus has implications for the stochastic variation (e.g., imposing a lower bound on the stochastic variation). Shorter exposure time and/or lower dose generally leads to higher throughput but greater stochastic variation.
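The piecewise weight above can be written directly as a small function. The names w_default, w_offset and the sample values are assumptions for illustration; the behavior shown is exactly the switch described: once |EPEp| exceeds the offset OF, the EPE term's weight jumps so the optimization favors reducing EPE.

```python
# Piecewise weight sketch:
#   w_p = w_default            when |EPE_p| <= OF
#   w_p = w_default + w_offset when |EPE_p| >  OF
# w_default/w_offset values are illustrative assumptions.

def epe_weight(epe_p, offset, w_default=1.0, w_offset=4.0):
    """Return the EPE-term weight for one evaluation point."""
    return w_default if abs(epe_p) <= offset else w_default + w_offset

print(epe_weight(0.5, offset=1.0))  # 1.0: |EPE_p| within the offset
print(epe_weight(1.5, offset=1.0))  # 5.0: EPE term weighted more heavily
```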
Consideration of substrate throughput and minimization of the stochastic variation may constrain the possible values of the design variables because the stochastic variation is a function of the design variables. Without such a constraint imposed by the desired throughput, the optimization may yield a set of values of the design variables that are unrealistic. For example, if the dose is a design variable, without such a constraint, the optimization may yield a dose value that makes the throughput economically impossible. However, the usefulness of constraints should not be interpreted as a necessity. For example, the throughput may be affected by the pupil fill ratio. For some illumination designs, a low pupil fill ratio may discard radiation, leading to lower throughput. Throughput may also be affected by the resist chemistry. Slower resist (e.g., a resist that requires a higher amount of radiation to be properly exposed) leads to lower throughput. The optimization process therefore is to find a set of values of the one or more design variables, under the constraints (z1, z2, . . . , zN) ∈ Z, that optimize the cost function, e.g., to find:

(\tilde{z}_1, \tilde{z}_2, \ldots, \tilde{z}_N) = \arg \min_{(z_1, z_2, \ldots, z_N) \in Z} CF(z_1, z_2, \ldots, z_N) \quad (Eq. 2)

A general method of optimizing, according to an example, is illustrated in FIG. 11. This method comprises a step 302 of defining a multi-variable cost function of a plurality of design variables. The design variables may comprise any suitable combination selected from design variables representing one or more characteristics of the illumination (300A) (e.g., pupil fill ratio, namely percentage of radiation of the illumination that passes through a pupil or aperture), one or more characteristics of the projection optics (300B) and/or one or more characteristics of the design layout (300C).
For example, the design variables may include design variables representing one or more characteristics of the illumination (300A) and of the design layout (300C) (e.g., global bias) but not of one or more characteristics of the projection optics (300B), which leads to an SMO. Or, the design variables may include design variables representing one or more characteristics of the illumination (300A) (optionally polarization), of the projection optics (300B) and of the design layout (300C), which leads to an illumination-patterning device (e.g., mask)-projection system (e.g., lens) optimization (SMLO). In step 304, the design variables are simultaneously adjusted so that the cost function is moved towards convergence. In step 306, it is determined whether a predefined termination condition is satisfied. The predetermined termination condition may include various possibilities, e.g., one or more selected from: the cost function may be minimized or maximized, as required by the numerical technique used; the value of the cost function has been equal to a threshold value or has crossed the threshold value; the value of the cost function has reached within a preset error limit; and/or a preset number of iterations is reached. If a condition in step 306 is satisfied, the method ends. If the one or more conditions in step 306 are not satisfied, steps 304 and 306 are iteratively repeated until a desired result is obtained. The optimization does not necessarily lead to a single set of values for the one or more design variables because there may be a physical restraint caused by a factor such as pupil fill factor, resist chemistry, throughput, etc.
The optimization may provide multiple sets of values for the one or more design variables and associated performance characteristics (e.g., the throughput) and allows a user of the lithographic apparatus to pick one or more sets. FIG. 22 shows several relations of the throughput (in the unit of number of substrates per hour) on the horizontal axis and a measure of the stochastic variation, for example, the average of the worst corner CDU and LER, on the vertical axis, to resist chemistry (which may be represented by the dose required to expose the resist), pupil fill ratio (also known as "pupil fill factor"), illumination efficiency (e.g., the ratio of mirrors that direct radiation to the patterning device to the total available mirrors in the illuminator) and mask bias. Trace 1811 shows these relations with 100% pupil fill factor and a fast resist. Trace 1812 shows these relations with 100% pupil fill factor and a slow resist. Trace 1821 shows these relations with 60% pupil fill factor and the fast resist. Trace 1822 shows these relations with 60% pupil fill factor and the slow resist. Trace 1831 shows these relations with 29% pupil fill factor and the fast resist. Trace 1832 shows these relations with 29% pupil fill factor and the slow resist. The optimization may present all these possibilities to the user so the user may choose the pupil fill factor and the resist chemistry based on his specific requirement of the stochastic variation and/or throughput. The optimization may further include calculating a relation between a throughput and a pupil fill factor, resist chemistry and a mask bias. The optimization may further include calculating a relation between a measure of a stochastic variation and a pupil fill factor, resist chemistry and a mask bias.
According to an example, also as schematically illustrated in the flow chart of FIG. 23, an optimization may be carried out under each of a set of values of the one or more design variables (e.g., an array, a matrix, or a list of values of the global bias and mask anchor bias) (step 1910). In an example, the cost function of the optimization is a function of one or more measures (e.g., LCDU) of the stochastic variation. Then, in step 1920, various characteristics of the process, the aerial image, and/or resist image (e.g., critical dimension uniformity (CDU), depth of focus (DOF), exposure latitude (EL), mask error enhancement factor (MEEF), LCDU, throughput, etc.) may be presented (e.g., in a 3D plot) to a user of the optimization for each set of values of the one or more design variables. In optional step 1930, the user selects a set of values of the one or more design variables based on his one or more desired characteristics. The flow may be implemented via an XML file or any script language. The illumination, patterning device and projection optics can be optimized alternatively (referred to as Alternative Optimization) or optimized simultaneously (referred to as Simultaneous Optimization). The terms "simultaneous", "simultaneously", "joint" and "jointly" as used herein mean that the one or more design variables representing one or more characteristics of the illumination, patterning device, projection optics and/or any other design variable are allowed to change at the same time. The terms "alternative" and "alternatively" as used herein mean that not all of the design variables are allowed to change at the same time. In FIG. 11, the optimization of all the design variables is executed simultaneously. Such a flow may be called simultaneous flow or co-optimization flow. Alternatively, the optimization of all the design variables is executed alternatively, as illustrated in FIG. 12.
In this flow, in each step, some design variables are fixed while other design variables are optimized to optimize the cost function; then, in the next step, a different set of variables are fixed while the others are optimized to minimize or maximize the cost function. These steps are executed alternatively until convergence or a certain terminating condition is met. As shown in the non-limiting example flowchart of FIG. 12, first, a design layout (step 402) is obtained, then a step of illumination optimization is executed in step 404, where the one or more design variables of the illumination are optimized (SO) to minimize or maximize the cost function while other design variables are fixed. Then, in the next step 406, a patterning device (e.g., mask) optimization (MO) is performed, where the design variables of the patterning device are optimized to minimize or maximize the cost function while other design variables are fixed. These two steps are executed alternatively, until a certain terminating condition is met in step 408. One or more various termination conditions can be used, such as the value of the cost function becomes equal to a threshold value, the value of the cost function crosses the threshold value, the value of the cost function reaches within a preset error limit, a preset number of iterations is reached, etc. Note that SO-MO-Alternative-Optimization is used as an example for the alternative flow. The alternative flow can take many different forms, such as SO-LO-MO-Alternative-Optimization, where SO, LO (projection optics optimization), and MO are executed alternatively and iteratively; or first SMO can be executed once, then LO and MO are executed alternatively and iteratively; and so on. Another alternative is SO-PO-MO (illumination optimization, polarization optimization and patterning device optimization). Finally, the output of the optimization result is obtained in step 410, and the process stops.
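The alternative (SO-MO) flow of FIG. 12 can be sketched on a toy quadratic cost: hold the "mask" variable fixed while optimizing the "source" variable, then swap, and stop when a terminating condition (step 408) is met. The cost and the closed-form sub-minimizers below are illustrative stand-ins, not a lithographic model; the point is the alternation structure.

```python
# Alternating SO/MO sketch on a toy coupled cost. Each sub-step solves
# its one-variable minimization in closed form (an assumption made for
# brevity; in practice each sub-step is itself a full optimization).

def cost(s, m):
    return (s - m) ** 2 + (m - 1.0) ** 2  # toy stand-in for CF

s, m = 5.0, -2.0          # initial "source" and "mask" values
prev = float("inf")
for _ in range(200):
    s = m                 # SO step: argmin_s cost(s, m), m fixed
    m = (s + 1.0) / 2.0   # MO step: argmin_m cost(s, m), s fixed
    cur = cost(s, m)
    if prev - cur < 1e-15:  # terminating condition (step 408)
        break
    prev = cur

print(round(cost(s, m), 6))  # 0.0: alternation reaches the joint optimum
```

For this convex toy problem alternation converges to the joint optimum; in general, as the text notes, the alternative flow only guarantees that each sub-step does not increase the cost.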
The pattern selection algorithm, as discussed before, may be integrated with the simultaneous or alternative optimization. For example, when an alternative optimization is adopted, first a full-chip SO can be performed, one or more 'hot spots' and/or 'warm spots' are identified, and then a MO is performed. In view of the present disclosure, numerous permutations and combinations of sub-optimizations are possible in order to achieve the desired optimization results. FIG. 13A shows one exemplary method of optimization, where a cost function is minimized or maximized. In step S502, initial values of one or more design variables are obtained, including one or more associated tuning ranges, if any. In step S504, the multi-variable cost function is set up. In step S506, the cost function is expanded within a small enough neighborhood around the starting point value of the one or more design variables for the first iterative step (i=0). In step S508, standard multi-variable optimization techniques are applied to the cost function. Note that the optimization problem can apply constraints, such as the one or more tuning ranges, during the optimization process in step S508 or at a later stage in the optimization process. Step S520 indicates that each iteration is done for the one or more given test patterns (also known as "gauges") for the identified evaluation points that have been selected to optimize the lithographic process. In step S510, a lithographic response is predicted. In step S512, the result of step S510 is compared with a desired or ideal lithographic response value obtained in step S522. If the termination condition is satisfied in step S514, i.e., the optimization generates a lithographic response value sufficiently close to the desired value, then the final value of the design variables is outputted in step S518.
The output step may also include outputting one or more other functions using the final values of the design variables, such as outputting a wavefront aberration-adjusted map at the pupil plane (or other planes), an optimized illumination map, and/or an optimized design layout, etc. If the termination condition is not satisfied, then in step S516, the values of the one or more design variables are updated with the result of the i-th iteration, and the process goes back to step S506. The process of FIG. 13A is elaborated in detail below. In an exemplary optimization process, no relationship between the design variables (z1, z2, . . . , zN) and fp(z1, z2, . . . , zN) is assumed or approximated, except that fp(z1, z2, . . . , zN) is sufficiently smooth (e.g., the first order derivatives ∂fp(z1, z2, . . . , zN)/∂zn, (n = 1, 2, . . . , N) exist), which is generally valid in a lithographic projection apparatus. An algorithm, such as the Gauss-Newton algorithm, the Levenberg-Marquardt algorithm, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the gradient descent algorithm, the simulated annealing algorithm, the interior point algorithm, and the genetic algorithm, can be applied to find (z̃1, z̃2, . . . , z̃N). Here, the Gauss-Newton algorithm is used as an example. The Gauss-Newton algorithm is an iterative method applicable to a general non-linear multi-variable optimization problem. In the i-th iteration, wherein the design variables (z1, z2, . . . , zN) take the values of (z1i, z2i, . . . , zNi), the Gauss-Newton algorithm linearizes fp(z1, z2, . . . , zN) in the vicinity of (z1i, z2i, . . . , zNi), and then calculates values (z1(i+1), z2(i+1), . . . , zN(i+1)) in the vicinity of (z1i, z2i, . . . , zNi) that give a minimum of CF(z1, z2, . . . , zN). The design variables (z1, z2, . . . , zN) take the values of (z1(i+1), z2(i+1), . . . , zN(i+1)) in the (i+1)-th iteration. This iteration continues until convergence (i.e., CF(z1, z2, . . . , zN)
does not reduce any further) or a preset number of iterations is reached. Specifically, in the i-th iteration, in the vicinity of (z1i, z2i, . . . , zNi),

f_p(z_1, z_2, \ldots, z_N) \approx f_p(z_{1i}, z_{2i}, \ldots, z_{Ni}) + \sum_{n=1}^{N} \left. \frac{\partial f_p(z_1, z_2, \ldots, z_N)}{\partial z_n} \right|_{z_1 = z_{1i}, z_2 = z_{2i}, \ldots, z_N = z_{Ni}} (z_n - z_{ni}) \quad (Eq. 3)

Under the approximation of Eq. 3, the cost function becomes:

CF(z_1, z_2, \ldots, z_N) = \sum_{p=1}^{P} w_p f_p^2(z_1, z_2, \ldots, z_N) = \sum_{p=1}^{P} w_p \left( f_p(z_{1i}, z_{2i}, \ldots, z_{Ni}) + \sum_{n=1}^{N} \left. \frac{\partial f_p(z_1, z_2, \ldots, z_N)}{\partial z_n} \right|_{z_1 = z_{1i}, z_2 = z_{2i}, \ldots, z_N = z_{Ni}} (z_n - z_{ni}) \right)^2 \quad (Eq. 4)

which is a quadratic function of the design variables (z1, z2, . . . , zN). Every term is constant except the design variables (z1, z2, . . . , zN). If the design variables (z1, z2, . . . , zN) are not under any constraints, (z1(i+1), z2(i+1), . . . , zN(i+1)) can be derived by solving N linear equations: ∂CF(z1, z2, . . . , zN)/∂zn = 0, wherein n = 1, 2, . . . , N. If the design variables (z1, z2, . . . , zN) are under constraints in the form of J inequalities (e.g., tuning ranges of (z1, z2, . . . , zN)) Σn=1N Anj zn ≤ Bj, for j = 1, 2, . . . , J; and K equalities (e.g., interdependence between the design variables) Σn=1N Cnk zn = Dk, for k = 1, 2, . . . , K, the optimization process becomes a classic quadratic programming problem, wherein Anj, Bj, Cnk, Dk are constants. Additional constraints can be imposed for each iteration. For example, a "damping factor" ΔD can be introduced to limit the difference between (z1(i+1), z2(i+1), . . . , zN(i+1)) and (z1i, z2i, . . . , zNi), so that the approximation of Eq. 3 holds. Such constraints can be expressed as zni − ΔD ≤ zn ≤ zni + ΔD. (z1(i+1), z2(i+1), . . . , zN(i+1)) can be derived using, for example, methods described in Numerical Optimization (2nd ed.) by Jorge Nocedal and Stephen J. Wright (Berlin, New York: Springer). Instead of minimizing the RMS of fp(z1, z2, . . . , zN), the optimization process can minimize the magnitude of the largest deviation (the worst defect) among the evaluation points from their intended values.
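The Gauss-Newton iteration described above can be sketched on a toy two-residual, two-variable problem: linearize fp around the current iterate (Eq. 3), solve the resulting linear least-squares problem for the step, and repeat until the step vanishes. The residuals, the finite-difference Jacobian and the starting point are illustrative assumptions, not the patent's lithographic model.

```python
import numpy as np

def residuals(z):
    # toy f_p(z): two smooth nonlinear evaluation-point functions (assumed)
    return np.array([z[0] ** 2 - 1.0, z[0] * z[1] - 2.0])

def jacobian(z, h=1e-6):
    # numerical d f_p / d z_n; f_p assumed sufficiently smooth (as in text)
    J = np.zeros((2, 2))
    for n in range(2):
        dz = np.zeros(2)
        dz[n] = h
        J[:, n] = (residuals(z + dz) - residuals(z)) / h
    return J

z = np.array([3.0, 3.0])           # (z_1i, z_2i): current iterate
for _ in range(20):
    f, J = residuals(z), jacobian(z)
    # Eq. 3/4: minimize || f + J * step ||^2 for the next iterate
    step, *_ = np.linalg.lstsq(J, -f, rcond=None)
    z = z + step
    if np.linalg.norm(step) < 1e-10:  # convergence: CF no longer reduces
        break

print(np.round(z, 6))  # approaches [1, 2], where both f_p vanish
```

A damping factor ΔD, as mentioned in the text, would simply clip each component of `step` to [-ΔD, ΔD] before the update so the linearization of Eq. 3 remains valid.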
In this approach, the cost function can alternatively be expressed as

CF(z_1, z_2, \ldots, z_N) = \max_{1 \le p \le P} \frac{f_p(z_1, z_2, \ldots, z_N)}{CL_p} \quad (Eq. 5)

wherein CLp is the maximum allowed value for fp(z1, z2, . . . , zN). This cost function represents the worst defect among the evaluation points. Optimization using this cost function minimizes the magnitude of the worst defect. An iterative greedy algorithm can be used for this optimization. The cost function of Eq. 5 can be approximated as:

CF(z_1, z_2, \ldots, z_N) = \sum_{p=1}^{P} w_p \left( \frac{f_p(z_1, z_2, \ldots, z_N)}{CL_p} \right)^q \quad (Eq. 6)

wherein q is an even positive integer such as at least 4, or at least 10. Eq. 6 mimics the behavior of Eq. 5, while allowing the optimization to be executed analytically and accelerated by using methods such as the deepest descent method, the conjugate gradient method, etc. Minimizing the worst defect size can also be combined with linearizing of fp(z1, z2, . . . , zN). Specifically, fp(z1, z2, . . . , zN) is approximated as in Eq. 3. Then the constraints on worst defect size are written as inequalities ELp ≤ fp(z1, z2, . . . , zN) ≤ EUp, wherein ELp and EUp are two constants specifying the minimum and maximum allowed deviation for fp(z1, z2, . . . , zN). Plugging Eq. 3 in, these constraints are transformed to, for p = 1, . . . , P,

\sum_{n=1}^{N} \left. \frac{\partial f_p(z_1, z_2, \ldots, z_N)}{\partial z_n} \right|_{z_1 = z_{1i}, \ldots, z_N = z_{Ni}} z_n \le E_{Up} + \sum_{n=1}^{N} \left. \frac{\partial f_p(z_1, z_2, \ldots, z_N)}{\partial z_n} \right|_{z_1 = z_{1i}, \ldots, z_N = z_{Ni}} z_{ni} - f_p(z_{1i}, z_{2i}, \ldots, z_{Ni}) \quad (Eq. 6')

and

-\sum_{n=1}^{N} \left. \frac{\partial f_p(z_1, z_2, \ldots, z_N)}{\partial z_n} \right|_{z_1 = z_{1i}, \ldots, z_N = z_{Ni}} z_n \le -E_{Lp} - \sum_{n=1}^{N} \left. \frac{\partial f_p(z_1, z_2, \ldots, z_N)}{\partial z_n} \right|_{z_1 = z_{1i}, \ldots, z_N = z_{Ni}} z_{ni} + f_p(z_{1i}, z_{2i}, \ldots, z_{Ni}) \quad (Eq. 6'')

Since Eq. 3 is generally valid only in the vicinity of (z1i, z2i, . . . , zNi), in case the desired constraints ELp ≤ fp(z1, z2, . . . , zN) ≤ EUp cannot be achieved in such vicinity, which can be determined by any conflict among the inequalities, the constants ELp and EUp can be relaxed until the constraints are achievable. This optimization process minimizes the worst defect size in the vicinity of (z1i, z2i, . . . , zNi).
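A quick numerical check of why the power sum of Eq. 6 mimics the max of Eq. 5: for a large even q, the largest normalized defect fp/CLp dominates the sum. Taking the q-th root of the sum (a q-norm, shown here purely for the comparison; Eq. 6 itself keeps the weighted power sum) makes the closeness to the max explicit. The fp/CLp values below are made up.

```python
# Worst-defect cost (Eq. 5) vs. its smooth power-q surrogate (Eq. 6).
# ratios are assumed normalized defect sizes f_p / CL_p.

ratios = [0.2, 0.5, 0.9]
q = 20  # large even q, per the text ("at least 4, or at least 10")

exact = max(ratios)                              # Eq. 5
approx = sum(r ** q for r in ratios) ** (1 / q)  # q-norm view of Eq. 6

print(exact)                       # 0.9
print(abs(approx - exact) < 0.05)  # True: surrogate tracks the max closely
```

The smooth surrogate is what allows gradient-based methods (deepest descent, conjugate gradient) to be applied, which the hard max of Eq. 5 does not.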
Then each step reduces the worst defect size gradually, and each step is executed iteratively until certain terminating conditions are met. This will lead to optimal reduction of the worst defect size. Another way to minimize the worst defect is to adjust the weight wp in each iteration. For example, after the i-th iteration, if the r-th evaluation point is the worst defect, wr can be increased in the (i+1)-th iteration so that the reduction of that evaluation point's defect size is given higher priority. In addition, the cost functions in Eq. 4 and Eq. 5 can be modified by introducing a Lagrange multiplier to achieve a compromise between the optimization on RMS of the defect size and the optimization on the worst defect size, i.e.,

CF(z_1, z_2, \ldots, z_N) = (1 - \lambda) \sum_{p=1}^{P} w_p f_p^2(z_1, z_2, \ldots, z_N) + \lambda \max_{1 \le p \le P} \frac{f_p(z_1, z_2, \ldots, z_N)}{CL_p} \quad (Eq. 6''')

where λ is a preset constant that specifies the trade-off between the optimization on RMS of the defect size and the optimization on the worst defect size. In particular, if λ=0, then this becomes Eq. 4 and only the RMS of the defect size is minimized; if λ=1, then this becomes Eq. 5 and only the worst defect size is minimized; if 0<λ<1, then both are taken into consideration in the optimization. Such optimization can be solved using multiple methods. For example, the weighting in each iteration may be adjusted, similar to the one described previously. Alternatively, similar to minimizing the worst defect size from inequalities, the inequalities of Eq. 6' and 6'' can be viewed as constraints of the design variables during solution of the quadratic programming problem. Then, the bounds on the worst defect size can be relaxed incrementally or the weight for the worst defect size can be increased incrementally, the cost function value computed for every achievable worst defect size, and the design variable values that minimize the total cost function chosen as the initial point for the next step.
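The Lagrange-multiplier trade-off of Eq. 6''' can be sketched directly; the fp values, weights and CLp limits below are made up for illustration. At λ=0 the expression reduces to the RMS-style sum of Eq. 4, and at λ=1 to the worst-defect max of Eq. 5, exactly as the text states.

```python
# Eq. 6''' sketch: (1 - lam) * sum_p w_p f_p^2 + lam * max_p f_p / CL_p.
# All numeric values are illustrative assumptions.

def tradeoff_cost(fs, ws, cls, lam):
    rms_term = sum(w * f ** 2 for w, f in zip(ws, fs))
    worst_term = max(f / cl for f, cl in zip(fs, cls))
    return (1 - lam) * rms_term + lam * worst_term

fs, ws, cls = [0.1, 0.4], [1.0, 1.0], [1.0, 1.0]
print(round(tradeoff_cost(fs, ws, cls, lam=0.0), 6))  # 0.17: pure Eq. 4 term
print(round(tradeoff_cost(fs, ws, cls, lam=1.0), 6))  # 0.4: pure Eq. 5 term
```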
By doing this iteratively, the minimization of this new cost function can be achieved. Optimizing a lithographic projection apparatus can expand the process window. A larger process window provides more flexibility in process design and chip design. The process window can be defined as a set of focus and dose values for which the resist image is within a certain limit of the design target of the resist image. Note that all the methods discussed here may also be extended to a generalized process window definition that can be established by different or additional base parameters in addition to exposure dose and defocus. These may include, but are not limited to, optical settings such as NA, sigma, aberration, polarization, or an optical constant of the resist layer. For example, as described earlier, if the process window (PW) also comprises different mask biases, then the optimization includes the minimization of MEEF, which is defined as the ratio between the substrate EPE and the induced mask edge bias. The process window defined on focus and dose values serves only as an example in this disclosure. A method of maximizing the process window, according to an example, is described below. In a first step, starting from a known condition (f0, ε0) in the process window, wherein f0 is a nominal focus and ε0 is a nominal dose, one of the cost functions below is minimized in the vicinity (f0 ± Δf, ε0 ± ε):

CF(z_1, z_2, \ldots, z_N, f_0, \varepsilon_0) = \max_{(f, \varepsilon) = (f_0 \pm \Delta f, \varepsilon_0 \pm \varepsilon)} \max_p \left| f_p(z_1, z_2, \ldots, z_N, f, \varepsilon) \right| \quad (Eq. 7)

or

CF(z_1, z_2, \ldots, z_N, f_0, \varepsilon_0) = \sum_{(f, \varepsilon) = (f_0 \pm \Delta f, \varepsilon_0 \pm \varepsilon)} \sum_p w_p f_p^2(z_1, z_2, \ldots, z_N, f, \varepsilon) \quad (Eq. 7')

or

CF(z_1, z_2, \ldots, z_N, f_0, \varepsilon_0) = (1 - \lambda) \sum_{(f, \varepsilon) = (f_0 \pm \Delta f, \varepsilon_0 \pm \varepsilon)} \sum_p w_p f_p^2(z_1, z_2, \ldots, z_N, f, \varepsilon) + \lambda \max_{(f, \varepsilon) = (f_0 \pm \Delta f, \varepsilon_0 \pm \varepsilon)} \max_p \left| f_p(z_1, z_2, \ldots, z_N, f, \varepsilon) \right| \quad (Eq. 7'')

If the nominal focus f0 and nominal dose ε0 are allowed to shift, they can be optimized jointly with the design variables (z1, z2, . . . , zN).
In the next step, (f0 ± Δf, ε0 ± ε) is accepted as part of the process window if a set of values of (z1, z2, . . . , zN, f, ε) can be found such that the cost function is within a preset limit. If the focus and dose are not allowed to shift, the design variables (z1, z2, . . . , zN) are optimized with the focus and dose fixed at the nominal focus f0 and nominal dose ε0. In an alternative example, (f0 ± Δf, ε0 ± ε) is accepted as part of the process window if a set of values of (z1, z2, . . . , zN) can be found such that the cost function is within a preset limit. The methods described earlier in this disclosure can be used to minimize the respective cost functions of Eqs. 7, 7', or 7''. If the design variables represent one or more characteristics of the projection optics, such as the Zernike coefficients, then minimizing the cost functions of Eqs. 7, 7', or 7'' leads to process window maximization based on projection optics optimization, i.e., LO. If the design variables represent one or more characteristics of the illumination and patterning device in addition to those of the projection optics, then minimizing the cost functions of Eqs. 7, 7', or 7'' leads to process window maximization based on SMLO, as illustrated in FIG. 11. If the design variables represent one or more characteristics of the illumination and patterning device, then minimizing the cost functions of Eqs. 7, 7', or 7'' leads to process window maximization based on SMO. The cost functions of Eqs. 7, 7', or 7'' can also include at least one fp(z1, z2, . . . , zN), such as described herein, that is a function of one or more stochastic variations such as the LWR, local CD variation of 2D features, and/or throughput. FIG. 14 shows one specific example of how a simultaneous SMLO process can use a Gauss-Newton algorithm for optimization. In step S702, starting values of one or more design variables are identified. A tuning range for each variable may also be identified.
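The process-window acceptance test described above can be sketched as a grid check: sample (focus, dose) conditions around the nominal (f0, ε0) and accept each condition whose cost stays within a preset limit. The cost function, the grid offsets and the limit below are toy assumptions standing in for Eqs. 7-7'' at fixed design variables.

```python
import itertools

def cost(f, eps):
    # illustrative stand-in for max_p |f_p(z, f, eps)| at fixed z;
    # grows as (f, eps) moves away from the nominal condition (0.0, 1.0)
    return abs(f - 0.0) * 10.0 + abs(eps - 1.0) * 2.0

f0, eps0, limit = 0.0, 1.0, 0.5   # nominal focus/dose and preset limit
offsets = [-0.02, 0.0, 0.02]      # sampled ±Δf and ±ε around nominal

window = [
    (f0 + df, eps0 + de)
    for df, de in itertools.product(offsets, offsets)
    if cost(f0 + df, eps0 + de) <= limit  # acceptance test
]
print(len(window))  # 9: every sampled (f, eps) condition is accepted here
```

Widening the offsets until some conditions fail the test would trace out the boundary of the process window, which is what the maximization procedure in the text pushes outward.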
In step S704, the cost function is defined using the one or more design variables. In step S706, the cost function is expanded around the starting values for all evaluation points in the design layout. In optional step S710, a full-chip simulation is executed to cover all critical patterns in a full-chip design layout. A desired lithographic response metric (such as CD or EPE) is obtained in step S714, and compared with predicted values of those quantities in step S712. In step S716, a process window is determined. Steps S718, S720, and S722 are similar to the corresponding steps S514, S516 and S518, as described with respect to FIG. 13A. As mentioned before, the final output may be, for example, a wavefront aberration map in the pupil plane, optimized to produce the desired imaging performance. The final output may be, for example, an optimized illumination map and/or an optimized design layout. FIG. 13B shows an exemplary method to optimize the cost function where the design variables (z1, z2, . . . , zN) include design variables that may only assume discrete values. The method starts by defining the pixel groups of the illumination and the patterning device tiles of the patterning device (step 802). Generally, a pixel group or a patterning device tile may also be referred to as a division of a lithographic process component. In one exemplary approach, the illumination is divided into 117 pixel groups, and 94 patterning device tiles are defined for the patterning device, substantially as described above, resulting in a total of 211 divisions. In step 804, a lithographic model is selected as the basis for lithographic simulation. A lithographic simulation produces results that are used in calculations of one or more lithographic metrics, or responses. A particular lithographic metric is defined to be the performance metric that is to be optimized (step 806). In step 808, the initial (pre-optimization) conditions for the illumination and the patterning device are set up.
Initial conditions include initial states for the pixel groups of the illumination and the patterning device tiles of the patterning device such that references may be made to an initial illumination shape and an initial patterning device pattern. Initial conditions may also include mask bias, NA, and/or focus ramp range. Although steps802,804,806, and808are depicted as sequential steps, it will be appreciated that in other examples, these steps may be performed in other sequences. In step810, the pixel groups and patterning device tiles are ranked. Pixel groups and patterning device tiles may be interleaved in the ranking. Various ways of ranking may be employed, including: sequentially (e.g., from pixel group1to pixel group117and from patterning device tile1to patterning device tile94), randomly, according to the physical locations of the pixel groups and patterning device tiles (e.g., ranking pixel groups closer to the center of the illumination higher), and/or according to how an alteration of the pixel group or patterning device tile affects the performance metric. Once the pixel groups and patterning device tiles are ranked, the illumination and patterning device are adjusted to improve the performance metric (step812). In step812, each of the pixel groups and patterning device tiles is analyzed, in order of ranking, to determine whether an alteration of the pixel group or patterning device tile will result in an improved performance metric. If it is determined that the performance metric will be improved, then the pixel group or patterning device tile is accordingly altered, and the resulting improved performance metric and modified illumination shape or modified patterning device pattern form the baseline for comparison for subsequent analyses of lower-ranked pixel groups and patterning device tiles. In other words, alterations that improve the performance metric are retained.
As alterations to the states of pixel groups and patterning device tiles are made and retained, the initial illumination shape and initial patterning device pattern change accordingly, so that a modified illumination shape and a modified patterning device pattern result from the optimization process in step812. In other approaches, patterning device polygon shape adjustments and pairwise polling of pixel groups and/or patterning device tiles are also performed within the optimization process of step812. In an example, the interleaved simultaneous optimization procedure may include altering a pixel group of the illumination and if an improvement of the performance metric is found, the dose or intensity is stepped up and/or down to look for further improvement. In a further example, the stepping up and/or down of the dose or intensity may be replaced by a bias change of the patterning device pattern to look for further improvement in the simultaneous optimization procedure. In step814, a determination is made as to whether the performance metric has converged. The performance metric may be considered to have converged, for example, if little or no improvement to the performance metric has been witnessed in the last several iterations of steps810and812. If the performance metric has not converged, then steps810and812are repeated in the next iteration, where the modified illumination shape and modified patterning device from the current iteration are used as the initial illumination shape and initial patterning device for the next iteration (step816). The optimization methods described above may be used to increase the throughput of the lithographic projection apparatus. For example, the cost function may include a fp(z1, z2, . . . , zN) that is a function of the exposure time. In an example, optimization of such a cost function is constrained or influenced by a measure of the stochastic variation or other metric.
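The ranked greedy loop of steps810through816can be sketched as follows. This is a minimal sketch under strong simplifying assumptions: each division (pixel group or patterning device tile) is reduced to a binary state, and `performance_metric` is a hypothetical stand-in for a lithographic response obtained by simulation; `TARGET` exists only to make the toy metric computable.

```python
# Toy setup: a "division" is a pixel group or patterning device tile whose
# state is 0/1. TARGET is a hypothetical ideal configuration used only to
# give the stand-in metric something to score against.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def performance_metric(states):
    # Higher is better: count of divisions matching the hypothetical ideal.
    return sum(1 for s, t in zip(states, TARGET) if s == t)

def optimize(states, max_iters=10):
    best = performance_metric(states)
    for _ in range(max_iters):                        # steps 810-816 loop
        # Step 810: rank divisions, here by proximity to the center,
        # echoing "ranking pixel groups closer to the center higher".
        order = sorted(range(len(states)),
                       key=lambda i: abs(i - len(states) // 2))
        improved = False
        for i in order:                               # step 812: alter in rank order
            states[i] ^= 1                            # trial alteration
            m = performance_metric(states)
            if m > best:                              # retain improving alterations
                best, improved = m, True
            else:                                     # revert otherwise
                states[i] ^= 1
        if not improved:                              # step 814: convergence check
            break                                     # step 816 otherwise: iterate
    return states, best

final, metric = optimize([0] * 10)
```

Because the toy metric is separable per division, a single ranked pass reaches the optimum and the second pass detects convergence; a real lithographic metric couples divisions, which is why the outer iteration of step816is needed.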
Specifically, a computer-implemented method to increase a throughput of a lithographic process may comprise optimizing a cost function that is a function of one or more stochastic variations of the lithographic process and a function of an exposure time of the substrate, in order to reduce or minimize the exposure time. In one example, the cost function includes at least one fp(z1, z2, . . . , zN) that is a function of one or more stochastic variations. The one or more stochastic variations may include LWR and/or local CD variation of 2D features. In one example, the one or more stochastic variations include one or more stochastic variations of one or more characteristics of an aerial image or a resist image. For example, such a stochastic variation may include line edge roughness (LER), line width roughness (LWR) and/or local critical dimension uniformity (LCDU). Including one or more stochastic variations in the cost function allows finding a value of one or more design variables that minimize the one or more stochastic variations, thereby reducing risk of defects due to stochastic variation. FIG.15Ashows a flow chart for a method of identifying a hot spot of an aerial image or resist image based on a stochastic variation (e.g., LER) of a characteristic or on a variable (e.g., bl_ILS, ILS, or NILS) that is a function of or affects a stochastic variation, according to an example. In optional step2510, a value of a variable (e.g., bl_ILS, ILS, or NILS) that is a function of or affects a stochastic variation (e.g., LER) for a characteristic (e.g., edge location) of an aerial image or resist image is obtained. In step2520, a value of the stochastic variation (e.g., LER) of the characteristic is obtained (e.g., from the value of the variable). In step2530, a range of the characteristic is obtained. The range may be due to any suitable limitation. 
For example, when the stochastic variation is LER, the range may be dictated by a geometry of the pattern of the design layout. For example, the maximum of the LER may not exceed the width of a gap from an edge to its neighboring edge. In step2540, the value of the stochastic variation is compared with the range. If the stochastic variation exceeds the range, the characteristic is identified as a hot spot in step2550. Further processing, such as optimization to reduce the stochastic variation, may be carried out for that characteristic identified as a hot spot. FIG.15Bshows a flow chart for a method of identifying a hot spot of an aerial image or resist image based on a stochastic variation (e.g., LER) of a characteristic (e.g., edge location) of an aerial image or resist image or on a variable (e.g., bl_ILS, ILS, or NILS) that is a function of or affects the stochastic variation, according to an example. In step2610, a range of the characteristic is obtained. In step2620, a range of the stochastic variation (e.g., LER) or a range of the variable (e.g., bl_ILS, ILS, or NILS) is obtained based on the range of the characteristic. In step2630, a value of the stochastic variation or a value of the variable is obtained. In step2640, the value of the stochastic variation or the value of the variable is compared with the respective range thereof. If the value of the stochastic variation or the value of the variable exceeds the respective range thereof, the characteristic is identified as a hot spot in step2650. Further processing, such as optimization to reduce the stochastic variation, may be carried out for that characteristic identified as a hot spot. FIG.16shows a flow chart for a method of reducing a stochastic variation (e.g., LER) of one or more characteristics (e.g., edge location) of an aerial image or resist image, according to an example. 
In step2710, the one or more characteristics are obtained by identifying them as a hot spot from a portion of a design layout, for example, using the method ofFIG.15AorFIG.15B. In step2720, the stochastic variation of the one or more characteristics is reduced, for example, by using a cost function that represents at least the stochastic variation or a variable (e.g., bl_ILS, ILS, or NILS) that is a function of or affects the stochastic variation. In step2730, a hot spot is re-identified from the portion of the design layout. In step2740, it is determined whether a hot spot is identified. If a hot spot is identified, the method proceeds to step2750; if none is identified, the method ends. In step2750, one or more parameters of the optimization (e.g., δ and/or the user-selected offset) are changed and the method reiterates to step2720to perform the optimization with the changed one or more parameters. In an alternative, the one or more parameters may be part of the design layout and steps2740and2750may be eliminated. FIG.17is a block diagram that illustrates a computer system100which can assist in implementing the optimization methods and flows disclosed herein. Computer system100includes a bus102or other communication mechanism for communicating information, and a processor104(or multiple processors104and105) coupled with bus102for processing information. Computer system100also includes a main memory106, such as a random access memory (RAM) or other dynamic storage device, coupled to bus102for storing information and instructions to be executed by processor104. Main memory106also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor104. Computer system100further includes a read only memory (ROM)108or other static storage device coupled to bus102for storing static information and instructions for processor104.
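The hot-spot identification test ofFIG.15Aand the reduction loop ofFIG.16described above can be sketched together as follows. The characteristics, their LER values and limits, and the halving step in `reduce_stochastic_variation` are all hypothetical stand-ins; in the disclosed method the reduction would come from re-optimizing the design variables via a cost function that includes the stochastic variation.

```python
def identify_hot_spots(characteristics):
    # Steps 2540/2550: a characteristic whose stochastic variation (e.g.,
    # LER) exceeds its allowed range is flagged as a hot spot.
    return [c for c in characteristics if c["ler"] > c["ler_limit"]]

def reduce_stochastic_variation(hot_spots, factor=0.5):
    # Step 2720 stand-in: simply scale the variation down; a real
    # implementation would run the cost-function optimization instead.
    for c in hot_spots:
        c["ler"] *= factor

def reduce_hot_spots(characteristics, max_rounds=5):
    for _ in range(max_rounds):
        hot = identify_hot_spots(characteristics)   # steps 2710 / 2730
        if not hot:                                 # step 2740: none left, done
            break
        reduce_stochastic_variation(hot)            # steps 2720 / 2750
    return identify_hot_spots(characteristics)

# Hypothetical edge-location characteristics with LER values and limits.
edges = [
    {"name": "edge_a", "ler": 2.4, "ler_limit": 1.0},
    {"name": "edge_b", "ler": 0.8, "ler_limit": 1.0},
]
remaining = reduce_hot_spots(edges)
```

After two reduction rounds the flagged edge falls within its range and the loop terminates with no remaining hot spots, mirroring the exit condition of step2740.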
A storage device110, such as a magnetic disk or optical disk, is provided and coupled to bus102for storing information and instructions. Computer system100may be coupled via bus102to a display112, such as a cathode ray tube (CRT) or flat panel or touch panel display for displaying information to a computer user. An input device114, including alphanumeric and other keys, is coupled to bus102for communicating information and command selections to processor104. Another type of user input device is cursor control116, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor104and for controlling cursor movement on display112. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. A touch panel (screen) display may also be used as an input device. According to one example, portions of the optimization process may be performed by computer system100in response to processor104executing one or more sequences of one or more instructions contained in main memory106. Such instructions may be read into main memory106from another computer-readable medium, such as storage device110. Execution of the sequences of instructions contained in main memory106causes processor104to perform the process steps described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in main memory106. In an alternative example, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, the description herein is not limited to any specific combination of hardware circuitry and software. The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to processor104for execution. 
Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as storage device110. Volatile media include dynamic memory, such as main memory106. Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise bus102. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor104for execution. For example, the instructions may initially be borne on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system100can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector coupled to bus102can receive the data carried in the infrared signal and place the data on bus102. Bus102carries the data to main memory106, from which processor104retrieves and executes the instructions. The instructions received by main memory106may optionally be stored on storage device110either before or after execution by processor104. Computer system100may also include a communication interface118coupled to bus102.
Communication interface118provides a two-way data communication coupling to a network link120that is connected to a local network122. For example, communication interface118may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface118may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface118sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link120typically provides data communication through one or more networks to other data devices. For example, network link120may provide a connection through local network122to a host computer124or to data equipment operated by an Internet Service Provider (ISP)126. ISP126in turn provides data communication services through the worldwide packet data communication network, now commonly referred to as the “Internet”128. Local network122and Internet128both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link120and through communication interface118, which carry the digital data to and from computer system100, are exemplary forms of carrier waves transporting the information. Computer system100can send messages and receive data, including program code, through the network(s), network link120, and communication interface118. In the Internet example, a server130might transmit a requested code for an application program through Internet128, ISP126, local network122and communication interface118. One such downloaded application may, for example, provide for the illumination optimization described herein.
The received code may be executed by processor104as it is received, and/or stored in storage device110, or other non-volatile storage for later execution. In this manner, computer system100may obtain application code in the form of a carrier wave. FIG.18schematically depicts an exemplary lithographic projection apparatus whose illumination could be optimized utilizing the methods described herein. The apparatus comprises:
- an illumination system IL, to condition a beam B of radiation. In this particular case, the illumination system also comprises a radiation source SO;
- a first object table (e.g., patterning device table) MT provided with a patterning device holder to hold a patterning device MA (e.g., a reticle), and connected to a first positioner to accurately position the patterning device with respect to item PS;
- a second object table (substrate table) WT provided with a substrate holder to hold a substrate W (e.g., a resist-coated silicon wafer), and connected to a second positioner to accurately position the substrate with respect to item PS;
- a projection system (“lens”) PS (e.g., a refractive, catoptric or catadioptric optical system) to image an irradiated portion of the patterning device MA onto a target portion C (e.g., comprising one or more dies) of the substrate W.
As depicted herein, the apparatus is of a transmissive type (i.e., has a transmissive patterning device). However, in general, it may also be of a reflective type, for example (with a reflective patterning device). The apparatus may employ a different kind of patterning device to a classic mask; examples include a programmable mirror array or LCD matrix. The source SO (e.g., a mercury lamp or excimer laser, LPP (laser produced plasma) EUV source) produces a beam of radiation. This beam is fed into an illumination system (illuminator) IL, either directly or after having traversed conditioning means, such as a beam expander Ex, for example.
The illuminator IL may comprise adjusting means AD for setting the outer and/or inner radial extent (commonly referred to as σ-outer and σ-inner, respectively) of the intensity distribution in the beam. In addition, it will generally comprise various other components, such as an integrator IN and a condenser CO. In this way, the beam B impinging on the patterning device MA has a desired uniformity and intensity distribution in its cross-section. It should be noted with regard toFIG.18that the source SO may be within the housing of the lithographic projection apparatus (as is often the case when the source SO is a mercury lamp, for example), but that it may also be remote from the lithographic projection apparatus, the radiation beam that it produces being led into the apparatus (e.g., with the aid of suitable directing mirrors); this latter scenario is often the case when the source SO is an excimer laser (e.g., based on KrF, ArF or F2lasing). The beam PB subsequently intercepts the patterning device MA, which is held on a patterning device table MT. Having traversed the patterning device MA, the beam B passes through the lens PL, which focuses the beam B onto a target portion C of the substrate W. With the aid of the second positioning means (and interferometric measuring means IF), the substrate table WT can be moved accurately, e.g. so as to position different target portions C in the path of the beam PB. Similarly, the first positioning means can be used to accurately position the patterning device MA with respect to the path of the beam B, e.g., after mechanical retrieval of the patterning device MA from a patterning device library, or during a scan. In general, movement of the object tables MT, WT will be realized with the aid of a long-stroke module (coarse positioning) and a short-stroke module (fine positioning), which are not explicitly depicted inFIG.18. 
However, in the case of a stepper (as opposed to a step-and-scan tool) the patterning device table MT may just be connected to a short stroke actuator, or may be fixed. The depicted tool can be used in two different modes:
- In step mode, the patterning device table MT is kept essentially stationary, and an entire patterning device image is projected in one go (i.e., a single “flash”) onto a target portion C. The substrate table WT is then shifted in the x and/or y directions so that a different target portion C can be irradiated by the beam PB;
- In scan mode, essentially the same scenario applies, except that a given target portion C is not exposed in a single “flash”. Instead, the patterning device table MT is movable in a given direction (the so-called “scan direction”, e.g., the y direction) with a speed v, so that the projection beam B is caused to scan over a patterning device image; concurrently, the substrate table WT is moved in the same or opposite direction at a speed V=Mv, in which M is the magnification of the lens PL (typically, M=¼ or ⅕). In this manner, a relatively large target portion C can be exposed, without having to compromise on resolution.
FIG.19schematically depicts another exemplary lithographic projection apparatus1000whose illumination could be optimized utilizing the methods described herein. The lithographic projection apparatus1000comprises:
- a source collector module SO;
- an illumination system (illuminator) IL configured to condition a radiation beam B (e.g. EUV radiation);
- a support structure (e.g. a patterning device table) MT constructed to support a patterning device (e.g. a mask or a reticle) MA and connected to a first positioner PM configured to accurately position the patterning device;
- a substrate table (e.g. a wafer table) WT constructed to hold a substrate (e.g. a resist coated wafer) W and connected to a second positioner PW configured to accurately position the substrate; and
- a projection system (e.g.
a reflective projection system) PS configured to project a pattern imparted to the radiation beam B by patterning device MA onto a target portion C (e.g. comprising one or more dies) of the substrate W. As here depicted, the apparatus1000is of a reflective type (e.g. employing a reflective patterning device). It is to be noted that because most materials are absorptive within the EUV wavelength range, the patterning device may have multilayer reflectors comprising, for example, a multi-stack of Molybdenum and Silicon. In one example, the multi-stack reflector has 40 layer pairs of molybdenum and silicon where the thickness of each layer is a quarter wavelength. Even smaller wavelengths may be produced with X-ray lithography. Since most materials are absorptive at EUV and x-ray wavelengths, a thin piece of patterned absorbing material on the patterning device topography (e.g., a TaN absorber on top of the multi-layer reflector) defines where features would print (positive resist) or not print (negative resist). Referring toFIG.19, the illuminator IL receives an extreme ultra violet radiation beam from the source collector module SO. Methods to produce EUV radiation include, but are not necessarily limited to, converting a material into a plasma state that has at least one element, e.g., xenon, lithium or tin, with one or more emission lines in the EUV range. In one such method, often termed laser produced plasma (“LPP”) the plasma can be produced by irradiating a fuel, such as a droplet, stream or cluster of material having the line-emitting element, with a laser beam. The source collector module SO may be part of an EUV radiation system including a laser, not shown inFIG.19, for providing the laser beam exciting the fuel. The resulting plasma emits output radiation, e.g., EUV radiation, which is collected using a radiation collector, disposed in the source collector module.
The laser and the source collector module may be separate entities, for example when a CO2 laser is used to provide the laser beam for fuel excitation. In such cases, the laser is not considered to form part of the lithographic apparatus and the radiation beam is passed from the laser to the source collector module with the aid of a beam delivery system comprising, for example, suitable directing mirrors and/or a beam expander. In other cases the source may be an integral part of the source collector module, for example when the source is a discharge produced plasma EUV generator, often termed as a DPP source. The illuminator IL may comprise an adjuster for adjusting the angular intensity distribution of the radiation beam. Generally, at least the outer and/or inner radial extent (commonly referred to as σ-outer and σ-inner, respectively) of the intensity distribution in a pupil plane of the illuminator can be adjusted. In addition, the illuminator IL may comprise various other components, such as facetted field and pupil mirror devices. The illuminator may be used to condition the radiation beam, to have a desired uniformity and intensity distribution in its cross section. The radiation beam B is incident on the patterning device (e.g., mask) MA, which is held on the support structure (e.g., patterning device table) MT, and is patterned by the patterning device. After being reflected from the patterning device (e.g. mask) MA, the radiation beam B passes through the projection system PS, which focuses the beam onto a target portion C of the substrate W. With the aid of the second positioner PW and position sensor PS2 (e.g. an interferometric device, linear encoder or capacitive sensor), the substrate table WT can be moved accurately, e.g. so as to position different target portions C in the path of the radiation beam B. Similarly, the first positioner PM and another position sensor PS1 can be used to accurately position the patterning device (e.g. 
mask) MA with respect to the path of the radiation beam B. Patterning device (e.g. mask) MA and substrate W may be aligned using patterning device alignment marks M1, M2 and substrate alignment marks P1, P2. The depicted apparatus1000could be used in at least one of the following modes: 1. In step mode, the support structure (e.g. patterning device table) MT and the substrate table WT are kept essentially stationary, while an entire pattern imparted to the radiation beam is projected onto a target portion C at one time (i.e. a single static exposure). The substrate table WT is then shifted in the X and/or Y direction so that a different target portion C can be exposed. 2. In scan mode, the support structure (e.g. patterning device table) MT and the substrate table WT are scanned synchronously while a pattern imparted to the radiation beam is projected onto a target portion C (i.e. a single dynamic exposure). The velocity and direction of the substrate table WT relative to the support structure (e.g. patterning device table) MT may be determined by the (de-)magnification and image reversal characteristics of the projection system PS. 3. In another mode, the support structure (e.g. patterning device table) MT is kept essentially stationary holding a programmable patterning device, and the substrate table WT is moved or scanned while a pattern imparted to the radiation beam is projected onto a target portion C. In this mode, generally a pulsed radiation source is employed and the programmable patterning device is updated as required after each movement of the substrate table WT or in between successive radiation pulses during a scan. This mode of operation can be readily applied to maskless lithography that utilizes programmable patterning device, such as a programmable mirror array of a type as referred to above. FIG.20shows the apparatus1000in more detail, including the source collector module SO, the illumination system IL, and the projection system PS. 
The source collector module SO is constructed and arranged such that a vacuum environment can be maintained in an enclosing structure220of the source collector module SO. An EUV radiation emitting plasma210may be formed by a discharge produced plasma source. EUV radiation may be produced by a gas or vapor, for example Xe gas, Li vapor or Sn vapor in which the very hot plasma210is created to emit radiation in the EUV range of the electromagnetic spectrum. The very hot plasma210is created by, for example, an electrical discharge causing an at least partially ionized plasma. Partial pressures of, for example, 10 Pa of Xe, Li, Sn vapor or any other suitable gas or vapor may be required for efficient generation of the radiation. In an example, a plasma of excited tin (Sn) is provided to produce EUV radiation. The radiation emitted by the hot plasma210is passed from a source chamber211into a collector chamber212via an optional gas barrier or contaminant trap230(in some cases also referred to as contaminant barrier or foil trap) which is positioned in or behind an opening in source chamber211. The contaminant trap230may include a channel structure. Contamination trap230may also include a gas barrier or a combination of a gas barrier and a channel structure. The contaminant trap or contaminant barrier230further indicated herein at least includes a channel structure, as known in the art. The collector chamber211may include a radiation collector CO which may be a so-called grazing incidence collector. Radiation collector CO has an upstream radiation collector side251and a downstream radiation collector side252. Radiation that traverses collector CO can be reflected off a grating spectral filter240to be focused in a virtual source point IF along the optical axis indicated by the dot-dashed line ‘O’. 
The virtual source point IF is commonly referred to as the intermediate focus, and the source collector module is arranged such that the intermediate focus IF is located at or near an opening221in the enclosing structure220. The virtual source point IF is an image of the radiation emitting plasma210. Subsequently the radiation traverses the illumination system IL, which may include a facetted field mirror device22and a facetted pupil mirror device24arranged to provide a desired angular distribution of the radiation beam21, at the patterning device MA, as well as a desired uniformity of radiation intensity at the patterning device MA. Upon reflection of the beam of radiation21at the patterning device MA, held by the support structure MT, a patterned beam26is formed and the patterned beam26is imaged by the projection system PS via reflective elements28,30onto a substrate W held by the substrate table WT. More elements than shown may generally be present in illumination optics unit IL and projection system PS. The grating spectral filter240may optionally be present, depending upon the type of lithographic apparatus. Further, there may be more mirrors present than those shown in the Figures; for example, there may be 1-6 additional reflective elements present in the projection system PS beyond those shown inFIG.20. Collector optic CO, as illustrated inFIG.20, is depicted as a nested collector with grazing incidence reflectors253,254and255, just as an example of a collector (or collector mirror). The grazing incidence reflectors253,254and255are disposed axially symmetric around the optical axis O and a collector optic CO of this type may be used in combination with a discharge produced plasma source, often called a DPP source. Alternatively, the source collector module SO may be part of an LPP radiation system as shown inFIG.21.
A laser LA is arranged to deposit laser energy into a fuel, such as xenon (Xe), tin (Sn) or lithium (Li), creating the highly ionized plasma210with electron temperatures of several tens of eV. The energetic radiation generated during de-excitation and recombination of these ions is emitted from the plasma, collected by a near normal incidence collector optic CO and focused onto the opening221in the enclosing structure220. U.S. Patent Application Publication No. US 2013-0179847 is hereby incorporated by reference in its entirety. The concepts disclosed herein may be used to simulate or mathematically model any generic imaging system for imaging sub wavelength features, and may be especially useful with emerging imaging technologies capable of producing increasingly shorter wavelengths. Emerging technologies already in use include EUV (extreme ultra violet), DUV lithography that is capable of producing a 193 nm wavelength with the use of an ArF laser, and even a 157 nm wavelength with the use of a Fluorine laser. Moreover, EUV lithography is capable of producing wavelengths within a range of 20-5 nm by using a synchrotron or by hitting a material (either solid or a plasma) with high energy electrons in order to produce photons within this range. While the concepts disclosed herein may be used for imaging on a substrate such as a silicon wafer, it shall be understood that the disclosed concepts may be used with any type of lithographic imaging systems, e.g., those used for imaging on substrates other than silicon wafers. The above-described techniques have been described for the specific application of improving the specific lithographic process of imaging a portion of a design layout onto a substrate using a lithographic apparatus. Embodiments generally provide techniques that use image-related metrics to improve any of the manufacture, testing, measurement and other processes of semiconductor structures on a substrate. In particular, a new image-related metric is generated.
The new image-related metric is referred to throughout the present document as overlay margin. Overlay margin provides an indication of the tolerance against overlay errors in features that are being manufactured. Embodiments also provide techniques for improving the determination of control parameters in any of the processes performed during the manufacture, testing, measurement and other processes that may be performed with respect to a device (e.g., a semiconductor structure) on a substrate, including in dependence on the overlay margin. Overlay margin may be determined from a plurality of images of different layers and parts of a substrate. Each image may be obtained by an imaging device, such as an e-beam based metrology apparatus or any type of scanning electron microscope. An e-beam apparatus (for example manufactured by HMI) may have a 10 μm by 10 μm field of view. The processes that may be improved by the techniques of embodiments include any of: a lithographic process, a scanning process, a priming process, a resist coating process, a soft baking process, a post-exposure baking process, a development process, a hard baking process, a measurement/inspection process, an etching process, an ion-implantation process, a metallization process, an oxidation process and/or a chemo-mechanical polishing process. The overlay margin (as an example) may be used to determine one or more control parameters for any of these processes as well as any combination selected from these processes. Embodiments may include performing both computational metrology and control processes. The computational processes comprise obtaining one or more images of parts of a substrate on each of a plurality of layers of the substrate. Each obtained image comprises features comprised by a structure that is being manufactured on the substrate. An overlay margin is calculated in dependence on one or more properties of the features, such as contours of the features. 
One or more control parameters for controlling one or more processes in the manufacturing and/or other process of the features can then be determined in dependence on the overlay margin. FIG. 24 shows an image of a feature on part of a substrate. The image may represent, for example, a 10 μm by 10 μm area on the substrate. The thick line in the image is a target contour of one of the features. The thin line in the image is the actual outline of the manufactured feature. Although the ideal shape of the feature may be a rectangle, the target contour is curved/rounded since this is the closest possible shape to a rectangle that can be manufactured and therefore the best contour that can actually be achieved. The ideal shape may alternatively be used as the target shape. FIG. 25 shows a plurality of stacked images. The stacked images may each have been obtained from one or more corresponding images of the same feature in different layers of a substrate and/or images of a plurality of features on the same layer of a substrate. The images may additionally, or alternatively, be of features on a plurality of different substrates and/or images of the same feature on the same layer of the substrate but taken by different imaging devices. When stacking the images, an alignment process has been performed. The alignment process may be based on aligning the images in dependence on one or more reference positions in, or superimposed onto, each of the images so that there is no overlay error between the images. For example, the alignment process may comprise aligning the target designs of the features in the images so that there is no overlay error between the target designs. The alignment process may be based on aligning the images in dependence on GDS/GDSII data. The effect of performing the alignment process is to remove the effects of any overlay error between the different images.
The overlay margin is a measure of the stochastic variation of features in the stack of aligned images. The overlay margin may be calculated in dependence on the differences between the contours of corresponding features in the aligned versions of the images. The overlay margin may also be calculated in dependence on the target contours for the features. For example, for each of the images, the overlay margin may be calculated in dependence on a comparison of the feature in the image with the target of the feature. The differences between the contours of features in an image and the contours of features in other images, as well as target contours for the features, can be determined by a plurality of well-known specific image-related metrics, such as critical dimension uniformity (CDU), line width roughness (LWR), critical dimension amplitude and/or placement errors. Overlay margin is related to the known image-metric edge placement error (EPE). EPE is an image-metric that provides an overall representation of the differences between the contours of one or more images of features and the target contours for the features. EPE includes the overlay error between the images of features and the target contours for the features. Overlay margin differs from EPE in that it does not include the overlay error between images of the feature because the overlay error is removed by the above-described alignment process. A way of determining overlay margin is shown in Eq. 8:

Overlay Margin = EPE − Overlay Error   (Eq. 8)

Accordingly, the overlay margin may be calculated by calculating the EPE and the overlay error. The overlay error may be calculated in dependence on the alignments performed on the images. The overlay margin may then be calculated by subtracting the overlay error from a calculation of the EPE. It should be noted that the overlay error in Eq. 8 may be calculated as a combination of an actual overlay amount and a design specification.
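As an illustration only, Eq. 8 can be sketched in a few lines of Python. The function names and the simple additive treatment of the design specification are hypothetical conveniences, not taken from the embodiments:

```python
def overlay_error(actual_overlay: float, design_spec_allowance: float) -> float:
    """Illustrative combination of an actual overlay amount and a design
    specification (here a plain sum; the embodiments do not fix the form)."""
    return actual_overlay + design_spec_allowance

def overlay_margin(epe: float, ov_error: float) -> float:
    """Overlay Margin = EPE - Overlay Error (Eq. 8)."""
    return epe - ov_error

# E.g. an EPE of 5.0 nm with 1.0 nm measured overlay and a 0.5 nm
# required-overlap allowance leaves a 3.5 nm overlay margin.
print(overlay_margin(5.0, overlay_error(1.0, 0.5)))  # 3.5
```

The split into two functions mirrors the text: overlay error is computed from the alignments and the design specification, then subtracted from the EPE.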
This is because a failure condition may occur when it is desired for there to be an overlap between features in different layers of a structure but, even though an overlap occurs, a required area of overlap is not achieved. Similarly, a failure condition may occur when it is desired for there to be a separation of features in different layers of a structure but, even though the features are separated, a required amount of separation is not achieved. The design specification includes the required area of overlap of features and/or the required amount of separation of features. It is therefore appropriate to calculate the overlay error in dependence on a combination of the actual overlay amount and the design specification. The overlay margin may alternatively be determined in dependence on a combination of contributions to the overlay margin in the aligned images. This is shown in the equation below for overlay margin:

OVmargin = √(HROPC² + (3σPBA)² + (6σLWR)² + (3σCDU/2)²)   (Eq. 9)

wherein HROPC is dependent on an error caused by optical proximity correction, σPBA is dependent on an error caused by proximity bias average (PBA), σLWR is dependent on an error caused by line width roughness, and σCDU is dependent on an error caused by critical dimension uniformity. In Eq. 9, the contributions to the determined overlay margin are OPC, PBA, LWR and CDU. Embodiments include alternative constructions of equations for determining the overlay margin that include one or more further contributions to the overlay margin and/or do not include one or more of the contributions to the overlay margin included in Eq. 9. The overlay margin may be calculated in dependence on all of the contributions to an EPE calculation apart from the overlay error. Each of the images is typically of only a small part of the substrate. For example, each image may represent a 10 μm by 10 μm area on the substrate.
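A minimal numeric sketch of Eq. 9, assuming the four contributions combine in root-sum-square fashion with the coefficients of the equation as reconstructed above; the input values are illustrative nm quantities, not data from the embodiments:

```python
import math

def overlay_margin_rss(hr_opc: float, sigma_pba: float,
                       sigma_lwr: float, sigma_cdu: float) -> float:
    """OVmargin per Eq. 9: root-sum-square of the OPC, PBA, LWR and
    CDU contributions with their respective coefficients."""
    return math.sqrt(hr_opc ** 2
                     + (3 * sigma_pba) ** 2
                     + (6 * sigma_lwr) ** 2
                     + (3 * sigma_cdu / 2) ** 2)

# Chosen so each term contributes 3 nm: the RSS combination gives 6 nm.
print(overlay_margin_rss(3.0, 1.0, 0.5, 2.0))  # 6.0
```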
An overlay margin may be calculated in dependence on a plurality of images of different layers of the same part of the substrate. This is the local overlay margin for that part of the substrate. A plurality of local overlay margins may be calculated for each of a plurality of different parts of the substrate, with each of the local overlay margins being calculated in dependence on a plurality of images of different layers of the same part of the substrate. The local overlay margins may be obtained either at all locations on the substrate or at only some locations on the substrate. When the local overlay margins are obtained at only some locations on the substrate, the locations may be selected so as to provide a fingerprint of the substrate. Each image may additionally, or alternatively, be considered as comprising a plurality of sections. Local overlay margins may be calculated for each of the sections of an image such that there are a plurality of local overlay margins for each image. The overlay margin of a substrate may comprise a plurality of local overlay margins, with each of the local overlay margins being calculated in dependence on images of a different part of a substrate and/or sections of the images. An overlapping overlay margin may be defined as the minimum overlay margin of features within an image and/or section of an image. The overlay margin may be represented as an overlay margin map that shows the local variations of the overlay margin across a substrate. The overlay margin may alternatively be represented as an overlapping overlay margin map that shows the local variations of the overlapping overlay margin across a substrate. A global overlay margin may be calculated that is an average of the local overlay margins and/or overlapping overlay margins of the substrate. The overlay margin, and representations of the overlay margin, may be calculated for each of a plurality of values of each parameter that may contribute to the overlay margin.
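The overlapping and global margins described above reduce to simple aggregations. In this sketch the data layout (a dict of per-part lists of per-feature margins, in nm) is an assumption chosen purely for illustration:

```python
# Hypothetical per-feature overlay margins (nm), keyed by substrate part.
local_feature_margins = {
    "part_a": [3.0, 2.0, 4.0],
    "part_b": [2.5, 3.5],
}

def overlapping_margin(feature_margins):
    """Overlapping overlay margin: the minimum margin of the features
    within one image and/or section of an image."""
    return min(feature_margins)

def global_margin(per_part_margins):
    """Global overlay margin: average of the per-part overlapping margins."""
    locals_ = [overlapping_margin(m) for m in per_part_margins.values()]
    return sum(locals_) / len(locals_)

print(overlapping_margin(local_feature_margins["part_a"]))  # 2.0
print(global_margin(local_feature_margins))  # 2.25
```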
The dependence of the overlay margin on each parameter may be calculated, or inferred, from the overlay margins calculated for the values of the parameter. The dependence of the overlay margin on a plurality of parameters may also be determined. For example, an overlapping overlay margin map may be generated that shows the variation of the overlapping overlay margin across the surface of a substrate between two or more layers of the substrate. The overlapping overlay margin map may be determined as a function of critical dimension (CD). A multi-dimensional metric is therefore generated that can be used for overlay and CD co-optimization. The parameters that may contribute to the overlay margin may include focus, dose, illumination pupil shape (e.g. ellipticity), optical aberration (e.g. coma, spherical, astigmatism), etch rate, overlay, contrast, critical dimension, chuck temperature, gas flow and/or RF power distribution. The dependence of the overlay margin on one or more of these parameters may be determined. The yield of a manufacturing process is dependent on the occurrence of manufacturing errors. Manufacturing errors can occur when a desired area of overlap between features in different layers of a structure does not occur. Manufacturing errors also can occur when a minimum desired separation of features in different layers of a structure is not achieved. EPE is a measure of the positional variation of features and contours of the features and can be used to determine an expected yield of correctly manufactured structures and/or the probability of the structures being incorrectly manufactured. Due to the relationship between overlay margin and EPE, as shown in Eqn. 8, overlay margin may be used to determine an allowable amount of overlay error in order to achieve the expected yield of correctly manufactured structures and/or the probability of the structures being incorrectly manufactured. 
The overlay error is dependent on a number of controllable parameters. The one or more values, and possibly one or more ranges of values, of one or more parameters that influence the overlay error may therefore be determined in dependence on the overlay margin so that the overlay error is within a range that is expected to achieve an expected yield. The expected yield may be the desired yield according to a manufacturing specification. Embodiments include determining one or more parameters for controlling the manufacturing, inspection and/or testing processes of structures on a substrate in dependence on the overlay margin. One or more of the parameters that may be controlled in dependence on the overlay margin include: focus, dose, illumination pupil shape (e.g. ellipticity), optical aberration (e.g. coma, spherical, astigmatism), etch rate, overlay, contrast, critical dimension, chuck temperature, gas flow and/or RF power distribution. The processes that are controlled by the one or more parameters may be a lithographic process, a priming process, a resist coating process, a soft baking process, a post-exposure baking process, a development process, a hard baking process, a measurement/inspection process, an etching process, an ion-implantation process, a metallization process, an oxidation process and/or a chemo-mechanical polishing process. The permissible level of EPE is dependent on the manufacturing specification. The manufacturing specification may be dependent on one or more selected from: a desired yield, a maximum probability of the features being incorrectly manufactured, a determined maximum allowable magnitude of an EPE, a determined maximum allowable overlay error, and/or a desired yield of devices. As described above, the EPE is dependent on the overlay margin and the overlay error. Accordingly, the overlay margin allows the restraints on the overlay error to be determined so that the EPE is at a particular level.
The dependence of the overlay error on each parameter may be determined. The one or more values, and/or one or more ranges of values, for each of the one or more parameters may therefore be determined in dependence on the overlay margin. A process parameter may be determined in dependence on one or more selected from: an overlay margin map, one or more local overlay margins and/or a global overlay margin. Parameters that affect the overlay error may be co-determined such that the applied value of one of the control parameters is dependent on an applied value of another of the control parameters. The co-determination of at least two of the control parameters may be dependent on the combined effect of the at least two control parameters and/or the interdependence of the at least two control parameters. By co-determining control parameters, the combined effects of control parameters, and/or the interdependence of effects of control parameters, can be used to advantageously improve the determination of control parameters for improving yield, or optimizing with respect to any other goal. One or more restraints on the rate of change and/or range of values of one or more control parameters during a process may be determined. For example, during the manufacture of a device, there may be a limit on the extent that the focus may change between two different locations on a substrate due to the rate at which focus can be changed and the movement speed. Embodiments include using the determined one or more restraints of the one or more control parameters to perform an optimization process on the one or more control parameters given the permissible overlay error. For example, given the one or more restraints on the value of a parameter that may be applied, a parameter may be set at a level that results in an increased contribution to the overlay error.
This may be made possible, with the total overlay error remaining within an acceptable range, by controlling another parameter to reduce its contribution to the overlay error. The overlay error may be dependent on at least one of the co-determined control parameters and the dimensions of features manufactured on a device may be dependent on at least one other one of the co-determined control parameters. At least one of the co-determined control parameters may include focus, dose, illumination pupil shape, optical aberration, etch rate, overlay, contrast, critical dimension, chuck temperature, gas flow and/or RF power distribution. As described above, the relationship between the overlay margin and one or more applied parameters may be determined. The one or more applied values, and applicable one or more ranges of values, of one or more parameters may be determined in dependence on how the one or more parameters affect the overlay margin. The determination of the one or more applied values and applicable one or more ranges may be made in dependence on the effect of the one or more parameters on both the overlay margin and the overlay error. The co-determination of the applied values and applicable ranges for a plurality of parameters may be made in dependence on the effect of the plurality of parameters on both the overlay margin and the overlay error. For example, one or more parameters may be determined so as to minimize the overlay margin and thereby reduce one or more restraints on the overlay error. This may allow one or more other parameters to be set at a value that increases the contribution of those one or more other parameters to the overlay error. In particular, an overlapping overlay margin map may be determined as a function of critical dimension (CD). This may then be used for overlay and CD co-optimization. FIG. 26 is a flowchart of a process for determining an image-metric of features on a substrate according to an embodiment.
In step 2601, the process begins. In step 2603, a first image of a plurality of features on a substrate is obtained. In step 2605, one or more further images are obtained of a corresponding plurality of features on the substrate, wherein at least one of the one or more further images is of a different layer of the substrate than the first image. In step 2607, aligned versions of the first and one or more further images are generated by performing an alignment process on the first and one or more further images, wherein the alignment process substantially removes the effect of any overlay error between the features in the first image and the corresponding features in each of the one or more further images. In step 2609, an image-metric is calculated in dependence on a comparison of the features in the aligned version of the first image and the corresponding features in the aligned versions of the one or more further images. In step 2611, the process ends. Embodiments include a number of modifications and variations to known processes. Embodiments also include the above-described techniques being applied with an alternative definition of overlay margin. For example, the overlay margin may alternatively be defined as:

Overlay Margin = EPE − (all errors except overlay errors)   (Eq. 10)

The determination of overlay margin in Eq. 10 may be in dependence on a combination of the contributions to the overlay margin. This is shown in Eq. 11 below:

OVmargin = EPE − √(HROPC² + (3σPBA)² + (6σLWR)² + (3σCDU/2)²)   (Eq. 11)

Any of the techniques described throughout the present document can be used to determine and optimize image-related metrics of embodiments. Embodiments determine one or more control parameters for controlling one or more processes in the manufacture of a device. The processes include any process, including a measurement process, and can be performed by any known apparatus.
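The flow of steps 2603 through 2609 can be sketched as follows. Contours are represented here as lists of (x, y) points, and a centroid-shift alignment is used as a simplified stand-in for the alignment process (which, per the text, may instead use reference positions or GDS/GDSII data); all names are hypothetical:

```python
def centroid(points):
    """Mean (x, y) of a contour's points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def align(reference, other):
    """Shift `other` so its centroid matches `reference`'s centroid,
    removing the relative offset (a stand-in for overlay error)."""
    (rx, ry), (ox, oy) = centroid(reference), centroid(other)
    dx, dy = rx - ox, ry - oy
    return [(x + dx, y + dy) for x, y in other]

def mean_contour_distance(a, b):
    """A simple image-metric: mean point-to-point distance between
    two aligned contours with corresponding points."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

layer1 = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
layer2 = [(1.0, 1.0), (3.0, 1.0), (3.0, 3.0), (1.0, 3.0)]  # same shape, offset (1, 1)
aligned = align(layer1, layer2)
print(mean_contour_distance(layer1, aligned))  # 0.0 once the offset is removed
```

Because the two contours differ only by a translation, the metric is zero after alignment; any residual value on real data would reflect the stochastic contour variation that the overlay margin measures.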
One or more processes according to embodiments can be controlled by a computing system executing instructions for performing the one or more processes and that are stored on a non-transitory computer readable medium. The system according to embodiments may comprise a computing system and an electron beam apparatus, wherein the electron beam apparatus is arranged to obtain images of one or more substrates. The system may comprise a lithographic apparatus and/or a metrology apparatus. The yield of a manufacturing process (such as a semiconductor manufacturing process) depends on the number of defects that are present on the end product. Defects may be caused by, e.g., features not being correctly transferred (printed) to, for example, a resist layer on the substrate. Features may be missing or placed and/or dimensioned incorrectly such that neighboring features may merge. Some examples of types of defects that may occur are shown in FIGS. 27A to 27F. FIGS. 27A to 27D show defects that may be identified within a single layer. The type of defect shown in FIG. 27A is when the feature for a target design of the feature is entirely missing. The type of defect shown in FIG. 27B is when a feature is formed for a target design but the feature is formed in the incorrect position and does not overlap with the target design. The type of defect shown in FIG. 27C is when a plurality of features are formed for a target design of a single feature. The type of defect shown in FIG. 27D is when two neighboring formed features overlap with each other and have merged. FIGS. 27E and 27F show defects that may be identified across two layers. In FIG. 27E, a defect is caused by a feature that has been formed in one layer not overlapping as required with a design target in another layer. In FIG. 27F, a defect has been caused by a feature that has been formed in one layer overlapping with a design target in another layer when no overlap was intended.
It is known, with optical lithography, that the CD of features is dependent on the applied dose and/or focus. This is the basis for Bossung curve analysis. A target CD is therefore associated with a process window of focus and dose values for achieving the target CD. It is known for a process window for focus and dose values to be determined by measuring the CD of the features of interest at a plurality of different focus and exposure conditions, referred to as a focus exposure matrix (FEM) process. The CD of the features may be measured by a metrology tool. The CD value that is used may be the mean CD (μ) and determined by averaging the individual measurement values of CD so as to form a mean CD process window. The focus exposure matrix conditions may be unique per exposed die and so a sampling of the features of interest within each die is chosen. The density of the sampling can be optimized in dependence on the time required by the metrology tool to take the measurements. For an accurate determination of the mean CD, the sampling does not need to be extensive. The center of a mean CD process window represents the dose and focus conditions that provide the least mean CD variation due to fluctuations in the dose or focus. It is known for the focus and dose values that provide the center of the mean CD process window to be the focus and dose values used in a lithography process. In addition to determining the mean CD (μ), statistical methods on the variation of the CD of the features obtained by the metrology tool can be used to determine the standard deviation (σ), variance (σ²), skewness (γ) and kurtosis (κ). During an optical lithography process, using a specific focus condition and a specific dose condition, metrology sampling within each die (intra-field), die to die (inter-field), substrate to substrate, and lot to lot, may generate measured data of one or more dimensions of features and the measured data may be analyzed.
A CD uniformity, which is related to the standard deviation or variance of the measurements of one or more dimensions of the features, may be determined and can be used to monitor and/or control the production process. The standard deviation of the measurements may be used as the CD uniformity. However, the variance of the measurements may alternatively be used as the CD uniformity. In CD-SEM metrology, a CD-SEM tool obtains data across an area of a substrate. The area that data can be obtained from is the field of view of the CD-SEM tool. The field of view may be 10 μm by 10 μm (or larger). The CD-SEM tool may be an e-beam apparatus. The tool used to obtain the data may be any suitable type of SEM, such as a SEM manufactured by HMI. FIG. 28 shows examples of measured data, obtained by a CD-SEM tool, of dense contact hole arrays. From the CD-SEM measurement data, dimensional data, such as the CDs of contact holes, across a region, such as a field of view of the CD-SEM tool, can be obtained and an analysis of defects can be performed. Within a field of view, for each of a plurality of occurrences of a feature of interest, one or more properties of the feature can be measured. The measurements can be used to determine the mean CD of each field of view within a die and also, by averaging the mean CD of each field of view within the die, the overall mean CD of that die. The measurements can also be used to determine the within field of view CD uniformity and within die CD uniformity. For example, the local CD uniformity (LCDU) may be the standard deviation of the CD measurements within a single CD-SEM field of view. The magnitude of the local CD uniformity may be substantially the same from location to location, die to die, substrate to substrate, and lot to lot. Current state of the art lithography processes may achieve below 1 nm CD uniformity when comparing mean CD values within a die, die to die and between substrates.
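The statistics described above (per-field-of-view mean CD, die mean CD as the average of field-of-view means, and LCDU as the within-field-of-view standard deviation) can be sketched directly. The text does not specify whether a population or sample standard deviation is used, so the population form is assumed here; the CD values are illustrative:

```python
import statistics

def mean_cd(cds):
    """Mean CD of the per-feature CD measurements in one field of view."""
    return statistics.fmean(cds)

def lcdu(cds):
    """Local CD uniformity: std. dev. of CDs within one field of view
    (population form assumed; sample form is another reasonable choice)."""
    return statistics.pstdev(cds)

# Two hypothetical fields of view within one die (CDs in nm).
fovs = [[20.0, 22.0, 24.0], [21.0, 21.0, 21.0]]
die_mean_cd = statistics.fmean(mean_cd(f) for f in fovs)  # average of FOV means
print(die_mean_cd)  # 21.5
print(round(lcdu(fovs[0]), 3))  # LCDU of the first field of view
```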
However, the local CD uniformity performance for critical layers may exceed 1 nm. It has been realized that the properties of a statistical parameter that is at least partially dependent on the CD variation, such as the local CD uniformity, the within die uniformity, the die to die uniformity and/or observed printed defects, can be used to improve the process control. More particularly, the occurrence of defects, and thereby the yield of a manufacturing process, depends on a statistical parameter that describes the CD variation. By further determining the process window in dependence on a statistical parameter that is dependent on the CD variation, an improved process window may be determined over a process window that is only dependent on the mean CD. The measurement data used to obtain the statistical parameter that includes the CD variation may be obtained from an after development inspection of features formed within a resist applied to the substrate and/or from an after etch inspection of features formed within a layer applied to the substrate. From the measurement data it is possible to extract both dimensional data and defect data, wherein the defects may include one or more missing features or the merging of at least two features. The measurement data may comprise data associated with one or more dimensions of any feature of interest, such as contact holes, lines and spaces and/or more complicated two-dimensional product features. Typically the measurement data comprises product feature data obtained by an electron-beam metrology tool, such as a CD-SEM or a large field of view electron-beam tool. The metrology tool may be capable of measuring variation of one or more dimensions of the features at a per feature basis in order for LCDU metrics to be determined. The statistical parameter may be based on measurement data comprising dimensional and defect data of features across multiple layers of a substrate. 
For example, the measurement data may comprise dimensional data of features comprised within at least two layers on the substrate, and the determined defects may be of the types shown in FIGS. 27E and/or 27F, associated with defects occurring across the at least two layers. Embodiments provide a new indicator for predicting the probability of defects based on obtained measurements of features, such as CD-SEM metrology measurements. Embodiments use image analysis techniques on measured data of features on a substrate, such as that shown in FIG. 28, to determine if defects have occurred. Examples of the types of defects that may be detected are shown in FIG. 27. A defect may be, for example, a missing contact hole or contact holes that have merged. Missing contact holes may be caused by the CD being too small. Merging contact holes may be caused by the CD being too large. The contact holes have become so wide that the resist wall between them collapses and two separate contact holes merge to become one large contact hole. The number of features of interest and the defects of features of interest in a field of view may be counted. The proportion of defects may be determined as the ratio of the number of defective features of interest to the total number of features of interest. A defect probability may be determined that is the same as, or based on, the proportion of defects in one or more fields of view. For example, the defect probability may be determined as the average of all of the determined proportions of defects for a plurality of fields of view. A defect probability may be determined only for defects in features of interest or for all features. FIG. 29 shows the relationship between the determined defect probability of features and the mean CD of features. All of the points in FIG. 29 have been generated by performing a FEM process.
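The counting procedure described above reduces to a small function: per field of view, the ratio of defective features of interest to total features, averaged over the fields of view. The per-field (defective, total) counts below are illustrative and deliberately tiny:

```python
def defect_probability(fov_counts):
    """Average, over fields of view, of the ratio of defective features
    of interest to total features of interest in that field of view."""
    return sum(defective / total for defective, total in fov_counts) / len(fov_counts)

# Two fields of view: 1 defect among 8 features, then 3 defects among 8.
print(defect_probability([(1, 8), (3, 8)]))  # 0.25
```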
A series of exposures were performed for a dense contact hole array, such as shown in FIG. 28, for a plurality of focus and dose conditions. For each focus and dose condition, both the mean CD and the defect probability were determined. The defect probability values shown in FIG. 29 fall into three groups. A first group of the defect probability values shown in FIG. 29 has a line of best fit with a negative gradient. The first group of defect probability values indicates that defect probability increases for mean CDs below a threshold value of about 22 nm. A second group of the defect probability values shown in FIG. 29 has a line of best fit with a positive gradient. The second group of defect probability values indicates that defect probability increases for mean CDs above the threshold value. A third group of the defect probability values shown in FIG. 29 has a line of best fit that is the substantially horizontal line at the bottom of FIG. 29. The third group of the defect probability values indicates that when the mean CD is in the range between 15 nm and 25 nm, the defect probability is very low. Although the first group and second group of defect probability values are consistent with each other and identify a threshold value of the mean CD that has the lowest defect probability, the first and second groups of defect probability values are inconsistent with the third group of defect probability values, which indicates that a low defect probability can be achieved when the mean CD is instead anywhere within a range. The mean CD on its own is therefore an unreliable predictor of an appropriate value of mean CD for achieving a particular defect probability. Embodiments include generating a statistical parameter for predicting the defect probability. The statistical parameter is referred to herein as a Tail CD. The Tail CD may be determined in dependence on both the mean CD as well as the statistical variation of the CD, referred to herein as the CD variation.
The CD variation may be dependent on the above-described CD uniformity. The CD variation may therefore be determined in dependence on the LCDU of features of interest in one or more fields of view. For example, the CD variation may be dependent on the average of the LCDU values for all of the fields of view. In particular, the CD variation may be determined to be a multiple of the average of the LCDU values for all of the fields of view and, for each field of view, the LCDU value may be determined to be the standard deviation of the CD values of features in the field of view. The multiple may be three. A first way of calculating the Tail CD may be as the mean CD minus the CD variation. A second way of calculating the Tail CD may be as the mean CD plus the CD variation. The Tail CD may be calculated according to the first way when the mean CD, and/or CD variation, is below a threshold value and the Tail CD may be calculated according to the second way when the mean CD, and/or CD variation, is at, or above, the threshold value. The threshold value may be, for example, determined in dependence on a calculation of a cumulative probability of defect occurrence on either side of the threshold value. For example, the threshold value may correspond to an equal cumulative defect probability for a mean CD, and/or CD variation, smaller than the threshold value and a mean CD, and/or CD variation, at or larger than the threshold value. FIG. 30 shows the relationship between the defect probability and Tail CD. The graph in FIG. 30 may be referred to as a defect probability relationship. The Tail CD values shown in FIG. 30 have been obtained by performing a FEM process with a large number of focus and dose conditions. At each focus and dose condition, the properties of approximately 6500 contact holes were measured and used to determine the mean CD and LCDU values. The proportion of defects was also determined and used to determine the defect probability.
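The two-sided Tail CD rule described above can be sketched directly. The factor of three and the below/at-or-above-threshold branching follow the text, while the numeric inputs and the fixed threshold are illustrative assumptions:

```python
def tail_cd(mean_cd: float, avg_lcdu: float, threshold: float) -> float:
    """Tail CD: mean CD minus the CD variation below the threshold,
    mean CD plus the CD variation at or above it, with the CD variation
    taken as three times the average LCDU across fields of view."""
    cd_variation = 3.0 * avg_lcdu
    if mean_cd < threshold:
        return mean_cd - cd_variation
    return mean_cd + cd_variation

print(tail_cd(18.0, 1.0, threshold=20.0))  # 15.0 (lower tail: 18 - 3*1)
print(tail_cd(24.0, 1.0, threshold=20.0))  # 27.0 (upper tail: 24 + 3*1)
```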
The Tail CD has been calculated as described above. That is to say, the CD variation is determined to be three times the average of the LCDU values for all of the fields of view and, for each field of view, the LCDU value is the standard deviation of the CD values of features in the field of view. The results in FIG. 30 indicate that there is a central range of Tail CD values for which the defect probability is low. The defect probability increases when the Tail CD is lower than the lower extreme of the range and increases when the Tail CD is larger than the upper limit of the range. The central range of Tail CD values therefore indicates Tail CDs for which the defect probability is low. Unlike the values shown in FIG. 29, the values shown in FIG. 30 fall into groups that are consistent with each other. The Tail CD is therefore a more reliable indicator of the defect probability than the mean CD. A process window may be determined in dependence on the central range of Tail CD values. The determined process window should be wide enough to tolerate all sources of CD non-uniformity combined in order to achieve acceptable yield at the defect probability level corresponding to the process window. The defect probability level may be required to be 1e−7 or lower. The graph shown in FIG. 30 is a statistical distribution. Embodiments include determining the correlation relationships within the statistical distribution. The correlation relationships can be used to extrapolate the results and to assist the making of deductions in dependence on the statistical distribution. The tails of the statistical distribution can be characterized by the moments of the distribution. For a pure Gaussian distribution, the mean and the variance, the first and second central moments respectively, are all that is necessary to describe the distribution. The statistical distribution shown in FIG. 30 is not a pure Gaussian distribution because it is skewed.
That is to say, the values of the Tail CD to the left of the central range have a line of best fit with a different gradient from a line of best fit of the values of the Tail CD to the right of the central range. A third central moment, skewness (γ), and a fourth central moment, kurtosis (κ), can be defined to describe the distribution shown in FIG. 30. A left tail correlator χL may be determined for the values of the Tail CD lower than values of the Tail CD in the central range. This is for Tail CDs that are calculated as the difference between the mean CD and the CD variation. A right tail correlator χR may be determined for the values of the Tail CD larger than values of the Tail CD in the central range. This is for Tail CDs that are calculated as the sum of the mean CD and the CD variation. Each value of χL and χR may be separately determined in dependence on the standard deviation, skewness and kurtosis of the statistical distribution of the Tail CD values. When χL,R = (σ + γ) × κ, the values χL,R are the tail correlators of a Skew-Normal approximation. The tail correlators may be used to characterize the Tail CD statistical distribution. Embodiments include using the tail correlators to determine one or more formulae for describing each of the tails of the statistical distribution. The formulae can be used to extrapolate each of the tails. The extrapolation allows defect probabilities outside of the range provided by the results in the statistical distribution to be estimated. FIG. 31 shows a line of the correlation relationship of Tail CD values that are lower than values of the Tail CD in the central range of the statistical distribution. The line of the correlation relationship has been extrapolated. The extrapolation allows any defect probability to be estimated.
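As one reading of the χL,R = (σ + γ) × κ relation, a tail correlator can be computed from the moments of that tail's Tail CD values. This is a minimal sketch under the assumption of plain population (biased) moment estimators; the embodiments do not prescribe a particular estimator.

```python
def central_moments(xs):
    """Population standard deviation, skewness and kurtosis of a sample."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m3 = sum((x - mu) ** 3 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    sigma = m2 ** 0.5
    gamma = m3 / sigma ** 3  # skewness (third standardized moment)
    kappa = m4 / sigma ** 4  # kurtosis (fourth standardized moment)
    return sigma, gamma, kappa

def tail_correlator(tail_cd_values):
    """chi = (sigma + gamma) * kappa, per the relation given in the text."""
    sigma, gamma, kappa = central_moments(tail_cd_values)
    return (sigma + gamma) * kappa
```

For a symmetric sample the skewness term vanishes and the correlator reduces to σ × κ, which illustrates why a skewed distribution yields different left and right correlators.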
As shown in FIG. 31 and for the particular data that underlies FIG. 31, the extrapolation allows the determination that a Tail CD that is equal to, or larger than, 19 nm is required for providing a defect probability of 1e−6. Due to the large number of measurements required to measure a low defect probability, it may only be possible to determine the Tail CD for providing a low defect probability by using such an extrapolation of the Tail CD values. FIG. 32 shows the line of the correlation relationship for values that are larger than values of the Tail CD in the central range of the statistical distribution as well as the line of the correlation relationship for values that are lower than values of the Tail CD in the central range. The point where the lines of each correlation relationship intersect each other indicates the Tail CD for providing the minimum achievable defect probability. This value of the Tail CD may be selected as a target Tail CD and the process window of focus and dose values may be determined as appropriate focus and dose values for providing the target Tail CD. Alternatively, a target range of Tail CD values over which the required defect probability is at, or below, a particular level may be determined. The upper and lower limits of the target range may be determined as the Tail CD values of each of the lines of the correlation relationships at the required defect probability. The process window of focus and dose values may be determined as focus and dose values for providing Tail CDs within the target range. The center of the process window may be determined as the focus and dose values that provide the Tail CD at the mid-point of the target range of Tail CD values. The results in FIG. 32 demonstrate that by determining a smaller and re-centered process window according to the techniques according to embodiments, the number of defects may be reduced by a factor of three.
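A minimal sketch of this extrapolation, assuming each tail of the defect probability relationship is approximately linear in log10(defect probability) versus Tail CD. The straight-line model and the least-squares fit are assumptions for illustration; the embodiments do not prescribe a specific fitting method.

```python
import math

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def target_tail_cd(left_pts, right_pts, required_prob):
    """left_pts/right_pts: (tail_cd, defect_probability) pairs from each tail.

    Fits a straight line per tail in log10(probability) space, intersects the
    lines to estimate the Tail CD of minimum defect probability, and solves
    each line at required_prob for the target-range limits.
    """
    aL, bL = fit_line([p[0] for p in left_pts],
                      [math.log10(p[1]) for p in left_pts])
    aR, bR = fit_line([p[0] for p in right_pts],
                      [math.log10(p[1]) for p in right_pts])
    best_cd = (bR - bL) / (aL - aR)  # intersection of the two tail lines
    lo = (math.log10(required_prob) - bL) / aL
    hi = (math.log10(required_prob) - bR) / aR
    return best_cd, (lo, hi)
```

The mid-point of the returned range corresponds to the center of the process window described in the text.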
FIG. 33 shows how a target value of Tail CD, or target range of Tail CDs, that corresponds to achieving a desired defect probability can be used to determine the process windows of the focus and dose values. For each focus value and dose value, a Tail CD may either be directly calculated from measured data, inferred from extrapolations of the available data or estimated using other techniques. The process windows for the focus and dose can be determined as the focus and dose values that together provide the target Tail CD, or a Tail CD within the target range of Tail CDs. By selecting the target focus value to be in the center of the focus process window, and the target dose value to be in the center of the dose process window, the probability of the number of defects increasing due to fluctuations of the actual applied focus and dose values from their target values is reduced. Although embodiments have been described with reference to determining focus and dose values, the techniques of embodiments may also be used to determine any of: an etch tool setting, a deposition tool setting, a resist development setting, a laser bandwidth, an optical aberration and/or a dynamic parameter of a lithographic apparatus (such as a process setting in a mechatronic system). FIG. 34 shows example statistical distributions of Tail CD that have been generated according to the techniques according to embodiments. One of the distributions was generated using resist A, which had a nominal required dose of approximately 55 mJ/cm². The other distribution was generated using resist B, which had a nominal required dose of approximately 24 mJ/cm². A comparison of the Tail CDs indicates that resist B has a larger CD variation and smaller process window than resist A. The characterization of a process according to embodiments may therefore be used to aid the selection of the photoresist used in the process of manufacturing features. Embodiments include calculating the Tail CD in alternative ways.
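The selection of a focus/dose process window from a target Tail CD range can be sketched as below. The grid of precomputed Tail CD values per (focus, dose) condition and the use of a simple centroid as the window "center" are assumptions for illustration.

```python
def process_window(grid, target_range):
    """grid: dict mapping (focus, dose) -> Tail CD for that condition.

    Returns the (focus, dose) conditions whose Tail CD falls inside
    target_range, together with a simple center estimate (the mean focus
    and mean dose of the window), or ([], None) if the window is empty.
    """
    lo, hi = target_range
    window = [fd for fd, tcd in grid.items() if lo <= tcd <= hi]
    if not window:
        return [], None
    center_focus = sum(f for f, _ in window) / len(window)
    center_dose = sum(d for _, d in window) / len(window)
    return window, (center_focus, center_dose)
```

Selecting target values at the returned center reduces the risk that fluctuations in the applied focus and dose push the process out of the window.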
For example, embodiments include the Tail CD being calculated based on cumulative probabilities. In some applications, it may be that one type of defect is more serious than another. For example, merging contact holes may be a defect that is completely unacceptable but missing contact holes may be a defect that is tolerable. The merging contact holes defect may be caused by a CD that is too large whereas the missing contact holes defect may be caused by a CD that is too small. The different types of defect are therefore caused by opposite extremes of the CD. Embodiments include determining a process window, in dependence on a determined correlation relationship between defect probability and Tail CD, with the process window biased towards reducing the probability of the unacceptable defect occurring. The process window may be determined in dependence on only the line of the correlation relationship for the unacceptable defect. Such a process window would provide a defect probability for the unacceptable defect that is below a required value, even though the process window may increase the number of tolerable defects that occur. Embodiments also include a process window being determined based on one or more additional criteria. For example, a dose value may be determined that both provides a Tail CD associated with an acceptable defect probability and is also low enough to allow a lithography apparatus to meet a certain productivity requirement, such as throughput. Embodiments include a preferred process setting being a minimum dose setting for which the probability of occurrence of defects meets the manufacturing requirements. Further control of the dose may be performed only at locations on a substrate that are associated with a Tail CD for which the predicted defect probability exceeds a certain value. Focus control may be based on a similar principle. 
Known feature characteristics, for example using layout information, can be used to predict which focus deviations would give rise to an unacceptable increase in defect probability. Further focus control can be performed at locations on the substrate associated with the unacceptable increase in defect probability. Embodiments also include using one or more known process settings applied to substrates, such as focus and dose maps, for targeting the defect inspection. Only the substrates, or the parts of a substrate, may be inspected that have Tail CDs that are associated with an increased risk of defect occurrences. Alternatively, or additionally, features may be selected for inspection based on their feature specific defect based process window. That is to say, features with a narrow range of Tail CDs for which the defect probability meets a process limit may be selected for defect inspection, while features that are more robust to defect occurrences are not inspected. This may result in a substantial decrease in the metrology measurement time. FIG. 35 shows a method of determining a characteristic of one or more processes for manufacturing features on a substrate according to an embodiment. In step 3501, the process starts. In step 3503, image data of a plurality of features on at least part of at least one region on a substrate is obtained. In step 3505, the image data is used to obtain measured data of one or more dimensions of each of at least some of the plurality of features. In step 3507, an overall statistical parameter is determined that is dependent on the variation of the measured data of one or more dimensions of each of at least some of the plurality of features. In step 3509, a probability of defective manufacture of features is determined in dependence on a determined number of defective features in the image data.
In step 3511, the characteristic of the one or more processes is determined to comprise the probability of defective manufacture of features and the overall statistical parameter. In step 3513, the process ends. As depicted in FIG. 33, a target value of Tail CD, or target range of Tail CDs, that corresponds to achieving a desired defect probability can be used to determine the process windows of the focus and dose values. It may be the case that for each focus and/or dose value a large number of CD values are available (for example due to the availability of massive metrology using large field of view (FOV) e-beam tooling). Instead of defining the process window based on a mean CD, Tail CD or other derived metric, it is proposed to derive a probability of the CD associated with a certain focus and/or dose value being within a pre-defined range. The pre-defined range may be, for example, associated with a CD range for which the device comprising the feature for which the CD is measured will function within its one or more specification limits (e.g. the device will yield). The probability may be expressed as a percentage of probability of the CD being compliant with a certain pre-defined range. The resulting process window may be constructed as a matrix of percentile values, wherein a row corresponds, for example, to a behavior of the percentile value through focus and a column to a behavior of the percentile value through dose. A process window according to this description is depicted in FIG. 36. For example, the values of the focus and dose for which the probability equals or exceeds a pre-defined range of 99-100% are depicted in the Figure as a closed curve. Basically, any combination of focus and dose conditions which lies within this example curve (contour) is in this case associated with a high probability of the CD being within its specified range. The process window may further be generalized to any desired parameter of variation.
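The matrix of percentile values described above can be sketched as follows; the per-condition sample lists and the function name are illustrative assumptions. The closed contour in FIG. 36 then corresponds to the region of this matrix where the value is at or above 99%.

```python
def probability_matrix(cd_samples, cd_range):
    """cd_samples: dict mapping (focus, dose) -> list of measured CD values.

    Returns a dict mapping (focus, dose) -> percentage of the CD samples
    that lie within cd_range, i.e. the matrix of percentile values whose
    high-probability contour forms the process window.
    """
    lo, hi = cd_range
    return {fd: 100.0 * sum(lo <= cd <= hi for cd in cds) / len(cds)
            for fd, cds in cd_samples.items()}
```

With massive metrology data (many CDs per focus/dose condition), each matrix entry is simply the empirical probability of the CD meeting its specification at that condition.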
For example, in addition or alternatively to focus and/or dose, aberration, overlay, one or more dynamic performance related parameters (Moving Average, Moving Standard Deviation) or any other parameter related to the performance of a lithographic process may be used. Further, the process window may be specified in terms of a percentile value defined in a vector space having more than 2 dimensions, for example the vector space may have 3 dimensions, the dimensions associated with for example dose, focus and contrast of the imaging (for example based on one or more characteristics of the projection system of the lithographic apparatus). The process window may further be defined based on a different metric than CD, for example a process window associated with a measured overlay (error), image contrast (NILS, ILS), EPE or local CD variations (local critical dimension uniformity (LCDU), LWR, LER) may also be envisioned. In an example the process window is defined as a matrix of values of a probability of EPE being within a pre-defined range across a range of overlay error values, dose values and focus values (3-dimensional space). After establishing a process window according to the method described above it is possible to select a suitable work point of the lithographic process. In an example, the work point of a lithographic process is selected based on a dose value associated with the highest probability of the CD being within its specified range (for example target CD+/−10% deviation). In this example, the process window is a vector of probability values, each value being determined for a different dose value. In another example, the process window is used to derive a contour within a multi-dimensional vector space associated with a minimally acceptable probability of the CD/overlay/EPE/image contrast being within a specified range. The center of the contour may be taken as the most promising work point to configure the lithographic process. 
For example, the minimum acceptable probability of the EPE being within a range of −3.5 nm to +4.5 nm may be taken as 95%. In case the process window is defined within a focus-dose space, the best focus and dose value may be derived from the contour within the space associated with a probability value >=95%, for example based on the center of the contour. This is depicted in FIG. 37. The elliptical contour as drawn in the Figure represents a collection of focus and dose values for which the lithographic process is just delivering CD values meeting a certain criterion. The center of the contour 'X' (where the dashed lines intersect, focus=0.02 μm and dose=53 mJ in this case) may be considered a good work point for the lithographic process. In an embodiment, a plurality of distributions of values of a performance parameter is obtained, each distribution of the values of the performance parameter being associated with a different processing condition. Subsequently, for each distribution of values of the performance parameter, an indicator of a probability of the performance parameter being within a specified range is derived to obtain a plurality of probability indicator values, each probability indicator value associated with a different processing condition. Based on a relation between the value of the probability indicator and the processing condition, a desired processing condition is selected. In an embodiment, the performance parameter is one or more selected from: CD, EPE, overlay (error), local CDU, LER, LWR, image contrast (NILS or ILS) or yield of the process. In an embodiment, the processing condition relates to one or more process parameters such as: focus, dose, optical aberration level(s) of a projection system of a lithographic apparatus, overlay, and/or one or more dynamic conditions associated with synchronization error between a patterning device and a substrate table.
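Picking a work point from the contour can be sketched as below; using the centroid of all grid points meeting the minimum probability as a stand-in for the "center of the contour" is an assumption for this sketch, not the only possible construction.

```python
def work_point(prob_matrix, min_prob=95.0):
    """prob_matrix: dict mapping (focus, dose) -> probability (in percent)
    of the performance parameter being within its specified range.

    Returns the centroid of all grid points whose probability meets the
    minimum required value, used here as a simple work-point estimate.
    """
    inside = [fd for fd, p in prob_matrix.items() if p >= min_prob]
    focus = sum(f for f, _ in inside) / len(inside)
    dose = sum(d for _, d in inside) / len(inside)
    return focus, dose
```

On a grid where the >=95% region surrounds focus = 0.02 and dose = 53, the centroid reproduces the 'X' work point of the FIG. 37 example.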
In an embodiment, the different processing conditions are selections from a vector or matrix of values of one or more process parameters, such as focus and/or dose. In an embodiment, the probability indicator is a percentage of measured or simulated samples comprised within a distribution of values of the performance parameter which meet a pre-defined criterion, such as a specified range of the performance parameter. In an embodiment, the desired processing condition is selected based on one or more processing conditions associated with a maximum value of the probability indicator. In an embodiment, the desired processing condition is selected based on a processing condition associated with one or more values of one or more process parameters for which the probability indicator is close to, or equals, a minimum required probability value. As mentioned previously, a relation between the value of the probability indicator and the processing condition may be used to select a desired processing condition. The relation is representative of the process of manufacturing semiconductor devices; a preferred process is robust against the processing conditions (for example focus/dose variations), as this will likely be reflected by a satisfactory yield of the process. Often the yield is expressed as a number or fraction of dies which are functioning within specification. In case the process is fixed, it may be advantageous to select process conditions giving the most optimum, or at least an improved, yield (compared to current conditions). Assuming the probability indicator has been determined across a process window (typically dose and focus parameter values) and dose and focus parameter values are available for (at least part of) a substrate of interest, it is possible to derive a map of the probability indicator across (at least part of) the substrate of interest.
Basically, per substrate location, a particular value for the probability indicator may be calculated based on the relation and the substrate specific process condition data (focus and dose parameter values). Although the example mentions focus and dose values, any other relevant process parameter may be in scope and available for the substrate of interest (reticle CD and overlay data, aberration data, stage performance data and other process parameters of potentially other processing tools, such as etch related parameters). Once the substrate specific process condition data is combined with the relation, a probability of failure metric can be derived across the at least part of the substrate of interest. The probability indicator being associated with a probability of a dimension or position of a feature being within a pre-defined range may be translated to a probability of failure (defect probability) based on pre-defined requirements on the feature (typically based on its role within a device provided to the substrate of interest and the impact of dimensional and/or positional deviations of the feature on electrical characteristics of the device after fabrication). Hence a map of the failure probability is obtained across the at least part of the substrate for the substrate specific process conditions. The map may be further processed to depict aggregated failure probabilities, for example per die, exposure field or even per functional device area within the dies on the at least part of the substrate of interest. Further, a threshold for the failure probability may be defined. The threshold may for example represent a value of the failure probability which is believed to be the maximum allowable value still giving acceptable device performance. Using the threshold, the failure probability map may for example be used to derive a number or fraction of dies on the substrate of interest which are yielding.
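Combining the calibrated relation with substrate-specific focus and dose maps can be sketched as below. The representation of the relation as a callable and of the maps as per-location dicts is an assumption for this sketch.

```python
def failure_probability_map(relation, focus_map, dose_map):
    """relation: callable (focus, dose) -> failure probability, i.e. the
    calibrated relation between probability indicator and process condition.
    focus_map/dose_map: dicts mapping substrate location -> parameter value.

    Returns a dict mapping location -> failure probability.
    """
    return {loc: relation(focus_map[loc], dose_map[loc]) for loc in focus_map}

def yielding_fraction(fail_map, threshold):
    """Fraction of locations (e.g. dies) whose failure probability is at or
    below the selected threshold, i.e. the yield metric from the text."""
    ok = sum(p <= threshold for p in fail_map.values())
    return ok / len(fail_map)
```

An improved process condition can then be searched for as the target focus/dose that maximizes this yield metric given the known per-substrate focus and dose distributions.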
Further based on the failure probability map and the substrate specific process condition data an improved process condition may be determined which is expected to improve the number or fraction of for example yielding dies on the substrate of interest. For example an updated target dose and focus value may be determined that is predicted to improve the yield (based on the known relation between the probability indicator and dose/focus values and the known distribution of focus and dose values for the substrate of interest). Further processing data associated with previous layers as provided to the substrate of interest may be taken into account for determining an improved process condition for a current layer on the substrate of interest. For example pattern placement error (PPE), CD or overlay information associated with one or more previous layers may be used to enhance the determining of the failure probability values and subsequently may result in determination of an adjusted best processing condition prediction. For example a certain PPE fingerprint may indicate that it is advantageous to slightly bias the device dimensions of a subsequent layer in order to guarantee good contact between the features comprised within the previous and subsequent layer(s). The bias may be a decision to lower the target dose with a certain amount in order to increase the dimensions of the features within a current layer in order to at least partially compensate the relatively large PPE as observed in a previous layer. 
In an embodiment a method is provided to determine a probability of failure of one or more semiconductor devices provided to a substrate, the method comprising: obtaining a relation between the value of a probability indicator and a processing condition; obtaining substrate specific values of one or more parameters associated with the processing condition across at least part of the substrate; and combining the relation and the substrate specific values to determine the probability of failure across at least part of the substrate. In an embodiment the processing condition comprises values of an effective dose and focus deviation across at least part of the substrate. In an embodiment the method further comprises determining a yield metric representative for a fraction or number of yielding dies on the at least part of the substrate based on the probability of failure across at least part of the substrate and a selected threshold of the probability of failure. In an embodiment the method further comprises determining an improved processing condition based on an expected improvement of the yield metric. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. Further embodiments of the disclosure are given in the list of numbered clauses below:1. 
A method of determining a characteristic of one or more processes for manufacturing features on a substrate, the method comprising:obtaining image data of a plurality of features on at least part of at least one region on a substrate;using the image data to obtain measured data of one or more dimensions of each of at least some of the plurality of features;determining an overall statistical parameter that is dependent on the variation of the measured data of one or more dimensions of each of at least some of the plurality of features;determining a probability of defective manufacture of features in dependence on a determined number of defective features in the image data; anddetermining the characteristic of the one or more processes to comprise the probability of defective manufacture of features and the overall statistical parameter.2. The method according to clause 1, wherein the image data is obtained from a plurality of regions on the substrate.3. The method according to clause 2, further comprising:determining, for each of the plurality of regions, a local statistical parameter that is dependent on the variation of the measured data of one or more dimensions of a plurality of features in the region; anddetermining the overall statistical parameter in dependence on a plurality of the local statistical parameters.4. The method according to clause 3, wherein the local statistical parameter of each region is dependent on the local critical dimension uniformity of features in the region.5.
The method according to any preceding clause, further comprising determining a defect probability relationship that indicates the relationship between determined probabilities of defective manufacture of features and respective overall statistical parameters,wherein determining the defect probability relationship comprises generating a plurality of characteristics for one or more processes performed in the manufacturing of features on the substrate, wherein each of the plurality of characteristics is generated by performing the method under different conditions of the one or more processes.6. The method according to any preceding clause, wherein each overall statistical parameter is generated in dependence on:a mean value of one or more dimensions of a plurality of features; anda variation value that is dependent on the variation of one or more dimensions of the plurality of features.7. The method according to clause 6, wherein the variation value is the standard deviation of one or more dimensions of the plurality of features.8. The method according to clause 6 or clause 7, wherein each overall statistical parameter is generated in dependence on either:the difference between the mean value and a multiple of the variation value; orthe sum of the mean value and the multiple of the variation value.9. The method according to clause 8, wherein the multiple of the variation value is three.10. The method according to clause 8 or clause 9, wherein:each overall statistical parameter is generated in dependence on the difference between the mean value and a multiple of the variation value when the mean value is below a threshold value; andeach overall statistical parameter is generated in dependence on the sum of the mean value and the multiple of the variation value when the mean value is at or above the threshold value.11.
The method according to clause 10, wherein the threshold value is determined as the threshold value for which a first cumulative probability value and a second cumulative probability value are the same; wherein:the first cumulative probability value is the cumulative probability of the occurrence of defects for all mean values of a plurality of statistical parameters that are below the threshold value; andthe second cumulative probability value is the cumulative probability of the occurrence of defects for all mean values of a plurality of statistical parameters that are at or above the threshold value.12. The method according to clause 5, or any clause dependent on clause 5, further comprising determining one or more formulae for describing each of the tails of the defect probability relationship.13. The method according to clause 12, further comprising:using the one or more formulae to estimate a minimum achievable defect probability;determining a value of the overall statistical parameter that corresponds to the estimated minimum achievable defect probability; anddetermining one or more process windows for the one or more processes in dependence on the determined value of the overall statistical parameter.14. The method according to clause 12, further comprising:using the one or more formulae to determine a range of values of the overall statistical parameter over which the defect probability is at, or below, a user determined level; anddetermining one or more process windows for the one or more processes in dependence on the determined range of values of the overall statistical parameter.15. 
The method according to clause 12, further comprising:using one or more formulae for describing only one of the tails of the defect probability relationship to determine a value of the overall statistical parameter at and either above or below which the defect probability is at, and either above or below, a user determined level; anddetermining one or more process windows for the one or more processes in dependence on the determined values of the overall statistical parameter.16. The method according to any of clauses 13 to 15, wherein determining a process window of a process comprises determining a process setting in dependence on a known, or estimated, relationship between the process setting of each process and the value of the overall statistical parameter.17. The method according to clause 16, wherein the process settings comprise one or more selected from: a focus setting, a dose setting, an etch tool setting, a laser bandwidth setting, an optical aberration setting, a dynamic parameter setting of a lithographic apparatus, a deposition tool setting and/or a resist development setting.18. The method according to clause 5, or any clause dependent on clause 5, wherein the defect probability relationship is determined by generating a plurality of characteristics at each of a plurality of process settings of a dose process and at each of a plurality of process settings of a focus process.19. The method according to any preceding clause, wherein the measured data is obtained by an after development inspection of features formed with a resist applied to the substrate.20. The method according to any preceding clause, wherein the measured data is obtained by an after etch inspection of features formed within a layer applied to the substrate.21. 
The method according to any preceding clause, wherein the determined defective features in the image data comprise missing features when a feature should be present and the merging of at least two features when the at least two features should be separated from each other.22. The method according to any preceding clause, wherein the measured data comprises data of features comprised within two or more layers on the substrate and the determined defective features in the image data comprises too large an error in the relative positioning of features comprised by different layers.23. The method according to clause 5, or any clause dependent from clause 5, further comprising:determining, for each of a plurality of photoresists, a defect probability relationship and one or more process windows in dependence on the defect probability relationship; andselecting a photoresist for use in the process of manufacturing features in dependence on the determined one or more process windows.24. A system configured to perform the method of any preceding clause.25. The system according to clause 24, wherein the system comprises a computing system and an electron beam apparatus, wherein:the electron beam apparatus is arranged to obtain images of a substrate; andthe computing system is arranged to receive the obtained images of a substrate and perform the method of any of clauses 1 to 23.26. The system according to clause 24 or clause 25, wherein the system comprises a lithographic apparatus and/or a metrology apparatus.27. A non-transitory computer-readable medium comprising instructions that, when executed, are configured to cause the manufacturing process of a device on a substrate to be controlled according to a method according to any of clauses 1 to 23.28. 
A method for determining a desired processing condition, the method comprising: obtaining a plurality of distributions of values of a performance parameter, each distribution of the values of the performance parameter being associated with a different processing condition; deriving for each distribution of values of the performance parameter an indicator of a probability of the performance parameter being within a specified range to obtain a plurality of probability indicator values, each probability indicator value being associated with a different processing condition; and determining the desired processing condition based on a relation between the value of the probability indicator and the processing condition.

29. The method of clause 28, wherein the performance parameter is one or more selected from: critical dimension, edge placement error, overlay (error), local critical dimension uniformity, line edge roughness, line width roughness, image contrast (normalized image log slope or image log slope) or yield of the process.

30. The method of clause 28 or clause 29, wherein the processing condition relates to one or more process parameters such as: focus, dose, optical aberration level(s) of a projection system of a lithographic apparatus, overlay, and/or a dynamic condition associated with synchronization error between a patterning device and a substrate table.

31. The method of any of clauses 28 to 30, wherein the different processing conditions are selections from a vector or matrix of values of one or more process parameters, such as focus and/or dose.

32. The method of any of clauses 28 to 31, wherein the probability indicator is a percentage of measured or simulated samples comprised within a distribution of values of the performance parameter which meet a pre-defined criterion, such as a specified range of the performance parameter.

33.
The method of any of clauses 28 to 32, wherein the desired processing condition is selected based on one or more processing conditions associated with a maximum value of the probability indicator.

34. The method of any of clauses 28 to 32, wherein the desired processing condition is selected based on a processing condition associated with one or more values of one or more process parameters for which the probability indicator is close to, or equals, a minimum required probability value.

35. A method of determining a probability of failure of one or more semiconductor devices provided to a substrate, the method comprising: obtaining the relation between the value of the probability indicator and the processing condition using the method of clause 28; obtaining substrate specific values of one or more parameters associated with the processing condition across at least part of the substrate; and combining the relation and the substrate specific values to determine the probability of failure across at least part of the substrate.

36. The method of clause 35, wherein the processing condition comprises values of an effective dose and focus deviation across at least part of the substrate.

37. The method of clause 35 or 36, further comprising determining a yield metric representative for a fraction or number of yielding dies on the at least part of the substrate based on the probability of failure across at least part of the substrate and a selected threshold of the probability of failure.

38. The method of clause 37, further comprising determining an improved processing condition based on an expected improvement of the yield metric.

Embodiments are provided according to the following clauses:

39.
A method of determining a characteristic of one or more processes for manufacturing features on a substrate, the method comprising: obtaining image data of a plurality of features on at least part of at least one region on a substrate; using the image data to obtain one or more dimensions of at least some features out of the plurality of features and determine a value of a statistical parameter based on the variation of the one or more dimensions; obtaining a probability of defective manufacture of features in dependence on a determined number of defective features in the image data; and determining the characteristic of the one or more processes by deriving a relation between the probability of defective manufacture of features and the statistical parameter.

40. The method according to clause 39, wherein the image data is obtained from a plurality of regions on the substrate.

41. The method according to clause 40, further comprising: determining, for each of the plurality of regions, a value of a local statistical parameter that is dependent on the variation of the one or more dimensions of a plurality of features in the region; and determining the value of the statistical parameter in dependence on values of the local statistical parameter.

42. The method according to clause 41, wherein the value of the local statistical parameter of each region is dependent on the local critical dimension uniformity of features in the region.

43.
The method according to clause 39, further comprising determining a defect probability relationship that indicates the relationship between determined probabilities of defective manufacture of features and values of respective statistical parameters, wherein determining the defect probability relationship comprises generating a plurality of characteristics for one or more processes performed in the manufacturing of features on the substrate, wherein each of the plurality of characteristics is generated by performing the method under different conditions of the one or more processes.

44. The method according to clause 39, wherein the value of the statistical parameter is generated in dependence on: a mean value of one or more dimensions of a plurality of features; and a variation value that is dependent on the variation of one or more dimensions of the plurality of features.

45. The method according to clause 43, or any clause dependent on clause 43, wherein the defect probability relationship is determined by generating a plurality of characteristics at each of a plurality of dose settings of a process and at each of a plurality of focus settings of a process.

46. The method according to clause 39, wherein the determined defective features in the image data comprise missing features when a feature should be present and the merging of at least two features when the at least two features should be separated from each other.

47. The method according to clause 43, further comprising: determining, for each of a plurality of photoresists, a defect probability relationship and one or more process windows in dependence on the defect probability relationship; and selecting a photoresist for use in the process of manufacturing features in dependence on the determined one or more process windows.

48.
A method comprising: obtaining a plurality of distributions of values of a performance parameter, each distribution of the values of the performance parameter being associated with a different processing condition; deriving for each distribution of values of the performance parameter an indicator of a probability of the performance parameter being within a specified range to obtain a plurality of probability indicator values, each probability indicator value being associated with a different processing condition; and determining a desired processing condition based on a relation between the value of the probability indicator and the processing condition.

49. The method of clause 48, wherein the performance parameter is one or more selected from: critical dimension, edge placement error, overlay (error), local critical dimension uniformity, line edge roughness, line width roughness, image contrast (normalized image log slope or image log slope) or yield of the process.

50. The method of clause 48, wherein the different processing conditions are selections from a vector or matrix of values of one or more process parameters, such as focus and/or dose.

51. The method of clause 48, wherein the probability indicator is a percentage of measured or simulated samples comprised within a distribution of values of the performance parameter which meet a pre-defined criterion, such as a specified range of the performance parameter.

52. The method of clause 48, wherein the desired processing condition is selected based on a processing condition associated with one or more values of one or more process parameters for which the probability indicator is close to, or equals, a minimum required probability value.

53.
A method of determining a probability of failure of one or more semiconductor devices provided to a substrate, the method comprising: obtaining a relation between i) the value of a probability indicator associated with a probability of a feature on the substrate having a dimension and/or position within a certain range and ii) a processing condition of the substrate; obtaining substrate specific values of one or more parameters associated with the processing condition across at least part of the substrate; and combining the relation and the substrate specific values to determine the probability of failure across at least part of the substrate.

54. The method of clause 53, further comprising determining a yield metric representative for a fraction or number of yielding dies on the at least part of the substrate based on the probability of failure across at least part of the substrate and a selected threshold of the probability of failure.

55. The method of clause 54, further comprising determining an improved processing condition based on an expected improvement of the yield metric.

56. A non-transitory computer-readable medium comprising instructions that, when executed, are configured to cause the manufacturing process of a device on a substrate to be controlled according to a method according to clause 39.

It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. In addition, where this application has listed the steps of a method or procedure in a specific order, it may be possible, or even expedient in certain circumstances, to change the order in which some steps are performed, and it is intended that the particular steps of the method or procedure claims set forth here below not be construed as being order-specific unless such order specificity is expressly stated in the claim.
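The clauses above describe deriving a characteristic that relates a statistical parameter (e.g., local CD uniformity of measured features) to a probability of defective manufacture, and then reading a process window off that relation. A minimal Python sketch of that flow follows; the function names, the synthetic CD data, and the quadratic toy defect model are all invented for illustration and are not taken from the patent:

```python
import numpy as np

def local_cdu(cds):
    """Local CD uniformity of one region: 3*sigma of the measured CDs."""
    return 3.0 * np.std(cds)

def defect_probability(n_defective, n_total):
    """Observed probability of defective manufacture from the image data."""
    return n_defective / n_total

# Hypothetical per-dose measurements: 500 CDs (nm) per setting plus a toy
# defect count that grows away from the best dose (purely illustrative).
rng = np.random.default_rng(0)
doses = [18.0, 20.0, 22.0]            # mJ/cm^2, illustrative values
records = []
for dose in doses:
    cds = rng.normal(30.0, 0.5 + 0.1 * abs(dose - 20.0), size=500)
    n_def = int(5 * abs(dose - 20.0) ** 2)
    records.append((local_cdu(cds), defect_probability(n_def, 500)))

# The characteristic: the relation between the statistical parameter
# (here LCDU) and the defect probability at each process setting.
for lcdu, p in records:
    print(f"LCDU = {lcdu:.2f} nm  ->  P(defect) = {p:.3f}")

# A process window: dose settings whose defect probability stays at or
# below a user-determined level.
max_p = 0.02
window = [d for d, (_, p) in zip(doses, records) if p <= max_p]
```

With these toy numbers only the centre dose stays at or below the 2% defect level, so the derived window is the single setting 20.0 mJ/cm².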
11860549

DETAILED DESCRIPTION

Before describing embodiments of the invention in detail, it is instructive to present an example environment in which embodiments of the present invention may be implemented.

FIG. 1 at 200 shows a lithographic apparatus LA as part of an industrial production facility implementing a high-volume, lithographic manufacturing process. In the present example, the manufacturing process is adapted for the manufacture of semiconductor products (integrated circuits) on substrates such as semiconductor wafers. The skilled person will appreciate that a wide variety of products can be manufactured by processing different types of substrates in variants of this process. The production of semiconductor products is used purely as an example which has great commercial significance today.

Within the lithographic apparatus (or “litho tool” 200 for short), a measurement station MEA is shown at 202 and an exposure station EXP is shown at 204. A control unit LACU is shown at 206. In this example, each substrate visits the measurement station and the exposure station to have a pattern applied. In an optical lithographic apparatus, for example, a product pattern is transferred from a patterning device MA onto the substrate using conditioned radiation and a projection system. This is done by forming an image of the pattern in a layer of radiation-sensitive resist material.

The term “projection system” used herein should be broadly interpreted as encompassing any type of projection system, including refractive, reflective, catadioptric, magnetic, electromagnetic and electrostatic optical systems, or any combination thereof, as appropriate for the exposure radiation being used, or for other factors such as the use of an immersion liquid or the use of a vacuum. The patterning device MA may be a mask or reticle, which imparts a pattern to a radiation beam transmitted or reflected by the patterning device.
Well-known modes of operation include a stepping mode and a scanning mode. As is well known, the projection system may cooperate with support and positioning systems for the substrate and the patterning device in a variety of ways to apply a desired pattern to many target portions across a substrate. Programmable patterning devices may be used instead of reticles having a fixed pattern. The radiation for example may include electromagnetic radiation in the deep ultraviolet (DUV) or extreme ultraviolet (EUV) wavebands. The present disclosure is also applicable to other types of lithographic process, for example imprint lithography and direct writing lithography, for example by electron beam.

The lithographic apparatus control unit LACU controls all the movements and measurements of various actuators and sensors to receive substrates W and reticles MA and to implement the patterning operations. LACU also includes signal processing and data processing capacity to implement desired calculations relevant to the operation of the apparatus. In practice, control unit LACU will be realized as a system of many sub-units, each handling the real-time data acquisition, processing and control of a subsystem or component within the apparatus.

Before the pattern is applied to a substrate at the exposure station EXP, the substrate is processed at the measurement station MEA so that various preparatory steps may be carried out. The preparatory steps may include mapping the surface height of the substrate using a level sensor and measuring the position of alignment marks on the substrate using an alignment sensor. The alignment marks are arranged nominally in a regular grid pattern. However, due to inaccuracies in creating the marks and also due to deformations of the substrate that occur throughout its processing, the marks deviate from the ideal grid.
Consequently, in addition to measuring position and orientation of the substrate, the alignment sensor in practice must measure in detail the positions of many marks across the substrate area, if the apparatus is to print product features at the correct locations with very high accuracy. The apparatus may be of a so-called dual stage type which has two substrate tables, each with a positioning system controlled by the control unit LACU. While one substrate on one substrate table is being exposed at the exposure station EXP, another substrate can be loaded onto the other substrate table at the measurement station MEA so that various preparatory steps may be carried out. The measurement of alignment marks is therefore very time-consuming and the provision of two substrate tables enables a substantial increase in the throughput of the apparatus. If the position sensor IF is not capable of measuring the position of the substrate table while it is at the measurement station as well as at the exposure station, a second position sensor may be provided to enable the positions of the substrate table to be tracked at both stations. Lithographic apparatus LA may, for example, be of a so-called dual stage type which has two substrate tables and two stations—an exposure station and a measurement station—between which the substrate tables can be exchanged.

Within the production facility, apparatus 200 forms part of a “litho cell” or “litho cluster” that also contains a coating apparatus 208 for applying photosensitive resist and other coatings to substrates W for patterning by the apparatus 200. At an output side of apparatus 200, a baking apparatus 210 and developing apparatus 212 are provided for developing the exposed pattern into a physical resist pattern. Between all of these apparatuses, substrate handling systems take care of supporting the substrates and transferring them from one piece of apparatus to the next.
These apparatuses, which are often collectively referred to as the track, are under the control of a track control unit which is itself controlled by a supervisory control system SCS, which also controls the lithographic apparatus via lithographic apparatus control unit LACU. Thus, the different apparatus can be operated to maximize throughput and processing efficiency. Supervisory control system SCS receives recipe information R which provides in great detail a definition of the steps to be performed to create each patterned substrate.

Once the pattern has been applied and developed in the litho cell, patterned substrates 220 are transferred to other processing apparatuses such as are illustrated at 222, 224, 226. A wide range of processing steps is implemented by various apparatuses in a typical manufacturing facility. For the sake of example, apparatus 222 in this embodiment is an etching station, and apparatus 224 performs a post-etch annealing step. Further physical and/or chemical processing steps are applied in further apparatuses, 226, etc. Numerous types of operation can be required to make a real device, such as deposition of material, modification of surface material characteristics (oxidation, doping, ion implantation etc.), chemical-mechanical polishing (CMP), and so forth. The apparatus 226 may, in practice, represent a series of different processing steps performed in one or more apparatuses. As another example, apparatus and processing steps may be provided for the implementation of self-aligned multiple patterning, to produce multiple smaller features based on a precursor pattern laid down by the lithographic apparatus. As is well known, the manufacture of semiconductor devices involves many repetitions of such processing, to build up device structures with appropriate materials and patterns, layer-by-layer on the substrate.
Accordingly, substrates 230 arriving at the litho cluster may be newly prepared substrates, or they may be substrates that have been processed previously in this cluster or in another apparatus entirely. Similarly, depending on the required processing, substrates 232 on leaving apparatus 226 may be returned for a subsequent patterning operation in the same litho cluster, they may be destined for patterning operations in a different cluster, or they may be finished products to be sent for dicing and packaging. Each layer of the product structure requires a different set of process steps, and the apparatuses 226 used at each layer may be completely different in type. Further, even where the processing steps to be applied by the apparatus 226 are nominally the same, in a large facility, there may be several supposedly identical machines working in parallel to perform the step 226 on different substrates. Small differences in set-up or faults between these machines can mean that they influence different substrates in different ways. Even steps that are relatively common to each layer, such as etching (apparatus 222), may be implemented by several etching apparatuses that are nominally identical but working in parallel to maximize throughput. In practice, moreover, different layers require different etch processes, for example chemical etches, plasma etches, according to the details of the material to be etched, and special requirements such as, for example, anisotropic etching. The previous and/or subsequent processes may be performed in other lithography apparatuses, as just mentioned, and may even be performed in different types of lithography apparatus. For example, some layers in the device manufacturing process which are very demanding in parameters such as resolution and overlay may be performed in a more advanced lithography tool than other layers that are less demanding.
Therefore some layers may be exposed in an immersion type lithography tool, while others are exposed in a ‘dry’ tool. Some layers may be exposed in a tool working at DUV wavelengths, while others are exposed using EUV wavelength radiation.

In order that the substrates that are exposed by the lithographic apparatus are exposed correctly and consistently, it is desirable to inspect exposed substrates to measure properties such as overlay errors between subsequent layers, line thicknesses, critical dimensions (CD), etc. Accordingly, a manufacturing facility in which litho cell LC is located also includes a metrology system which receives some or all of the substrates W that have been processed in the litho cell. Metrology results are provided directly or indirectly to the supervisory control system SCS. If errors are detected, adjustments may be made to exposures of subsequent substrates, especially if the metrology can be done soon and fast enough that other substrates of the same batch are still to be exposed. Also, already exposed substrates may be stripped and reworked to improve yield, or discarded, thereby avoiding performing further processing on substrates that are known to be faulty. In a case where only some target portions of a substrate are faulty, further exposures can be performed only on those target portions which are good.

Also shown in FIG. 1 is a metrology apparatus 240 which is provided for making measurements of parameters of the products at desired stages in the manufacturing process. A common example of a metrology station in a modern lithographic production facility is a scatterometer, for example a dark-field scatterometer, an angle-resolved scatterometer or a spectroscopic scatterometer, and it may be applied to measure properties of the developed substrates at 220 prior to etching in the apparatus 222.
Using metrology apparatus 240, it may be determined, for example, that important performance parameters such as overlay or critical dimension (CD) do not meet specified accuracy requirements in the developed resist. Prior to the etching step, the opportunity exists to strip the developed resist and reprocess the substrates 220 through the litho cluster. The metrology results 242 from the apparatus 240 can be used to maintain accurate performance of the patterning operations in the litho cluster, by supervisory control system SCS and/or control unit LACU 206 making small adjustments over time, thereby minimizing the risk of products being made out-of-specification, and requiring re-work. Additionally, metrology apparatus 240 and/or other metrology apparatuses (not shown) can be applied to measure properties of the processed substrates 232, 234, and incoming substrates 230. The metrology apparatus can be used on the processed substrate to determine important parameters such as overlay or CD.

A metrology apparatus suitable for use in embodiments of the invention is shown in FIG. 2(a). This is purely an example metrology apparatus and any suitable metrology apparatus for measuring a process parameter such as overlay on a substrate may be used. A target T and diffracted rays of measurement radiation used to illuminate the target are illustrated in more detail in FIG. 2(b). The metrology apparatus illustrated is of a type known as a dark field metrology apparatus. The metrology apparatus may be a stand-alone device or incorporated in either the lithographic apparatus LA, e.g., at the measurement station, or the lithographic cell LC. An optical axis, which has several branches throughout the apparatus, is represented by a dotted line O. In this apparatus, light emitted by source 11 (e.g., a xenon lamp) is directed onto substrate W via a beam splitter 15 by an optical system comprising lenses 12, 14 and objective lens 16. These lenses are arranged in a double sequence of a 4F arrangement.
A different lens arrangement can be used, provided that it still provides a substrate image onto a detector, and simultaneously allows for access of an intermediate pupil-plane for spatial-frequency filtering. Therefore, the angular range at which the radiation is incident on the substrate can be selected by defining a spatial intensity distribution in a plane that presents the spatial spectrum of the substrate plane, here referred to as a (conjugate) pupil plane. In particular, this can be done by inserting an aperture plate 13 of suitable form between lenses 12 and 14, in a plane which is a back-projected image of the objective lens pupil plane. In the example illustrated, aperture plate 13 has different forms, labeled 13N and 13S, allowing different illumination modes to be selected. The illumination system in the present examples forms an off-axis illumination mode. In the first illumination mode, aperture plate 13N provides off-axis illumination from a direction designated, for the sake of description only, as ‘north’. In a second illumination mode, aperture plate 13S is used to provide similar illumination, but from an opposite direction, labeled ‘south’. Other modes of illumination are possible by using different apertures. The rest of the pupil plane is desirably dark as any unnecessary light outside the desired illumination mode will interfere with the desired measurement signals.

As shown in FIG. 2(b), target T is placed with substrate W normal to the optical axis O of objective lens 16. The substrate W may be supported by a support (not shown). A ray of measurement radiation I impinging on target T from an angle off the axis O gives rise to a zeroth order ray (solid line 0) and two first order rays (dot-chain line +1 and double dot-chain line −1). It should be remembered that with an overfilled small target, these rays are just one of many parallel rays covering the area of the substrate including metrology target T and other features.
Since the aperture in plate 13 has a finite width (necessary to admit a useful quantity of light), the incident rays I will in fact occupy a range of angles, and the diffracted rays 0 and +1/−1 will be spread out somewhat. According to the point spread function of a small target, each order +1 and −1 will be further spread over a range of angles, not a single ideal ray as shown. Note that the grating pitches of the targets and the illumination angles can be designed or adjusted so that the first order rays entering the objective lens are closely aligned with the central optical axis. The rays illustrated in FIGS. 2(a) and 2(b) are shown somewhat off axis, purely to enable them to be more easily distinguished in the diagram. At least the 0 and +1 orders diffracted by the target T on substrate W are collected by objective lens 16 and directed back through beam splitter 15.

Returning to FIG. 2(a), both the first and second illumination modes are illustrated, by designating diametrically opposite apertures labeled as north (N) and south (S). When the incident ray I of measurement radiation is from the north side of the optical axis, that is when the first illumination mode is applied using aperture plate 13N, the +1 diffracted rays, which are labeled +1(N), enter the objective lens 16. In contrast, when the second illumination mode is applied using aperture plate 13S the −1 diffracted rays (labeled −1(S)) are the ones which enter the lens 16.

A second beam splitter 17 divides the diffracted beams into two measurement branches. In a first measurement branch, optical system 18 forms a diffraction spectrum (pupil plane image) of the target on first sensor 19 (e.g. a CCD or CMOS sensor) using the zeroth and first order diffractive beams. Each diffraction order hits a different point on the sensor, so that image processing can compare and contrast orders. The pupil plane image captured by sensor 19 can be used for many measurement purposes such as reconstruction used in methods described herein.
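The remark that the grating pitch and illumination angle can be chosen so that the first-order rays travel along the optical axis follows directly from the grating equation. A small Python sketch of that design calculation; the wavelength, angle, and helper name are illustrative assumptions, not values from the text:

```python
import math

def first_order_angle(wavelength_nm, pitch_nm, incidence_deg):
    """Exit angle of the +1 diffraction order, from the grating equation
    sin(theta_out) = sin(theta_in) - lambda / pitch (angles from normal)."""
    s = math.sin(math.radians(incidence_deg)) - wavelength_nm / pitch_nm
    return math.degrees(math.asin(s))

# Choose the pitch so the +1 order leaves along the optical axis (0 degrees)
# for 30-degree off-axis illumination at 532 nm (all numbers illustrative):
pitch = 532.0 / math.sin(math.radians(30.0))   # about 1064 nm
angle = first_order_angle(532.0, pitch, 30.0)  # essentially 0 degrees
```

The same helper also shows why the zeroth order (set lambda/pitch to 0) simply mirrors the illumination angle, which is what the aperture stop in the second measurement branch blocks.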
The pupil plane image can also be used for focusing the metrology apparatus and/or normalizing intensity measurements of the first order beam. In the second measurement branch, optical system 20, 22 forms an image of the target T on sensor 23 (e.g. a CCD or CMOS sensor). In the second measurement branch, an aperture stop 21 is provided in a plane that is conjugate to the pupil-plane. Aperture stop 21 functions to block the zeroth order diffracted beam so that the image of the target formed on sensor 23 is formed only from the −1 or +1 first order beam. The images captured by sensors 19 and 23 are output to processor PU which processes the image, the function of which will depend on the particular type of measurements being performed. Note that the term ‘image’ is used here in a broad sense. An image of the grating lines as such will not be formed, if only one of the −1 and +1 orders is present.

The particular forms of aperture plate 13 and field stop 21 shown in FIG. 2 are purely examples. In another embodiment of the invention, on-axis illumination of the targets is used and an aperture stop with an off-axis aperture is used to pass substantially only one first order of diffracted light to the sensor. In yet other embodiments, 2nd, 3rd and higher order beams (not shown in FIG. 2) can be used in measurements, instead of or in addition to the first order beams.

The target T may comprise a number of gratings, which may have differently biased overlay offsets in order to facilitate measurement of overlay between the layers in which the different parts of the composite gratings are formed. The gratings may also differ in their orientation, so as to diffract incoming radiation in X and Y directions. In one example, a target may comprise two X-direction gratings with biased overlay offsets +d and −d, and Y-direction gratings with biased overlay offsets +d and −d. Separate images of these gratings can be identified in the image captured by sensor 23.
Once the separate images of the gratings have been identified, the intensities of those individual images can be measured, e.g., by averaging or summing selected pixel intensity values within the identified areas. Intensities and/or other properties of the images can be compared with one another. These results can be combined to measure different parameters of the lithographic process.

The largest area which can be exposed in a single exposure of a lithographic apparatus is defined by its maximum scanning field area. This is defined by the width of the exposure slit in a first direction (often designated the x-direction) and the maximum scan length in the orthogonal direction of the same (substrate) plane (often designated the y-direction). In some cases, the die area (the substrate area of the device being manufactured, referred to herein as the substrate field area or substrate region area) is larger than the maximum scanning field area. In this situation, some or all layers of the device need to be exposed on the substrate region (or substrate field) in multiple (e.g., two) separate adjacent exposures. For example, a substrate region twice as large as the maximum scanning field area may be exposed in two exposures: a first exposure using a first reticle comprising a first pattern is used to print a first substrate sub-region (e.g., first half in terms of area on the substrate) of the layer, and a second reticle comprising a second pattern is used to print a second sub-region of the layer (e.g., the second half) on the substrate, adjacent the first half, therefore forming the complete layer. The two halves can be referred to as having been “stitched” together, with the process sometimes referred to as intra-die stitching.
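The combination of intensities from the +d and −d biased gratings described above is commonly reduced to a first-order overlay approximation; the text leaves the combination unspecified, so the formula, variable names, and numbers in this Python sketch are illustrative assumptions:

```python
def overlay_from_biased_gratings(i_p1_posd, i_m1_posd, i_p1_negd, i_m1_negd, d):
    """Overlay from the +1/-1 order intensities of two gratings biased by
    +d and -d, using the small-overlay linear approximation
        OV = d * (A_plus + A_minus) / (A_plus - A_minus),
    where A is the +1/-1 intensity asymmetry of each biased grating."""
    a_plus = i_p1_posd - i_m1_posd
    a_minus = i_p1_negd - i_m1_negd
    return d * (a_plus + a_minus) / (a_plus - a_minus)

# Synthetic intensities consistent with asymmetry A = K * (OV + bias),
# for a true overlay OV = 2 nm, bias d = 20 nm, K = 0.5 (all illustrative):
K, ov, d = 0.5, 2.0, 20.0
a_p = K * (ov + d)   # asymmetry of the +d grating
a_m = K * (ov - d)   # asymmetry of the -d grating
est = overlay_from_biased_gratings(100 + a_p, 100.0, 100 + a_m, 100.0, d)
```

Here the asymmetry of each biased grating is assumed linear in (overlay + bias); that small-offset regime is exactly where the two-bias formula recovers the overlay while cancelling the unknown proportionality constant K.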
As already discussed, one or more performance parameters, such as overlay, are measured using a metrology device on processed substrates and process corrections are determined which aim to minimize the overlay error for subsequent substrates and/or subsequent layers of the same substrate. The process corrections are typically implemented by the lithographic apparatus in a feedback loop. For a stitched die, the overlay corrections for the layer comprising the two (or more) images to be stitched may be determined by measuring overlay on the preceding (underlying) layer (or from a previous substrate) and splitting these corrections between the corresponding sub-regions (e.g., two halves), such that the corrections for each sub-region of the full field are determined respectively from measurements on the corresponding sub-region in the previous layer.

Overlay corrections may be determined as coefficients (sometimes referred to as k-parameters) for a polynomial in a best fit method such that a correction based on the polynomial minimizes the measured overlay (e.g., on average over the fitted area) when applied. The determined k-parameters may be fed back to the lithographic apparatus in the form of a sub-recipe characterized by the k-parameters. Because the corrections for each sub-region of the substrate field are calculated separately, the overlay corrections at the interface of the sub-regions (i.e., the stitch) are not matched by default. This means that an overlay or matching error could be introduced when stitching the two sub-regions together. Overlay at the stitch is also influenced by the use of a limited number of measurement points (targets) available. This typically results in the overlay correction being optimized only to the points measured, often at the expense of the overlay at unmeasured locations (e.g., including the stitch region). It is possible to provide “stitched targets” for direct measurement of overlay at the stitched region.
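The best-fit determination of k-parameters described above amounts to a least-squares regression of the measured overlay vectors onto a polynomial model. A Python sketch for a low-order linear model follows; the k-numbering, units, and data are illustrative assumptions, and real scanner parameterizations differ:

```python
import numpy as np

def fit_k_params(x, y, dx, dy):
    """Least-squares fit of measured overlay (dx, dy) at target positions
    (x, y) to a linear intrafield model
        dx = k1 + k3*x + k5*y,   dy = k2 + k6*x + k4*y
    (an illustrative low-order parameterization; higher orders extend the
    design matrix with further polynomial terms). Returns the coefficients
    for dx and dy in [constant, x, y] order."""
    A = np.column_stack([np.ones_like(x), x, y])
    kx, *_ = np.linalg.lstsq(A, dx, rcond=None)
    ky, *_ = np.linalg.lstsq(A, dy, rcond=None)
    return kx, ky

# Synthetic overlay at five targets: a 2 nm translation in x plus a linear
# scaling term of 0.01 nm per mm of field position (illustrative numbers).
x = np.array([-100.0, 0.0, 100.0, -100.0, 100.0])   # field positions (mm)
y = np.array([-100.0, 0.0, 100.0, 100.0, -100.0])
dx = 2.0 + 0.01 * x                                  # overlay in x (nm)
dy = np.zeros_like(x)
kx, ky = fit_k_params(x, y, dx, dy)
```

Applying the fitted polynomial with opposite sign as a correction would null the modeled part of the overlay; anything the polynomial cannot represent remains as the residual discussed below for the stitch region.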
This may comprise providing a first half of an overlay target on the first sub-region (e.g., patterned from the first reticle) and a second (complementary) half of the overlay target on the second sub-region (e.g., patterned from the second reticle). Of course, this is only possible where there is sufficient space on the reticle/die. Measurement of the position of one half of the target with respect to the other half provides an indication of a relative positioning metric referred to herein as "stitched overlay" (not true overlay, as it relates to relative positioning within a single layer). When calculating overlay corrections, knowledge of which marks are conventional overlay targets and which ones are stitched targets is required, as these two target types need to be treated differently. A mixture of stitched and conventional target types is envisaged herein, according to an optional embodiment. In addition, the concepts may be combined into a stitched overlay target which has a grating or similar structure in a first layer overlaid with the two stitch target halves. This allows measurement of conventional overlay in the stitch region in addition to "stitched overlay". FIG. 3(a) is a schematic example of overlay (as represented by the vectors 305a-305f) measured on a full substrate field (substrate region) 300. A subsequent layer is exposed over the field 300, in two exposures, each exposure defining a sub-region (sub-field or half-field). This may be because the maximum scanning field area of the lithographic apparatus is smaller than the required substrate field area for a particular die (required die size). FIG. 3(b) shows the subsequent two stitched sub-regions 310a, 310b. The vectors 315a-315f represent the corresponding corrections (as actuated) for the measured overlay 305a-305f according to a known method.
In such a known method, overlay corrections for the top sub-region 310a and bottom sub-region 310b are determined using only the measurements (i.e., vectors 305a-305c and 305d-305f respectively) relating to the corresponding sub-region of the full field 300. In this simplified example, the corrections for each sub-region are represented by only three vectors (in reality there would be more measurements, e.g., as defined by the number of overlay targets). Corrections for overlay are typically determined per field, and may be based on a minimization of an average or maximum overlay error across the field. Also, once corrections are determined, the degree to which they can be actuated is limited in magnitude and frequency, as the reticle and wafer stages of the lithographic apparatus have only limited degrees of freedom; corrections can only be applied using fluid motions, as the stages cannot suddenly change speed or direction. Therefore the overlay corrections cannot be determined nor actuated ideally, and consequently there will always be an overlay residual over the region. In this example the correction vector 315d closest to the stitching boundary in the bottom sub-region 310b does not completely cancel out its corresponding overlay vector 305d. The result is an overlay residual at the stitching boundary area 320. However, the stitching boundary area may be the substrate field area for which overlay is most critical. To address this, it is proposed to impose an additional boundary condition when determining the corrections (e.g., when generating the sub-recipe correction parameters). The boundary condition increases the degree of matching between the overlay corrections on either side of the boundary within the stitching boundary area and/or ensures better quality overlay corrections (i.e., a bias towards smaller or zero overlay residuals) within this boundary area.
In an embodiment, the boundary condition can be strict, i.e., imposing that only a very minimal overlay error is allowed (e.g., by imposing a zero or very low error threshold within this boundary region). This approach ensures the best matching of the two sub-regions, but risks larger overlay errors elsewhere in the region. Therefore, whether this approach is justified should be evaluated on a per-case basis. Alternatively, or in combination, the boundary condition may comprise an imposed weighting or weight factor, weighted in favor of corrections which minimize overlay within the stitching boundary area with respect to the remaining area of each sub-region. Such a weighting may comprise, for example, a constraint when fitting the correction polynomial which favors results that minimize overlay within the stitching boundary area with respect to the rest of the field. This approach (or a threshold approach with a less stringent threshold) potentially allows greater flexibility in determining corrections, which could be beneficial to the overlay in the rest of the region. In an embodiment, the boundary condition may be settable between a weighting approach and a boundary area threshold approach (or a combination thereof) and/or enable the weighting/threshold to be varied. FIG. 3(c) shows the result of a strict boundary condition which imposes that there be no residual within the stitching boundary area 320 (i.e., the overlay vectors 305c and 305d are perfectly cancelled by respective correction vectors 325c and 325d). The trade-off is that the correction vectors 325a, 325b, 325e, 325f may be less effective at correcting the overlay vectors 305a, 305b, 305e, 305f compared to the correction vectors 315a, 315b, 315e, 315f. In particular, it can be seen that correction vector 325f is a notably poorer correction than the corresponding correction vector 315f.
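A minimal sketch of the weighting approach, assuming a simple per-sub-region linear correction model and an arbitrary weight factor; it shows how up-weighting targets inside the stitching boundary area pulls the fitted correction towards a smaller residual at the stitch:

```python
import numpy as np

def fit_with_boundary_weight(y, ovl, boundary_mask, w_boundary=100.0):
    """Weighted least-squares fit of a linear correction c(y) = a + b*y,
    with targets inside the stitching boundary area weighted w_boundary
    times more heavily than the rest of the sub-region."""
    w = np.where(boundary_mask, w_boundary, 1.0)
    A = np.column_stack([np.ones_like(y), y]) * np.sqrt(w)[:, None]
    rhs = ovl * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coef

y = np.array([-10.0, -5.0, -1.0])             # target positions, one sub-region
ovl = np.array([0.8, 0.5, 0.4])               # measured overlay (arbitrary units)
near_stitch = np.array([False, False, True])  # y = -1 lies in the boundary area

plain = fit_with_boundary_weight(y, ovl, near_stitch, w_boundary=1.0)
weighted = fit_with_boundary_weight(y, ovl, near_stitch, w_boundary=100.0)

def residual_at_stitch(coef):
    """Unmodelled overlay left at the target nearest the stitch."""
    return abs(ovl[2] - (coef[0] + coef[1] * y[2]))
```

With the weighting applied, the residual at the stitch shrinks, at the cost of slightly worse residuals at the targets away from the boundary, which is exactly the trade-off described for FIG. 3(c).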
FIG. 4 illustrates a similar example of the methodology described herein. FIG. 4(a) shows an example present correction strategy, where the correction determined for the top sub-region 400(a) comprises a magnification in the x-direction greater than 1 and the correction determined for the bottom sub-region 400(b) comprises a magnification in the x-direction smaller than 1. The result is an overlay discontinuity within the stitching boundary area 410. FIG. 4(b) shows the result of imposing a boundary condition as described. The determined correction recipes will tend to avoid recipes with larger overlay errors within the stitching boundary area 410, thereby mitigating the discontinuity within the stitching boundary area 410. Such a correction, for example, may comprise a trapezoid-shaped (x-direction) magnification correction over the two sub-fields 400(a)′, 400(b)′, as illustrated. The trade-off may be increased overlay residuals across the individual fields, away from the stitching boundary area 410. FIG. 5 is a flowchart describing a method according to an embodiment. At step 500, overlay measurements are made on a substrate, said measurements including those relating to a region on the substrate onto which a die will be exposed in two (or more) separate adjacent exposures. For example, the required die area may be larger than a maximum scanning field area of the lithographic apparatus. At optional step 510, a boundary condition strategy may be selected. The boundary condition may be based on (for example) a threshold based strategy (setting a maximum residual value within the boundary area), a zero residual strategy (allowing no residual within the boundary area) and/or a weighting based strategy (applying a constraint favoring results which tend toward minimal residual in the boundary area). This step may also comprise setting the degree of weighting and/or any threshold value (as appropriate).
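The trapezoid-shaped magnification correction can be illustrated as a piecewise-linear profile; the stitch position, boundary half-width and magnification values below are assumed for illustration only:

```python
def trapezoid_x_mag(y, y_stitch=0.0, half_width=2.0,
                    mag_top=1.00002, mag_bottom=0.99998):
    """Piecewise-linear ('trapezoid') x-magnification versus scan position y:
    constant within each sub-region, ramping linearly through the stitching
    boundary area so the two sub-field corrections meet continuously."""
    lo, hi = y_stitch - half_width, y_stitch + half_width
    if y >= hi:
        return mag_top
    if y <= lo:
        return mag_bottom
    t = (y - lo) / (hi - lo)  # 0 at the lower edge, 1 at the upper edge
    return mag_bottom + t * (mag_top - mag_bottom)
```

Away from the stitch each sub-field keeps its own magnification, while at the stitch itself the two ramps meet, removing the discontinuity of FIG. 4(a).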
In an alternative embodiment, the strategy may be predetermined and/or fixed, and this step not performed. At step 520, separate corrections are determined for the exposure of each sub-region, taking into account the boundary condition imposed. Optional step 530 may model and evaluate the result of the corrections determined at step 520. In an embodiment, this step may determine whether the die is yielding. If it is determined at this step that the die may be non-yielding, i.e., overlay is out of specification either according to the (more stringent) specification within the stitching boundary area or the specification elsewhere on the die, then the method may return to step 510 and an alternative boundary condition strategy may be selected. Finally, at step 540, the two adjacent sub-regions are exposed in separate exposures based on their respective corrections. The control routine described above is described in terms of optimizing overlay corrections for the case when two or more sub-regions, each of which requires an individual exposure, are stitched together to cover a larger exposure field area (or full field area, maximum scanning field area) in a layer on the substrate. An extension of the basic concept will now be described. This comprises an optimization routine applicable for exposing a subsequent layer (e.g., covering a full exposure field area) over the stitched exposure (e.g., comprising a first and second sub-region) in the previous layer. This may occur, for example, when different apparatuses, each having different maximum scanning field areas, are used to expose the two layers.
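The flow of steps 500-540 can be sketched as a simple strategy-selection loop; the strategy functions and the yield model below are placeholders, not the actual correction algorithms:

```python
def choose_strategy(measurements, strategies, evaluate):
    """Sketch of the FIG. 5 flow: select a boundary condition strategy
    (step 510), determine corrections (step 520), evaluate whether the die
    is predicted to yield (step 530), and fall back to the next strategy
    if not; on success, exposure proceeds (step 540)."""
    for strategy in strategies:
        corrections = strategy(measurements)       # step 520
        if evaluate(corrections):                  # step 530
            return strategy.__name__, corrections  # proceed to step 540
    raise RuntimeError("no boundary condition strategy yields")

def zero_residual(m):  # strict: no residual allowed in the boundary area
    return {"boundary_residual": 0.0, "field_residual": m * 1.5}

def weighted(m):       # weighting: small boundary residual, better elsewhere
    return {"boundary_residual": 0.1 * m, "field_residual": m}

def in_spec(c):        # toy yield model with assumed specification limits
    return c["boundary_residual"] <= 0.2 and c["field_residual"] <= 1.2

name, corr = choose_strategy(1.0, [zero_residual, weighted], in_spec)
```

In this toy run the strict strategy is rejected because it degrades overlay elsewhere on the die beyond specification, so the method falls back to the weighting strategy.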
As such, disclosed is a method for determining a correction for control of a manufacturing process for providing structures to a region (defined by a maximum scanning field area), the region comprising a plurality of sub-regions; the method comprising obtaining measurement data relating to a process parameter of the manufacturing process for the region; and determining a correction for the manufacturing apparatus providing structures to said region on the substrate based on a separate consideration for each of said plurality of sub-regions. The correction may comprise a second layer correction for control of the manufacturing process for forming second layer structures in a second layer on said region, wherein each of the sub-regions corresponds with a separately exposed sub-region in a first layer. In such an embodiment, the single exposure field area (e.g., required die size) in the subsequent layer may be equal to the combined field area of the (e.g., stitched) sub-regions in the previous layer. In this case, overlay corrections may be determined which take into account the division of the full field area into sub-regions in the previous layer. It will be apparent that the sub-regions of a stitched die may not be perfectly positioned and/or oriented with respect to each other and/or other layers. For example, one or both sub-regions may have a tilt error or a magnification error with respect to a desired orientation or magnification; this tilt or magnification may be the same or different for the two sub-regions. Therefore, correction capabilities are proposed which are configured to better match full field exposure layers to partial field exposure layers. Such a method may comprise obtaining metrology data describing the relative positioning and orientation of the sub-fields (e.g., alignment data, overlay data) relating to a first layer.
The metrology data may be obtained by determining the sub-fields' relative orientation and magnification via classical metrology with respect to an underlying layer, or directly via stitching methods (e.g., measurement of stitching targets). A lithographic apparatus interface should be defined or provided which allows definition of a substrate grid in a second (subsequent) layer to be matched to the previous layer at the sub-field level. An algorithm may then generate a full-field control recipe for the second layer (based on the metrology data) such that the implemented full-field control optimally matches the substrate sub-grids (grids for each sub-field) of the first layer. For example, the correction may be determined based on a direct optimization of the sub-field control profile, e.g., a separate minimization of an average or maximum (e.g., overlay) error/residual across each sub-region of the full scanning field. As an alternative to direct optimization, a correction method for a second layer may be based on a definition of a boundary area. The boundary area may or may not coincide with the boundary area defined in the previous (stitched) layer. In one embodiment, the boundary area may define an evaluation zone (or sub-field, sub-region) which, as before, may be around and/or centered on the interface between two sub-regions in the previous layer. This method may comprise explicitly optimizing the corrections within the stitching boundary area. As such, the overlay corrections in the second layer may be determined in accordance with the three areas (the two sub-regions and the boundary area). As with the previous embodiments, a boundary condition (e.g., threshold condition and/or weighting) may be imposed when determining the correction for the second layer.
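One way to read the full-field matching described above is as a single joint fit over both sub-grids; the linear model and the sub-field placement errors below are assumed for illustration:

```python
import numpy as np

def match_full_field(y, placement_error):
    """Fit one full-field linear correction c(y) = a + b*y to the combined
    placement errors of both sub-fields (a single joint least-squares solve),
    standing in for matching the second layer grid to the two sub-grids."""
    A = np.column_stack([np.ones_like(y), y])
    coef, *_ = np.linalg.lstsq(A, placement_error, rcond=None)
    return coef

# Two sub-fields with equal and opposite translations (assumed values):
y_top = np.array([1.0, 5.0, 9.0]);    err_top = np.full(3, 2.0)
y_bot = np.array([-9.0, -5.0, -1.0]); err_bot = np.full(3, -2.0)
coef = match_full_field(np.concatenate([y_bot, y_top]),
                        np.concatenate([err_bot, err_top]))
```

Because the full field is corrected with a single fluid profile, the two offset sub-grids are compromised into one tilt term rather than matched exactly, which is why a separate consideration per sub-region (or a boundary area) can be preferable.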
In another embodiment, it may be proposed not to take into account the boundary area when determining the corrections for the second layer, such that there is a more fluid control across the two sub-fields; the boundary area being potentially subject to different orientations and/or magnifications resultant from the respective sub-field exposures. This is essentially similar to the previous example, but with a zero-weighting boundary condition given to the boundary area. While the above description is described in terms of measuring overlay in an earlier layer for determining corrections for subsequent layers on the same substrate, the concept is equally applicable to measuring overlay on previous substrates and using these measurements to determine corrections (for the same layer or other layers) on subsequent substrates (of the same lot or for subsequent lots). Corrections could also be determined based on a combination of measurements from preceding layers of the same substrate and measurements from previous substrates. More generally, in addition to overlay, the concepts described herein can also be used for measurement and monitoring of other relevant processing parameters such as edge placement error. Another process parameter which may be measured and monitored is "stitched overlay". This is not overlay in the conventional sense, as it relates to matching within a single layer. Instead, this stitched overlay is a relative positional metric describing the relative positioning of the two sub-regions with respect to each other. The abovementioned "stitched targets" and/or "stitched overlay targets" may be provided and measured to do this. It may also be appreciated that the two sub-regions may be overlapping (at least partially) in the stitching boundary area to form a complete "stitched target" and/or "stitched overlay target".
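The "stitched overlay" metric can be sketched as the deviation of the two target halves from their designed relative offset; the positions and units below are illustrative assumptions:

```python
def stitched_overlay(half_a_pos, half_b_pos, designed_offset):
    """Relative positioning ('stitched overlay') of two sub-regions, taken as
    the deviation of the measured offset between the two stitched-target
    halves from their designed relative offset. Positions are (x, y) tuples
    in a common coordinate system (here, micrometers)."""
    dx = half_b_pos[0] - half_a_pos[0] - designed_offset[0]
    dy = half_b_pos[1] - half_a_pos[1] - designed_offset[1]
    return dx, dy

# Halves designed to sit 1.0 um apart in y; a 0.003 um x-shift is measured:
dx, dy = stitched_overlay((0.0, 0.0), (0.003, 1.0), (0.0, 1.0))
```

A nonzero result indicates mismatch between the two exposures within the same layer, distinct from conventional layer-to-layer overlay.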
For example, a box-in-box type arrangement may comprise a first image of a box in the first sub-region being formed inside a second image of a box in the second sub-region, the two sub-regions overlapping in the boundary area where the stitched target is formed. Alternatively, the two images may each comprise grating structures which are designed to be interlaced when imaged at the boundary area. Misalignment of such an interlaced target would manifest as asymmetry which could then be measured similarly to overlay targets. Such a stitched target may be formed over another grating in another layer (or another grating formed over it) to provide for a stitched overlay target. Further embodiments of the invention are disclosed in the list of numbered embodiments below:1. A method for determining a correction for control of at least one manufacturing apparatus used in a manufacturing process for providing structures to a region on a substrate, said region comprising a plurality of sub-regions; the method comprising:obtaining measurement data relating to a process parameter of the manufacturing process for the region; anddetermining a correction for the manufacturing apparatus based on said measurement data, wherein said correction is configured to maintain the process parameter within a specified range across a boundary between two of said sub-regions and/or to better correct the process parameter across the boundary between two of said sub-regions with respect to within the remainder of the region.2. A method according to embodiment 1, wherein said determination comprises applying a boundary condition which favors a correction that better minimizes process parameter errors across the boundary with respect to process parameter errors inside the sub-regions.3. 
A method according to embodiment 2, wherein said step of applying a boundary condition comprises imposing an error threshold for said process parameter across said boundary, such that said determined correction ensures process parameter errors across the boundary do not exceed the error threshold.4. A method according to embodiment 3, wherein said error threshold is zero, such that said determined correction ensures that said process parameter error across the boundary has a minimum achievable value.5. A method according to embodiment 4, wherein said method comprises defining a value for the error threshold before the step of determining a correction.6. A method according to any of embodiments 2 to 5, wherein said step of applying a boundary condition comprises imposing a weighting constraint, the weighting constraint applying a weighting in favor of a correction which minimizes process parameter errors across the boundary between two sub-regions with respect to within the remainder of the region.7. A method according to embodiment 6, wherein said method comprises defining the weighting constraint before the step of determining a correction.8. A method according to any of embodiments 2 to 7, wherein said method comprises: assessing whether the manufacturing process will be yielding, andamending said boundary condition if the manufacturing process is assessed to be non-yielding.9. A method according to any of embodiments 2 to 8, wherein said determining a correction comprises determining coefficients for a polynomial which minimizes the error over said region while respecting the boundary condition.10. A method according to any preceding embodiment, wherein said manufacturing process provides said product structures on said substrate in a plurality of exposures, each exposure defining a respective one of said sub-regions, said sub-regions being exposed adjacently to define said region.11. 
A method according to embodiment 10, wherein the region has an area larger than a maximum scanning field area of the manufacturing apparatus.12. A method according to embodiment 11, wherein each sub-region has an area defined by the maximum scanning field area of the manufacturing apparatus.13. A method according to any preceding embodiment, wherein the process parameter comprises overlay or edge placement error.14. A method according to any preceding embodiment, wherein the measurement data relates to a previously applied layer of the same substrate.15. A method according to any preceding embodiment, wherein the measurement data relates to a corresponding layer of an earlier processed substrate.16. A method according to any preceding embodiment, wherein said plurality of sub-regions comprise two sub-regions of equal area.17. A method according to any preceding embodiment, comprising controlling the manufacturing process using said correction, wherein the correction is applied by the manufacturing apparatus when applying a layer of product structures onto the region of the substrate.18. A method according to embodiment 17, wherein said layer of product structures is applied on said substrate in a plurality of exposures, each exposure defining a respective one of said sub-regions.19. A method according to any preceding embodiment, wherein across the boundary between the two sub-regions is defined as within a boundary area, said boundary area comprising an area within said two sub-regions and extending either side of the boundary.20. 
A method according to any preceding embodiment, wherein said measurement data relates to measurement of a plurality of targets within the region or a corresponding region; said plurality of targets comprising overlay targets and, in the vicinity of the boundary, at least one stitched target, wherein said stitched target comprises complementary patterns in each of said two sub-regions from which a relative positioning metric of the two sub-regions can be measured.21. A method according to embodiment 20, wherein the stitched target is formed with a further pattern in a layer beneath or overlaying said stitched target to enable determination of overlay in addition to the relative positioning metric.22. A method according to any preceding embodiment, wherein said structures are formed in a first layer, and said method further comprises:determining second layer corrections for control of the manufacturing process based on a separate consideration for each of said plurality of sub-regions, said second layer corrections for providing second layer structures to said region on the substrate in a second layer.23. The method according to embodiment 22, further comprising:forming said second layer structures in a single exposure using said second layer corrections.24. A method according to embodiment 22 or 23, wherein separate control grids are defined for each of said plurality of sub-regions, the second layer corrections being defined separately for the separate control grids.25. A method according to embodiment 22, 23 or 24, comprising defining a second layer boundary area; andapplying a boundary condition for corrections corresponding to said second layer boundary area.26. A method according to embodiment 25, wherein the boundary condition optimizes the second layer corrections within the boundary area in preference to second layer corrections outside of the boundary area.27.
A method according to embodiment 25, wherein the boundary condition comprises not taking into account the boundary area when determining said second layer corrections.28. A method for determining a correction for control of at least one manufacturing apparatus used in a manufacturing process for providing structures to a region on a substrate, said region being defined by a maximum scanning field area of the manufacturing apparatus, said region comprising a plurality of sub-regions; the method comprising:obtaining measurement data relating to a process parameter of the manufacturing process for the region; anddetermining a correction for the manufacturing apparatus for the providing of structures to said region on the substrate based on a separate consideration for each of said plurality of sub-regions.29. The method according to embodiment 28, wherein said correction comprises a second layer correction for control of the manufacturing process and said structures comprise second layer structures formed on said region in a second layer.30. The method according to embodiment 29, wherein each of said sub-regions corresponds with a separately exposed sub-region in a first layer.31. The method according to embodiment 29 or 30, further comprising:forming said structures in a single exposure using said correction.32. A method according to any of embodiments 28 to 31, wherein separate control grids are defined for each of said plurality of sub-regions, the correction being defined separately for the separate control grids.33. A method according to any of embodiments 28 to 32, comprising defining a boundary area around the boundary of two adjacent sub-regions of the plurality of sub-regions; and applying a boundary condition for corrections corresponding to said boundary area.34. A method according to embodiment 33, wherein the boundary condition optimizes the correction within the boundary area in preference to the correction outside of the boundary area.35. 
A method according to embodiment 33, wherein the boundary condition comprises not taking into account the boundary area when determining said correction.36. A method according to any of embodiments 28 to 35, wherein said plurality of sub-regions numbers two.37. A control recipe comprising a correction as determined by the method of any preceding embodiment.38. A controller for a manufacturing apparatus configured to receive the control recipe of embodiment 37.39. A processing device for determining a correction for control of at least one manufacturing apparatus configured to provide product structures to a substrate in a manufacturing process, the processing device being configured to perform the method of any of embodiments 1 to 36.40. A manufacturing apparatus configured to provide product structures to a substrate in a manufacturing process, said manufacturing apparatus comprising the processing device according to embodiment 39.41. A manufacturing apparatus according to embodiment 40, wherein the manufacturing apparatus comprises a lithographic apparatus having:a substrate stage for holding a substrate;a reticle stage for holding a patterning device;a processor operable to control a manufacturing process using said correction.42. A computer program comprising program instructions operable to perform the method of any of embodiments 1 to 36 when run on a suitable apparatus.43. A non-transient computer program carrier comprising the computer program of embodiment 42. While the above description describes corrections for a lithographic apparatus/scanner, the determined corrections may also be used for any process and by any integrated circuit (IC) manufacturing apparatus in an IC manufacturing process, e.g., an etch apparatus, which has an effect on the position and/or a dimension of the structures formed within a layer. 
The terms “radiation” and “beam” used in relation to the lithographic apparatus encompass all types of electromagnetic radiation, including ultraviolet (UV) radiation (e.g., having a wavelength of or about 365, 355, 248, 193, 157 or 126 nm) and extreme ultra-violet (EUV) radiation (e.g., having a wavelength in the range of 5-20 nm), as well as particle beams, such as ion beams or electron beams. The term “lens”, where the context allows, may refer to any one or combination of various types of optical components, including refractive, reflective, magnetic, electromagnetic and electrostatic optical components. The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description by example, and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance. The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
11860550

DETAILED DESCRIPTION

The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed that are between the first and second features, such that the first and second features are not in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. The term “nominal” as used herein refers to a desired, or target, value of a characteristic or parameter for a component or a process operation, set during the design phase of a product or a process, together with a range of values above and/or below the desired value. The range of values is typically due to slight variations in manufacturing processes or tolerances.
The term “vertical,” as used herein, means nominally perpendicular to the surface of a substrate. The chip fabrication process can be divided into three “modules,” in which each module may include all or some of the following operations: patterning (e.g., photolithography and etch); implantation; metal and dielectric material deposition; wet or dry clean; and planarization (e.g., etch-back process or chemical mechanical planarization). The three modules can be categorized as front end of the line (FEOL), middle of the line (MOL)/middle end of the line (MEOL), and back end of the line (BEOL). In FEOL, field effect transistors (FETs) are formed. For example, FEOL includes the formation of source/drain terminals, a gate stack, and spacers on sides of the gate stack. The source/drain terminals can be doped substrate regions formed with an implantation process after the gate stack formation. The gate stack includes a metal gate electrode, which can include two or more metal layers. The gate dielectric can include a high dielectric constant (high-k) material (e.g., greater than 3.9, which is the dielectric constant of silicon oxide). Metals in the gate electrode set the work function of the gate, in which the work functions can be different between p-type FETs and n-type FETs. The gate dielectric provides electrical isolation between the metal gate electrode and a channel formed between the source and the drain terminals when the FET is in operation. In MOL, low level interconnects (contacts) are formed and can include two layers of contacts on top of each other. The MOL interconnects can have smaller critical dimensions (CDs; e.g., line width) and are spaced closer together compared to their BEOL counterparts. A purpose of the MOL contact layers is to electrically connect the FET terminals, i.e., the source/drain and metal gate electrode, to higher level interconnects in BEOL.
A first layer of contacts in MOL, known as “trench silicide (TS),” is formed over the source and drain terminals on either side of the gate stack. In the TS configuration, the silicide is formed in the trench and after the trench formation. The silicide lowers the resistance between the source and drain regions and the metal contacts. The gate stack and the first layer of contacts are considered to be on the same “level.” The second layer of contacts is formed over the gate electrode and TS. MOL contacts are embedded in a dielectric material, or a dielectric stack of materials, that ensures their electrical isolation. In BEOL, an interlayer dielectric (ILD) is deposited over the MOL contacts. The formation of high level interconnects in BEOL involves patterning a hard mask (HM) layer and subsequently etching through the HM layer to form holes and trenches in the ILD. The ILD can be a low-k material. Low-k materials can have a dielectric constant below 3.9, which is the dielectric constant of silicon oxide (SiO2). Low-k materials in BEOL can reduce unwanted parasitic capacitances and minimize resistance-capacitance (RC) delays. BEOL interconnects include two types of conductive lines: the vertical interconnect access lines (vias) and the lateral lines (lines). The vias run through the ILD layer in the vertical direction and create electrical connections to layers above or below the ILD layer. Lines are laid in the lateral direction within the ILD layer to connect a variety of components within the same ILD layer. BEOL includes multiple layers (e.g., up to 9 or more) of vias and lines with increasing CDs (e.g., line width) and line pitch. Each layer is required to align to the previous layer to ensure proper via and line connectivity. Line connectivity can be established through an alignment between the pattern on a photomask (reticle) and existing features on a wafer surface.
This quality measure is known as “overlay (OVL) accuracy.” Alignment is critical because the reticle pattern must be precisely transferred to the wafer from layer to layer. Since multiple photolithography steps are used during patterning, any OVL misalignment is additive and contributes to the total placement tolerances between the different features formed on the wafer surface. The placement tolerances for each “photo-layer” are known as the “OVL budget.” Each photo-layer can have a different OVL budget depending on the incoming OVL misalignment, and the size/density of the features to be transferred on the wafer's surface. Since OVL misalignments are additive, they can adversely affect the OVL budget of each photo-layer. The wafer and the reticle position data are measured with respect to a coordinate system defined for the exposure tool and are then used in a global or field-by-field manner to perform the alignment. Global alignment, also known as “coarse alignment,” can use several marks to quickly align a wafer relative to the reticle. Field-by-field alignment, also known as “fine alignment,” can be used to align the reticle to each exposure site. The fine alignment can compensate for non-uniformities observed in the local topography, deposition non-uniformities, or dishing during chemical mechanical planarization (CMP) operations. The use of a HM to form the interconnects in BEOL can have several limitations. For example, the use of a HM can limit the photolithography alignment window because the narrow patterned features present in the HM reduce the tolerance for misalignment errors. A reduction in alignment window increases the risk for overlay errors, which in turn translates to a higher probability of patterning defects on the wafer. Common patterning defects include metal bridges between vias and deformed vias or lines. 
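By way of illustration and not limitation, the additive nature of OVL misalignment across photo-layers can be sketched as a simple stack-up calculation; the per-layer error values and the budget below are hypothetical, and the worst-case versus root-sum-square treatment is a common engineering convention rather than something specified in this disclosure:

```python
import math

def ovl_worst_case(errors_nm):
    """Worst-case stack-up: per-layer OVL errors add directly."""
    return sum(abs(e) for e in errors_nm)

def ovl_rss(errors_nm):
    """Statistical stack-up: independent errors add in quadrature."""
    return math.sqrt(sum(e * e for e in errors_nm))

# Hypothetical OVL misalignments (nm) for three successive photo-layers.
layer_errors = [3.0, 2.5, 4.0]

worst = ovl_worst_case(layer_errors)  # direct sum: 9.5 nm
rss = ovl_rss(layer_errors)           # quadrature sum: ~5.6 nm

budget_nm = 8.0  # hypothetical OVL budget for the next photo-layer
print(f"worst-case {worst} nm, RSS {rss:.1f} nm, "
      f"within budget: {rss <= budget_nm}")
```

The gap between the two estimates shows why incoming misalignment matters: a photo-layer that fits the budget statistically can still fail the worst-case stack-up.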
Self-aligned interconnects can relax the photolithography alignment requirements and increase the alignment, or OVL, window. This is helpful for high-density areas of the chip where the line pitch is small. Various embodiments in accordance with this disclosure provide BEOL interconnect fabrication methods that employ a patterning photolithography/etch process with self-aligned interconnects resulting in a wider pattern for OVL. This effectively increases the OVL margin and reduces the number of patterning defects. Furthermore, the process is simplified because an HM layer is no longer required. According to this disclosure, the interconnect fabrication method supports the use of a multi-metal gap fill process. The metal in the multi-metal gap fill process can be a conductive material such as, for example, graphene. In some embodiments, the multi-metal gap fill interconnects are formed before the ILD layer. In some embodiments, the ILD layer is deposited such that it has naturally occurring voids to decrease the layer's dielectric constant. FIG.1is a cross sectional view of a structure100in accordance with some embodiments. In some embodiments, structure100is a portion of a substrate (not shown inFIG.1) which includes at least one BEOL interconnect network layer, in which vias and lines are formed. In some embodiments, the substrate can be a bare semiconductor wafer or a partially fabricated semiconductor wafer which includes previously formed layers. Structure100includes pattern structures105, in which each pattern structure105includes a mandrel110(a center portion of pattern structure105), a first spacer120, and a second spacer122. First and second spacers120and122are disposed on opposing side surfaces of mandrel110. In some embodiments, each mandrel110can be made of amorphous silicon, silicon nitride, or amorphous carbon. By way of example and not limitation, the thickness of mandrel110can range from 10 nm to about 100 nm.
In some embodiments, spacers120and122can be made of titanium oxide, titanium nitride, silicon oxide, or silicon nitride. The spacer thickness can range from 5 to 50 nm depending on the design. In some embodiments, mandrels110and spacers120,122act as an etch mask, in which a width between two pattern structures105is shown as distance125. Mandrel110and spacers120,122are disposed over an ILD layer130. By way of example and not limitation, ILD layer130has a thickness between 10 and 100 nm. In some embodiments, ILD layer130can be a stack of dielectrics such as, for example, a low-k dielectric and another dielectric: (i) a low-k dielectric (e.g., carbon doped silicon oxide) and a silicon carbide with nitrogen doping; (ii) a low-k dielectric and a silicon carbide with oxygen doping; (iii) a low-k dielectric with silicon nitride; or (iv) a low-k dielectric with silicon oxide. ILD layer130is disposed over an etch stop layer140. In some embodiments, etch stop layer140has a thickness between 1 nm and 100 nm. By way of example and not limitation, etch stop layer140is made of silicon carbide, silicon nitride, or silicon oxide. Structure100also includes an underlying metal line150. In some embodiments, metal line150can be part of an earlier metallization layer. Further, metal line150is over an ILD layer160and etch stop layer170. A photolithography operation and a series of etch operations form openings in dielectric layer130and etch stop layer140. For example, inFIG.2, a coat of photoresist200is photo-exposed and patterned over structure100to create via opening210that has a width220. Photoresist200can be used to expose areas of structure100where vias will be formed and to protect other areas of structure100where vias should not be formed. As shown inFIG.2, the via and line opening width can be determined by distance125. Hence, width220of opening210may be wider than distance125. In some embodiments, opening210can be as wide as width230.
This scenario assumes that the OVL error in the photolithography process is zero (no alignment error), and therefore width220shows no variation due to the photolithography process. In some embodiments, the misalignment errors are nonzero and therefore width220of opening210can be wider than distance125, but width220cannot be wider than width230due to the variations in the photolithography process. The OVL error therefore limits how close width220can be to width230, which is the maximum width for opening210without any misalignment error. In some embodiments, the OVL window is considered to be at least wider than distance125. An etch process removes exposed areas of ILD layer130and etch stop layer140through photoresist via opening210to form an opening that stops on underlying metal line150. In some embodiments, the etch process has high selectivity for ILD layer130and etch stop layer140. In some embodiments, the etch process automatically stops after a predetermined amount of time. An etch process which is terminated after a predetermined amount of time is referred to as a “timed etch.” An “end-pointed” etch process is a process that automatically stops when the layer directly underneath the etched layer is detected; for example, when the underlying metal line150is detected. End-point detection is possible because etch stop layer140and the underlying metal line150are made of different materials. Consequently, they can have different etch rates for a given etching chemistry. Detection of metal line150can be done through, for example, a change in the etch rate, which can be sensed by in-situ metrology equipment such as, for example, an optical emission microscope. Since the optical emission microscope can be integrated into the etch chamber, the etch process can be monitored in real-time. In some embodiments, the etch process may be timed for a first part of the process and end-pointed for a second part of the process.
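By way of illustration and not limitation, the width constraint described above (the resist opening can exceed the spacer-defined gap but must stay below the maximum width once OVL error is accounted for) can be sketched as follows; the dimensions are hypothetical, and the symmetric one-margin-per-edge treatment of the OVL error is an assumption of this sketch:

```python
def usable_opening_width(width_max_nm, ovl_error_nm):
    """Maximum resist opening width once a symmetric OVL margin is
    reserved on each edge. width_max_nm plays the role of the maximum
    opening (width230 in the text); the factor of 2 (one margin per
    edge) is an assumption of this sketch."""
    return width_max_nm - 2.0 * ovl_error_nm

# Hypothetical dimensions (nm): the spacer-defined gap corresponds to
# distance125 and the absolute maximum opening to width230.
gap_nm = 20.0
width_max_nm = 60.0

for ovl_nm in (0.0, 3.0, 6.0):
    w = usable_opening_width(width_max_nm, ovl_nm)
    # Self-alignment keeps the process robust as long as the opening
    # can still be drawn wider than the spacer-defined gap.
    print(f"OVL error {ovl_nm} nm -> opening up to {w} nm "
          f"(spacer-defined gap {gap_nm} nm)")
```

With zero OVL error the opening may be drawn at the full maximum width; each nanometer of misalignment shrinks the usable width on both edges, which is why self-aligned patterning, where the spacers rather than the resist define the final feature, widens the practical OVL window.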
Since the etch process is required to etch different materials (e.g., ILD layer130and etch stop layer140), different etch chemistry may be required. An exemplary etch chemistry can include a combination of hydrobromic acid (HBr), helium (He), oxygen (O2) and chlorine (Cl2). In addition to the etch chemistry, other etch process parameters can be adjusted such as, for example, flow rate, temperature, and pressure. These parameters can be used to control the etch rate, etch profile, uniformity, etc. After formation of via opening210, photoresist200is removed (i.e., stripped) and line openings250are exposed. In some embodiments, the height of via opening210is larger than the height of line opening250. In some embodiments, via and line openings210and250when filled with a conductive material form conductive structures in an interconnect layer. FIG.3shows the structure ofFIG.2after via opening210and line openings250are filled with a conductive material300. In some embodiments, conductive material300is copper (Cu), cobalt (Co), aluminum (Al), graphene, or any other suitable conductive material. Conductive material300is polished by chemical mechanical polishing (CMP) to remove extra material (overburden) from the top of mandrels110and spacers120,122. Referring toFIG.4, once conductive material300is polished, a top surface of conductive material300is capped with an etch stop capping layer400. In some embodiments, etch stop capping layer400can be selectively grown on conductive material300. By way of example and not limitation, etch stop capping layer400can be a metal oxide such as, for example, an Al-based, a Co-based, a tungsten (W)-based, a nickel (Ni)-based, or a zirconium (Zr)-based oxide. Those skilled in the art will appreciate that these are merely examples and other appropriate oxides can be used. 
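By way of illustration and not limitation, the hybrid timed/end-pointed etch control described above can be sketched as a monitoring loop; the emission trace, thresholds, and timings below are hypothetical and stand in for the in-situ optical emission signal mentioned in the text:

```python
def run_etch(signal_trace, timed_floor_s, max_time_s, endpoint_drop=0.5):
    """Hybrid etch control: run timed for the first part, then stop when
    the in-situ emission signal drops below a fraction of its initial
    value (indicating the underlying layer has been reached).
    signal_trace: emission intensity sampled once per second
    (hypothetical units)."""
    baseline = signal_trace[0]
    for t, intensity in enumerate(signal_trace):
        if t >= max_time_s:
            return t, "timed-out"
        # End-point detection is only armed after the timed first phase.
        if t >= timed_floor_s and intensity < endpoint_drop * baseline:
            return t, "end-pointed"
    return len(signal_trace), "trace-exhausted"

# Hypothetical trace: steady etch emission, then a drop at t = 12 s when
# the etched layer clears and the underlying metal line is exposed.
trace = [100.0] * 12 + [30.0] * 8
t_stop, reason = run_etch(trace, timed_floor_s=5, max_time_s=30)
print(t_stop, reason)  # end-points at t = 12 s for this trace
```

The `max_time_s` bound acts as the timed backstop, so the process terminates even if the end-point signature never appears.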
By way of example and not limitation, etch stop capping layer400can be deposited with chemical vapor deposition (CVD), physical vapor deposition (PVD), or a spin-on process followed by a metal oxide patterning process. The role of etch stop capping layer400is to protect conductive material300from subsequent etching processes. In some embodiments, additional vias and lines can be formed by removing mandrels110to form a plurality of openings. The spacers120and122associated with the mandrels110to be removed are not themselves removed. Removal of mandrels110may not be global (i.e., across the whole structure100). For example, a patterned photoresist may be used to protect areas of structure100where removal of mandrels110is not desired. In some embodiments, a dry etch chemistry can be used to remove mandrels110. By way of example and not limitation, a dry etch chemistry can be a combination of HBr, He, O2, and Cl2. After the mandrel removal process, a resist strip operation removes the photoresist.FIG.5shows the structure ofFIG.4after photoresist500is applied and patterned, and at least a portion of mandrels110are removed to form an opening510between opposing first spacer120and second spacer122. In order to start the interconnect formation, a photoresist600is applied on structure100, and then patterned as shown inFIG.6. At the end of the photoresist patterning process, via openings610are formed in photoresist600. Some openings, like opening510, are covered by photoresist600so that they are not subjected to the etching process. During this process, the OVL window remains wide. For example, via opening610has a width630.
In some embodiments, in which the OVL or misalignment error is zero, via opening610may be as wide as width620for at least two reasons: (i) because the via/line opening width is defined by distance640between opposing spacers120and122; and (ii) because metal oxide layer400protects conductive material300from the etching chemistry, and therefore, if width630is wider and includes an area where conductive material300is exposed, the conductive material is protected from etching. In some embodiments, the misalignment errors are nonzero and therefore width630of via opening610can be wider than distance640, but cannot be wider than width620due to variations in the photolithography process. Hence, the OVL error limits how close width630can be to width620, which is a maximum width without any misalignment errors. This is also true for via opening610. Referring toFIG.6, exposed areas of ILD layer130and etch stop layer140are etched through via openings610while covered areas of structure100are protected from the etch process. A selective process removes exposed areas of ILD layer130and etch stop layer140to form via openings in ILD layer130and etch stop layer140. In some embodiments, the etch process may be timed, end-pointed, or a combination of the two. For example, the etch process can be timed for a first part of the process and end-pointed for a second part of the process. By way of example and not limitation, the etch chemistry for the removal of ILD layer130may be different than the etch chemistry for etch stop layer140. In some embodiments, the etch processes are highly selective for ILD layer130and etch stop layer140. An exemplary dry etch chemistry is a combination of HBr, He, O2, and Cl2. Once the etch process is complete, patterned photoresist600is stripped and all openings, such as opening510, are exposed. In some embodiments, opening510is a line opening, and the height of via opening610is larger than the height of line opening510.
In some embodiments, via and line openings610and510when filled with a conductive material form conductive structures in an interconnect layer. InFIG.7, a conductive material700fills the formed via openings (e.g., via opening610) and line openings (e.g., opening510). In some embodiments, conductive material700is different than conductive material300. In some embodiments, conductive material700can be Al, Co, Cu, graphene, or any suitable conductive material with appropriate resistivity. InFIG.8, excess conductive material700is polished with CMP down to the level of metal oxide layer400. In some embodiments, the excess conductive material700is etched with a metal etch process. In some embodiments, the excess conductive material700can be removed with a combination of CMP and dry etch. In the combination of CMP and dry etch case, conductive material700can be recessed below the metal oxide layer400level. Once conductive material700is polished, its top surface is capped with an etch stop capping layer800. In some embodiments, lines or vias made of different conductive materials300and700alternate, and they may have different selectively grown etch stop capping layers. In some embodiments, similar to etch stop capping layer400, etch stop capping layer800selectively grows on conductive material700. By way of example and not limitation, etch stop capping layer800can be a metal oxide such as an Al-based, a Co-based, a W-based, a nickel (Ni)-based, or a Zr-based oxide. Those skilled in the art will appreciate that these are merely examples and other appropriate oxides can be used. By way of example and not limitation, etch stop capping layer800can be deposited with CVD, PVD, or a spin-on process followed by a metal oxide patterning process. FIG.9is a magnified view of section810ofFIG.8. A selective etch removes the remaining first spacers120and second spacers122as well as mandrels110to form openings that will be filled with a dielectric liner layer.
In some embodiments, spacers120and122can be removed with a dry etch process or a wet etch process. By way of example and not limitation, a dry etch chemistry can be fluorine-based (CxHyFz) or chlorine-based (Cl2, BxCly). An exemplary wet etch chemistry can be a hydrochloric acid, phosphoric acid, nitric acid, and hydrogen peroxide chemistry.FIG.10showsFIG.9after the removal of remaining first spacers120and second spacers122as well as mandrel110, and the formation of a dielectric liner layer1000. Dielectric liner layer1000covers the etch stop capping layers800and400and partially fills the space between the formed interconnects (line spacing), allowing for a void1010to be formed between the interconnects. Void1010may be referred to as an “air-gap.” In some embodiments, void1010contains a gas. In some embodiments, void1010is nominally gas free. Voids can have a dielectric constant of nearly 1; therefore, increasing the size of the void can further lower the dielectric constant of dielectric liner layer1000. In some embodiments, the dielectric constant of dielectric liner layer1000with voids present can range from 2 to 6. In some embodiments, dielectric liner layer1000is deposited using a chemical vapor deposition (CVD) or an atomic layer deposition (ALD) process. The deposition process conditions and the line spacing between conductive materials700and300can modulate the size of the void. For example, process conditions such as pressure and gas ratios can affect the conformality of the deposited film and allow the void to be formed. In some embodiments, the line spacing can range from 5 to 20 nm. At this line spacing range, the void forms naturally and can occupy from 30 to 70% of the total volume between conductive materials700and300. By way of example and not limitation, dielectric liner layer1000can be SiO2, SiN, or SiC, and its thickness can range from 10 to 100 nm.
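By way of illustration and not limitation, the effect of the void fraction on the liner's effective dielectric constant can be estimated with a simple volume-weighted mixing rule; the linear (parallel-capacitor) mixing model below is an assumption of this sketch, not a model specified in the disclosure:

```python
def k_effective(k_liner, void_fraction):
    """Linear (parallel-capacitor) mixing rule, a simplification:
    k_eff = f_void * k_air + (1 - f_void) * k_liner, with k_air ~ 1."""
    K_AIR = 1.0  # a void has a dielectric constant of nearly 1
    return void_fraction * K_AIR + (1 - void_fraction) * k_liner

# SiO2 liner (k ~ 3.9) with the 30-70% void fractions cited in the text.
for f in (0.3, 0.5, 0.7):
    print(f"void fraction {f:.0%}: k_eff ~ {k_effective(3.9, f):.2f}")
```

Even under this crude model, the larger the air-gap fraction, the closer the effective dielectric constant approaches 1, which is the mechanism by which the voids lower the liner's overall dielectric constant.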
Due to voids1010present in dielectric liner layer1000, dielectric liner layer1000may not have the thermo-mechanical rigidity to sustain vibrations or mechanical/thermal stress from subsequent processing. In some embodiments, a dielectric liner layer cap is formed to protect dielectric liner layer1000from fracture and/or collapse. Referring toFIG.11, a dielectric layer cap1100is spin-coated or deposited with CVD over dielectric liner layer1000. In some embodiments, dielectric layer cap1100is a low-k layer that provides mechanical support to dielectric liner layer1000. In some embodiments, the thickness of dielectric liner layer1000can range between 10 and 100 nm. In some embodiments, the dielectric constant of dielectric liner layer1000can range from 2 to 6. Referring toFIG.12, a flow diagram of an exemplary patterning fabrication process1200of multi-metal fill, self-aligned interconnects with dielectric liner layer and dielectric layer cap in accordance with this disclosure is shown. Other fabrication operations may be performed between the various operations of method1200, and are omitted merely for clarity. The patterning fabrication process of multi-metal fill, self-aligned metal lines with dielectric liner layer and dielectric layer cap is not limited to the exemplary fabrication process1200. Exemplary process1200starts with operation1210, where a plurality of pattern structures are formed over a substrate such as, for example, as shown inFIG.1. By way of example and not limitation, the substrate may be a partially fabricated wafer which includes previously formed layers. Each pattern includes a mandrel110(center portion) and a corresponding pair of opposing spacers120and122. An exemplary substrate includes an ILD layer130, an etch stop layer140, and an underlying metal line150. Metal line150is over ILD layer160and etch stop layer170as shown inFIG.1. Other layers may be present below etch stop layer170but are not shown for clarity. 
In some embodiments, mandrel110is made of amorphous Si. In some embodiments, first and second spacers120and122, respectively, are made of titanium oxide or silicon nitride. In some embodiments, mandrel110and spacers120,122act as an etch mask so that formed vias and lines are self-aligned to second spacer122of a first mandrel110and first spacer120of a neighboring second mandrel110. By way of example and not limitation, ILD layer130has a thickness between 10 and 100 nm. In some embodiments, ILD layer130can be a stack of dielectrics such as a low-k dielectric and another dielectric: (i) a low-k dielectric (e.g., carbon doped silicon oxide) and a silicon carbide with nitrogen doping; (ii) a low-k dielectric and a silicon carbide with oxygen doping; (iii) a low-k dielectric with silicon nitride; or (iv) a low-k dielectric with silicon oxide. Exemplary process1200continues with operation1215, where first openings are formed in the substrate and are self-aligned to the pattern structures. Referring toFIG.1, the opening includes via openings in ILD layer130and etch stop layer140. Vias electrically connect two layers in the vertical direction, and lines make electrical connections within the same layer in a plane that is substantially parallel to the surface of the substrate. Operation1215involves several photolithography and etch operations. Referring toFIG.2, a photoresist layer200is coated, photo-exposed, and patterned over structure100to create opening210with width220. A subsequent etch process removes exposed areas of ILD layer130and etch stop layer140through photoresist opening210to form a via opening that stops on underlying metal line150. The etch process can have high selectivity for ILD layer130and etch stop layer140. An exemplary etch chemistry can include a combination of HBr, He, O2, and Cl2. In some embodiments, the etch process automatically stops after a predetermined amount of time.
In some embodiments, the etch process may be timed for a first part of the process and end-pointed for a second part of the process. In operation1220, a first conductive material is disposed in the openings to form an interconnect layer that includes conductive structures. The conductive material extends upwardly from the first opening to substantially fill a region between the second spacer of a first structure and the first spacer of a neighboring second structure. Referring toFIG.3, conductive material300fills via openings210(formed in previous operation1215) and line openings250. In some embodiments, the height of via opening210is larger than the height of line opening250. In some embodiments, via and line openings210and250when filled with a conductive material form conductive structures in an interconnect layer. In some embodiments, conductive material300is Cu, Co, Al, graphene, or any other suitable conductive material. Conductive material300is then polished to the level of mandrels110and spacers120/122with a CMP process. After the CMP process, a metal oxide layer is selectively grown on the conductive structures. The metal oxide layer is an etch stop capping layer such as, for example, layer400inFIG.4. By way of example and not limitation, etch stop capping layer400can be a metal oxide such as an Al-based, a Co-based, a W-based, a Ni-based, or a Zr-based oxide. Those skilled in the art will appreciate that these are merely examples and other appropriate oxides can be used. By way of example and not limitation, etch stop capping layer400can be deposited with CVD, PVD, or a spin-on process followed by a metal oxide patterning process. A role of etch stop capping layer400, among others, is to protect conductive material300from subsequent etching processes.
In operation1225, additional via openings and line openings are formed by removing a portion, or all, of mandrels110from the pattern structures in predetermined locations according to, for example, an interconnect layout of the integrated circuit being manufactured. Photolithography may be used to define the areas of structure100where mandrels110are to be removed. A selective etch process removes a portion of the mandrels110without removing their corresponding pairs of opposing first and second spacers120and122, thus forming an opening (e.g., opening510shown inFIG.5). In some embodiments, a dry etch chemistry can be used to remove mandrels110. By way of example and not limitation, a dry etch chemistry can be a combination of HBr, He, O2, and Cl2. After removal of mandrels110, photoresist500, which was used in the photolithography process, is stripped. In operation1230, openings are formed that are self-aligned to the opposing first and second spacers (120and122) of the pattern structure. This operation involves similar photolithography and etch processes as described in connection with operation1215. For example, referring toFIG.6, photoresist600is applied on structure100and then patterned. At the end of the photoresist patterning process, via openings610are formed in photoresist600. Some openings, like opening510between neighboring spacers120, are covered by photoresist600so that they are not exposed to the etching process. During this process, the OVL window remains wide. For example, opening610has a width630.
In some embodiments, in which the OVL or misalignment error is zero, opening610may be as wide as width620for at least two reasons: (i) because the via/line opening width is defined by distance640between opposing spacers120and122; and (ii) because metal oxide layer400protects conductive material300from the etching chemistry, and therefore, if width630is wider and includes an area where conductive material300is exposed, the conductive material is protected from etching. In some embodiments, the misalignment errors are nonzero and therefore width630of opening610can be wider than distance640, but cannot be wider than width620due to variations in the photolithography process. Hence, the OVL error limits how close width630can be to width620, which is a maximum width without any misalignment errors. This is also true for opening610. A selective process removes exposed areas of ILD layer130and etch stop layer140through photoresist openings610to form a via opening that stops on underlying metal line150. The photoresist is then stripped. By way of example and not limitation, the etch chemistry for the removal of ILD layer130may be different than the etch chemistry for etch stop layer140. In some embodiments, the etch processes are highly selective for ILD layer130and etch stop layer140. An exemplary etch chemistry can include a combination of HBr, He, O2, and Cl2. In some embodiments, the etch process is timed, end-pointed, or a combination of the two. For example, an etch process is timed in the beginning of the process and end-pointed towards the end of the process. In operation1235, another conductive material fills the second opening(s) to form an additional interconnect layer that includes conductive structures. InFIG.7, a conductive material700fills via openings610and line openings510. In some embodiments, opening510is a line opening, and the height of via opening610is larger than the height of line opening510.
In some embodiments, via and line openings610and510when filled with a conductive material form conductive structures in an interconnect layer. In some embodiments, conductive material700is different than conductive material300. In some embodiments, conductive material700is Al, Co, Cu, graphene, or any suitable conductive material with appropriate resistivity. InFIG.8, conductive material700is polished with CMP down to the level of metal oxide layer400. In some embodiments, excess conductive material700is etched with a metal etch process. In some embodiments, the excess conductive material700can be removed with a combination of CMP and dry etch. In the CMP and dry etch case, conductive material700can be recessed below the level of metal oxide layer400. Once conductive material700is polished or etched, its top surface is capped with an etch stop capping layer800. In some embodiments, lines or vias made of different conductive materials300and700alternate and may have different selectively grown etch stop capping layers. In some embodiments, like etch stop capping layer400, etch stop capping layer800is selectively grown on conductive material700. By way of example and not limitation, etch stop capping layer800can be a metal oxide such as, for example, an Al-based, a Co-based, a W-based, a nickel (Ni)-based, or a Zr-based oxide. Those skilled in the art will appreciate that these are merely examples and other appropriate oxides can be used. By way of example and not limitation, etch stop capping layer800can be deposited with CVD, PVD, or a spin-on process followed by a metal oxide patterning process. In operation1240, openings are formed by removing the opposing first and second spacers120,122and the remaining mandrels110; these openings are to be filled with a dielectric liner layer. In some embodiments, spacers120,122can be removed with a dry etch process or a wet etch process.
By way of example and not limitation, a dry etch chemistry can be fluorine-based (CxHyFz) or chlorine-based (Cl2, BxCly). An exemplary wet etch chemistry can be a hydrochloric acid, phosphoric acid, nitric acid, and hydrogen peroxide chemistry. In operation1245, the openings are filled with a dielectric liner layer, and voids are formed between the conductive structures. For example, referring toFIG.10, a dielectric liner layer1000covers the etch stop capping layers800and400and partially fills the space between the formed interconnects (conductive materials300and700), allowing for a void1010to be formed between the interconnects. Void1010may be referred to as an “air-gap.” In some embodiments, void1010contains a gas. In some embodiments, void1010is nominally gas free. Voids can have a dielectric constant of nearly 1; therefore, increasing the size of the void can further lower the dielectric constant of dielectric liner layer1000. In some embodiments, the dielectric constant of dielectric liner layer1000with voids present can range from 2 to 6. In some embodiments, dielectric liner layer1000is deposited using a CVD or an ALD process. The deposition process conditions and the size of the available space between the interconnects can modulate the size of the void. Process conditions such as pressure and gas ratios can affect the conformality of the deposited dielectric liner layer and allow the void to be formed. In some embodiments, the line spacing can range from 5 to 20 nm. At this line spacing range, the void forms naturally and can occupy anywhere from 30 to 70% of the total volume between conductive materials700and300. Dielectric liner layer1000can be SiO2, SiN, or SiC, and its thickness can range from 10 to 100 nm. Due to voids1010present in dielectric liner layer1000, this layer may not have the thermo-mechanical rigidity to sustain vibrations or mechanical/thermal stress from subsequent processing.
In some embodiments, a dielectric liner layer cap is used to protect dielectric liner layer1000from fracture and/or collapse. In operation1250, a dielectric cap layer is formed over the dielectric liner layer. Referring toFIG.11, a dielectric layer cap1100is spin-coated or deposited with CVD over dielectric liner layer1000. In some embodiments, dielectric layer cap1100is a low-k layer that provides mechanical support to dielectric liner layer1000. In some embodiments, the thickness of dielectric liner layer1000is between 10 and 100 nm with a dielectric constant between 2 and 6. An interconnect formation process that employs a patterning photolithography/etch process with self-aligned interconnects is disclosed to improve the photolithography OVL margin, since alignment is accomplished on a wider pattern. A wider OVL window reduces wafer defectivity associated with patterning such as, for example, metal bridges and deformed interconnects. Patterning defects are a reliability concern that adversely impacts wafer yield. In addition, the patterning photolithography/etch process with self-aligned interconnects supports the use of a multi-metal gap fill process where the interconnects can be filled with different types of conductive material. The multi-metal gap-fill process utilizes a selective metal oxide that is grown after a fill process to protect a deposited metal from subsequent etch processes. In some embodiments, a dielectric liner layer is formed between the interconnects. The dielectric liner layer includes voids, or “air-gaps,” in the space between the formed interconnects. The voids, or “air-gaps,” further lower the dielectric constant of the dielectric liner layer. A dielectric cap layer is formed over the dielectric liner layer to protect the dielectric liner layer from fracture and/or collapse. In some embodiments, a semiconductor fabrication method includes a substrate and a dielectric stack formed over the substrate.
A first interconnect layer made of a first conductive material and a second interconnect layer made of a second conductive material are formed on the dielectric stack. The first and second conductive materials are different from one another. A first metal oxide layer is formed on the first interconnect layer and a second metal oxide layer is formed on the second interconnect layer. A first dielectric layer which includes a void is formed between the first and second interconnect layers and over the first and second metal oxide layers. A second dielectric layer is formed over the first dielectric layer. In some embodiments, a semiconductor fabrication method includes a substrate and a dielectric stack formed over the substrate. A first interconnect layer and a second interconnect layer are formed with at least one of the first and second interconnect layers disposed in the dielectric stack. An opening is formed between the first and second interconnect layers. A first dielectric layer that includes a void is disposed in the opening. A second dielectric layer is formed over the first dielectric layer. In some embodiments, a semiconductor device includes a substrate and a dielectric stack over the substrate. A first dielectric layer is formed over the dielectric stack and a second dielectric layer is formed over the first dielectric layer. A first conductive structure is embedded in the first dielectric layer, where the first conductive structure forms a first interconnect layer with a first conductive material and a first portion that penetrates through the dielectric stack. A second conductive structure is embedded in the first dielectric layer, where the second conductive structure forms a second interconnect layer with a second conductive material and a second portion that penetrates through the dielectric stack. The first dielectric layer includes a void formed between the first and second conductive structures. 
It is to be appreciated that the Detailed Description section, and not the Abstract of the Disclosure section, is intended to be used to interpret the claims. The Abstract of the Disclosure section may set forth one or more but not all possible embodiments of the present disclosure as contemplated by the inventor(s), and thus, is not intended to limit the subjoined claims in any way. The foregoing disclosure outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art will appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. | 40,822 |
11860551 | DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention. The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. Because the illustrated embodiments of the present invention may, for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained to any greater extent than considered necessary for the understanding and appreciation of the underlying concepts of the present invention, and in order not to obfuscate or distract from the teachings of the present invention. Any reference in the specification to a method should be applied mutatis mutandis to a system capable of executing the method and should be applied mutatis mutandis to a computer readable medium that is non-transitory and stores instructions for executing the method. 
Any reference in the specification to a system should be applied mutatis mutandis to a method that may be executed by the system and should be applied mutatis mutandis to a computer readable medium that is non-transitory and stores instructions executable by the system. Any reference in the specification to a computer readable medium that is non-transitory should be applied mutatis mutandis to a method that may be applied when executing instructions stored in the computer readable medium and should be applied mutatis mutandis to a system configured to execute the instructions stored in the computer readable medium. There may be provided a system, a method, and a computer readable medium for detecting rare stochastic defects. The rare stochastic defects have statistics (a probability of occurrence) proportional to the size of the pattern under consideration, e.g. line or space width, or hole or pillar radius. The substrate may be a semiconductor wafer. The substrate may be manufactured to include one or more targets—each target includes one or more dense patterns. The density of the dense patterns dramatically increases the chances of occurrence of the rare stochastic defects in the one or more targets—and thus dramatically increases the chances of detecting the rare stochastic defects in the one or more targets. TABLE 1 illustrates an example of a relationship between pattern width and defect density (which is a function of the probability of occurrence of stochastic defects and may reflect the number of defects per area, per pattern, and the like). A pattern may include multiple features (a feature may be a line, a dot, and the like), and the width listed in TABLE 1 may be (a) a width of the feature, or (b) a distance between adjacent features. 
TABLE 1

Pattern width [nm]    Defect density
30                    10−14
29                    10−13
28                    10−12
27                    10−11
26                    10−10
25                    10−9
24                    10−8
23                    10−7
22                    10−6
21                    10−5
20                    10−4

For example—assuming that a desired pattern width is 30 nanometer and the defect density (in a functional pattern—a pattern of the desired width) should not exceed 10−13—then at a target that includes dense patterns of width of 20 nanometer the defect density should not exceed 10−5. Thus—by changing the pattern width from 30 nanometer to 20 nanometer—the probability of finding the defects is increased by a factor of about 10^8. The one or more targets may cover a very small fraction (below one percent, below ten percent, and the like) of the entire substrate—and the one or more targets can be scanned by charged particle tools in a reasonable amount of time. The patterns of the targets may be of any shape—for example an array of lines (or other structures) that is large enough for meaningful statistics of defects to be collected, in which the space width is reduced by some percent. FIG.1illustrates a substrate10, a first type of functional patterns11, a second type of functional patterns12, targets of a first type (such as arrays of dense lines)13, and targets of a second type (such as arrays of dots)14. FIG.2illustrates different types of rare stochastic defects. FIG.2includes parts of different patterns that include rare stochastic defects. A part of a first pattern41includes a part of an array of lines51and an unwanted bridge61. A part of a second pattern42includes a part of an array of lines51, and a cut62. A part of a third pattern43includes a part of an array of dots55, and a missing dot63. A part of a fourth pattern44includes a part of an array of dots55, and an unwanted bridge64. A part of a fifth pattern45includes a part of an array of lines52, and an unwanted bridge65. A part of a sixth pattern46includes a part of an array of lines53, and a cut66. A part of a seventh pattern47includes a part of an array of dots56, and missing dots67. 
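The trend in TABLE 1 (one decade of defect density per nanometer of pattern width over the 20-30 nm range) can be captured with a small model. The closed-form `defect_density` function below is an illustrative fit to the table, not part of the disclosure:

```python
# Sketch of the pattern-width / defect-density relationship of TABLE 1.
# The table implies one decade of defect density per nanometer of width,
# i.e. density(width) = 10**(16 - width); an assumption for illustration.

def defect_density(width_nm: int) -> float:
    """Defect density implied by TABLE 1 for widths between 20 and 30 nm."""
    if not 20 <= width_nm <= 30:
        raise ValueError("TABLE 1 only covers 20-30 nm")
    return 10.0 ** (16 - width_nm)

print(defect_density(30))  # matches the first table row (10^-14)
print(defect_density(20))  # matches the last table row (10^-4)
print(defect_density(20) / defect_density(30))  # gain from shrinking 30 -> 20 nm
```

The last line shows the core idea of the patent: shrinking the target pattern by a few nanometers raises the defect occurrence by many orders of magnitude, so a small dense target accumulates countable defect statistics.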
A part of an eighth pattern48includes a part of an array of dots57, and unwanted bridges68. FIG.3illustrates a method20that includes a sequence of steps. The steps may include:

Step22of searching for stochastic defects in targets of dense patterns. The dense patterns are denser than functional patterns located outside the targets. Functional means that the patterns are used during the operation of the dies. The targets are usually dedicated to the test.

Step24of estimating the occurrence of rare stochastic defects outside the targets based on the outcome of step22.

Step26of responding to the estimation—for example, defining the substrate as a defective substrate or not based on the estimation of step24, e.g. by comparing the estimated defect density to an allowable defect threshold and determining whether the substrate is acceptable or defective.

FIG.4illustrates method100for detecting a rare stochastic defect. Method100may start with step110of searching for a rare stochastic defect in a dense pattern of a substrate, wherein the rare stochastic defect (a) is of nanometric scale, (b) appears in a functional pattern of the substrate with a defect density that is below 10−9, and (c) appears in the dense pattern with a defect density that is above 10−7. Step110may include illuminating the dense pattern by a charged particle beam and generating images of the dense pattern (an image may cover at least a part of the entire dense pattern). Alternatively, step110may include receiving (for example by a remote computer that does not belong to a charged particle system) information about the dense pattern. The information may include one or more images of the dense pattern. The dense pattern is a dense representation of the functional pattern. 
The dense pattern should include the same features as the functional pattern, and may have an arrangement that differs from the functional pattern by at least one out of (a) a distance between features of the dense pattern, and (b) a width of the features of the dense pattern. Assuming that the functional pattern includes an array of lines—then a corresponding dense pattern will include a denser array of lines. The width of the lines of the dense pattern may be smaller than the width of the lines of the functional pattern. Additionally or alternatively, the distance between adjacent lines of the dense pattern may be smaller than the distance between adjacent lines of the functional pattern. Assuming that the functional pattern includes an array of dots—then a corresponding dense pattern will include a denser array of the dots. The width of the dots of the dense pattern may be smaller than the width of the dots of the functional pattern. Additionally or alternatively, the distance between adjacent dots of the dense pattern may be smaller than the distance between adjacent dots of the functional pattern. The dense pattern and the functional pattern may be arrays of lines. Step110may include searching for at least one out of a cut within a line, and an unwanted bridge between lines. The dense pattern and the functional pattern may be arrays of dots. Step110may include searching for at least one out of a missing dot and an unwanted bridge between dots. Step110may be followed by step120of estimating the occurrence of the rare stochastic defect within the functional pattern based on an outcome of the searching. The estimating may include determining the defect density of the rare stochastic defect within functional patterns that span over the substrate or span over one or more parts of the substrate. 
Step120may include step122of determining a defect density of the rare stochastic defect in the dense pattern, and step124of determining a defect density of the rare stochastic defect in the functional pattern based on (a) the defect density of the rare stochastic defect in the dense pattern, and (b) a relationship between defect densities of the rare stochastic defect within dense patterns and functional patterns. An example of the relationship is illustrated in TABLE 1. Step120may be followed by step130of responding to the outcome of step120. Step130may include evaluating a quality of the substrate based on the occurrence of the rare stochastic defect within the functional pattern. Step130may include disqualifying a substrate if a defect density of the rare stochastic defect within the functional pattern exceeds a predefined threshold. The predefined threshold may be defined by a manufacturer of the substrate, a customer, and the like. While method100was illustrated in relation to a test pattern and a rare stochastic defect, the method may be applied to multiple dense patterns, and the searching may include searching for different types of rare stochastic defects. It should be noted that different rare stochastic defects may be searched for in a single dense pattern. For example—a dense pattern of lines may be searched (during step110) for a cut within a line, and/or for an unwanted bridge between lines. Yet for another example—a dense pattern of dots may be searched (during step110) for a missing dot and/or for an unwanted bridge between dots. For example, step130may be followed by selecting another dense pattern and repeating steps110,120and130for one or more other rare stochastic defects. 
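Steps 122-130 can be sketched as a short calculation: measure the defect density in the dense target, scale it to the functional pattern using a known dense-to-functional relationship, and compare against the allowable threshold. The function names, the 0.5 cm² target area, the 10^8 density ratio, and the 10^-6 threshold below are illustrative assumptions, not values from the patent:

```python
# Minimal sketch of steps 122-130: measure the density in the dense target,
# scale it to the functional pattern, and compare against a threshold.
# All numeric values are illustrative assumptions.

def estimate_functional_density(defects_found: int,
                                dense_area_cm2: float,
                                dense_to_functional_ratio: float) -> float:
    """Step 122: density in the dense target; step 124: scale to functional."""
    dense_density = defects_found / dense_area_cm2
    return dense_density / dense_to_functional_ratio

def substrate_ok(functional_density: float, threshold: float) -> bool:
    """Step 130: disqualify the substrate if the density exceeds the threshold."""
    return functional_density <= threshold

# Example: 12 bridges found in a 0.5 cm^2 dense target, with an assumed 1e8
# density ratio between the dense and functional patterns:
est = estimate_functional_density(12, 0.5, 1e8)
print(est, substrate_ok(est, threshold=1e-6))
```

The key point the sketch illustrates is that a handful of counted defects in a small dense target translates into a statistically meaningful density estimate for the far rarer functional-pattern defects.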
Accordingly, during a next repetition of method100, step110may include searching for another rare stochastic defect in another dense pattern of the substrate; wherein the other rare stochastic defect appears in another functional pattern of the substrate with a defect density that is below 10−9and appears in the other dense pattern with a defect density that is above 10−7; wherein the searching comprises illuminating the other dense pattern by the charged particle beam; wherein the other dense pattern is a dense representation of the other functional pattern that differs from the other functional pattern by at least one out of (a) a distance between features of the other dense pattern, and (b) a width of the features of the other dense pattern; wherein the other rare stochastic defect differs from the rare stochastic defect by type. During the next repetition of method100, step120will include estimating the occurrence of the other rare stochastic defect within the other functional pattern based on an outcome of the searching for the other rare stochastic defect. Step110may be executed by a system that may be a charged particle system. It should be noted that steps120and130may be executed by the system or by another system—for example by a remote computer. FIG.5illustrates an example of a system200. System200includes imager210and a processor220. The imager210may be an electron beam imager, an electron beam microscope, an ion microscope, an ion imager, and the like. The electron beam microscope can be a scanning electron microscope, a transmission electron microscope, and the like. System200may be configured to execute method20and, additionally or alternatively, may be configured to execute method100. For example—imager210may be configured to illuminate a dense pattern of a substrate with a charged particle beam and generate images of the dense pattern. 
Processor220may be configured to: (i) search for a rare stochastic defect in the dense pattern based on an outcome of the illumination of the dense pattern, wherein the rare stochastic defect appears in a functional pattern of the substrate with a defect density that is below 10−9and appears in the dense pattern with a defect density that is above 10−7, and wherein the dense pattern is a dense representation of the functional pattern that differs from the functional pattern by at least one out of (a) a distance between features of the dense pattern, and (b) a width of the features of the dense pattern; and (ii) estimate an occurrence of the rare stochastic defect within the functional pattern based on an outcome of the searching. In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims. Moreover, the terms “front,” “back,” “top,” “bottom,” “over,” “under” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein. The connections as discussed herein may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise, the connections may for example be direct connections or indirect connections. 
The connections may be illustrated or described in reference to being a single connection, a plurality of connections, unidirectional connections, or bidirectional connections. However, different embodiments may vary the implementation of the connections. For example, separate unidirectional connections may be used rather than bidirectional connections and vice versa. Also, a plurality of connections may be replaced with a single connection that transfers multiple signals serially or in a time multiplexed manner. Likewise, single connections carrying multiple signals may be separated out into various different connections carrying subsets of these signals. Therefore, many options exist for transferring signals. Any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality. Furthermore, those skilled in the art will recognize that the boundaries between the above described operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed among additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments. Also for example, in one embodiment, the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device. 
Alternatively, the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner. However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage. While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention. | 18,625 |
11860552 | DETAILED DESCRIPTION FIG.1schematically depicts a lithographic apparatus according to one embodiment of the invention. The apparatus includes an illumination system (illuminator) IL configured to condition a radiation beam B (e.g. UV radiation or any other suitable radiation), a mask support structure (e.g. a mask table) MT constructed to support a patterning device (e.g. a mask) MA and connected to a first positioning device PM configured to accurately position the patterning device in accordance with certain parameters. The apparatus also includes a substrate table (e.g. a wafer table) WT or “substrate support” constructed to hold a substrate (e.g. a resist coated wafer) W and connected to a second positioning device PW configured to accurately position the substrate in accordance with certain parameters. The apparatus further includes a projection system (e.g. a refractive projection lens system) PS configured to project a pattern imparted to the radiation beam B by patterning device MA onto a target portion C (e.g. including one or more dies) of the substrate W. The illumination system IL may include various types of optical components, such as refractive, reflective, magnetic, electromagnetic, electrostatic or other types of optical components, or any combination thereof, for directing, shaping, or controlling radiation. The support structure MT supports, i.e. bears the weight of, the patterning device MA. It holds the patterning device MA in a manner that depends on the orientation of the patterning device MA, the design of the lithographic apparatus, and other conditions, such as for example whether or not the patterning device is held in a vacuum environment. The mask support structure can use mechanical, vacuum, electrostatic or other clamping techniques to hold the patterning device. The mask support structure may be a frame or a table, for example, which may be fixed or movable as required. 
The mask support structure may ensure that the patterning device is at a desired position, for example with respect to the projection system. Any use of the terms “reticle” or “mask” herein may be considered synonymous with the more general term “patterning device.” The term “patterning device” used herein should be broadly interpreted as referring to any device that can be used to impart a radiation beam with a pattern in its cross-section so as to create a pattern in a target portion of the substrate. It should be noted that the pattern imparted to the radiation beam may not exactly correspond to the desired pattern in the target portion of the substrate, for example if the pattern includes phase-shifting features or so-called assist features. Generally, the pattern imparted to the radiation beam will correspond to a particular functional layer in a device being created in the target portion, such as an integrated circuit. The patterning device MA may be transmissive or reflective. Examples of patterning devices include masks, programmable mirror arrays, and programmable LCD panels. Masks are well known in lithography, and include mask types such as binary, alternating phase-shift, and attenuated phase-shift, as well as various hybrid mask types. An example of a programmable mirror array employs a matrix arrangement of small mirrors, each of which can be individually tilted so as to reflect an incoming radiation beam in different directions. The tilted mirrors impart a pattern in a radiation beam which is reflected by the mirror matrix. The term “radiation beam” used herein encompasses all types of electromagnetic radiation, including ultraviolet (UV) radiation (e.g. having a wavelength of or about 365, 248, 193, 157 or 126 nm) and extreme ultra-violet (EUV) radiation (e.g. having a wavelength in the range of 5-20 nm), as well as particle beams, such as ion beams or electron beams. 
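As a quick cross-check of the wavelengths listed above, the corresponding photon energies follow from E = hc/λ, with hc ≈ 1239.84 eV·nm. This calculation is illustrative and not part of the disclosure:

```python
# Photon energies for the UV and EUV wavelengths listed above,
# using E[eV] = hc / lambda with hc ~ 1239.84 eV*nm.

HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm: float) -> float:
    return HC_EV_NM / wavelength_nm

for wl in (365, 248, 193, 157, 126, 13.5):
    print(f"{wl:6.1f} nm -> {photon_energy_ev(wl):7.2f} eV")
```

The jump from a few eV in the UV range to roughly 92 eV at the 13.5 nm EUV wavelength is one reason EUV systems need entirely different (reflective) optics from the transmissive UV systems described here.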
The term “projection system” used herein should be broadly interpreted as encompassing any type of projection system, including refractive, reflective, catadioptric, magnetic, electromagnetic and electrostatic optical systems, or any combination thereof, as appropriate for the exposure radiation being used, or for other factors such as the use of an immersion liquid or the use of a vacuum. Any use of the term “projection lens” herein may be considered as synonymous with the more general term “projection system”. As here depicted, the apparatus is of a transmissive type (e.g. employing a transmissive mask). Alternatively, the apparatus may be of a reflective type (e.g. employing a programmable mirror array of a type as referred to above, or employing a reflective mask). The lithographic apparatus may be of a type having two (dual stage) or more substrate tables or “substrate supports” (and/or two or more mask tables or “mask supports”). In such “multiple stage” machines the additional tables or supports may be used in parallel, or preparatory steps may be carried out on one or more tables or supports while one or more other tables or supports are being used for exposure. The lithographic apparatus may also be of a type wherein at least a portion of the substrate may be covered by a liquid having a relatively high refractive index, e.g. water, so as to fill a space between the projection system and the substrate. An immersion liquid may also be applied to other spaces in the lithographic apparatus, for example, between the mask and the projection system. Immersion techniques can be used to increase the numerical aperture of projection systems. The term “immersion” as used herein does not mean that a structure, such as a substrate, must be submerged in liquid, but rather only means that a liquid is located between the projection system and the substrate during exposure. Referring toFIG.1, the illuminator IL receives a radiation beam from a radiation source SO. 
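The benefit of immersion can be made concrete with the Rayleigh resolution criterion R = k1·λ/NA: since NA = n·sin(θ), replacing air (n ≈ 1.0) with water (n ≈ 1.44 at 193 nm) between the projection system and the substrate allows NA > 1. The NA values and the k1 = 0.3 process factor below are typical illustrative numbers, not values from the text:

```python
# Why immersion raises the numerical aperture: NA = n*sin(theta), so water
# (n ~ 1.44 at 193 nm) allows NA > 1. Resolution is estimated with the
# Rayleigh criterion R = k1 * lambda / NA; k1 = 0.3 is an assumed process
# factor, and the NA values are typical illustrative numbers.

WAVELENGTH_NM = 193.0
K1 = 0.3

def resolution_nm(numerical_aperture: float) -> float:
    return K1 * WAVELENGTH_NM / numerical_aperture

dry_na = 0.93        # near the n = 1.0 limit of a dry system
immersion_na = 1.35  # achievable with water immersion (n ~ 1.44)

print(f"dry:       R ~ {resolution_nm(dry_na):.1f} nm")
print(f"immersion: R ~ {resolution_nm(immersion_na):.1f} nm")
```

The roughly 45% gain in NA translates directly into finer printable features at the same wavelength, which is the motivation for the immersion technique described in the text.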
The radiation source SO and the lithographic apparatus may be separate entities, for example when the source is an excimer laser. In such cases, the radiation source SO is not considered to form part of the lithographic apparatus and the radiation beam is passed from the radiation source SO to the illuminator IL with the aid of a beam delivery system BD including, for example, suitable directing mirrors and/or a beam expander. In other cases the radiation source SO may be an integral part of the lithographic apparatus, for example when the source is a mercury lamp. The radiation source SO and the illuminator IL, together with the beam delivery system BD if required, may be referred to as a radiation system. The illuminator IL may include an adjuster AD configured to adjust the angular intensity distribution of the radiation beam. Generally, at least the outer and/or inner radial extent (commonly referred to as σ-outer and σ-inner, respectively) of the intensity distribution in a pupil plane of the illuminator can be adjusted. In addition, the illuminator IL may include various other components, such as an integrator IN and a condenser CO. The illuminator may be used to condition the radiation beam so that it has a desired uniformity and intensity distribution in its cross-section. The radiation beam B is incident on the patterning device MA, which is held on the support structure MT, and is patterned by the patterning device. Having traversed the patterning device MA, the radiation beam B passes through the projection system PS, which focuses the beam onto a target portion C of the substrate W. With the aid of the second positioning device PW and position sensor IF (e.g. an interferometric device, linear encoder or capacitive sensor), the substrate table WT can be moved accurately, e.g. so as to position different target portions C in the path of the radiation beam B. 
Similarly, the first positioning device PM and another position sensor (which is not explicitly depicted inFIG.1) can be used to accurately position the patterning device MA with respect to the path of the radiation beam B, e.g. after mechanical retrieval from a mask library, or during a scan. In general, movement of the support structure MT may be realized with the aid of a long-stroke module (coarse positioning) and a short-stroke module (fine positioning), which form part of the first positioning device PM. Similarly, movement of the substrate table WT or “substrate support” may be realized using a long-stroke module and a short-stroke module, which form part of the second positioner PW. In the case of a stepper (as opposed to a scanner) the support structure MT may be connected to a short-stroke actuator only, or may be fixed. Patterning device MA and substrate W may be aligned using mask alignment marks M1, M2and substrate alignment marks P1, P2. Although the substrate alignment marks P1, P2as illustrated occupy dedicated target portions, they may be located in spaces between target portions C (these are known as scribe-lane alignment marks). Similarly, in situations in which more than one die is provided on the patterning device MA, the mask alignment marks M1, M2may be located between the dies. The depicted apparatus could be used in at least one of the following modes:1. In step mode, the support structure MT and the substrate table WT are kept essentially stationary, while an entire pattern imparted to the radiation beam is projected onto a target portion C at one time (i.e. a single static exposure). The substrate table WT is then shifted in the X and/or Y direction so that a different target portion C can be exposed. In step mode, the maximum size of the exposure field limits the size of the target portion C imaged in a single static exposure.2. 
In scan mode, the support structure MT and the substrate table WT are scanned synchronously while a pattern imparted to the radiation beam is projected onto a target portion C (i.e. a single dynamic exposure). The velocity and direction of the substrate table WT or “substrate support” relative to the support structure MT may be determined by the (de-)magnification and image reversal characteristics of the projection system PS. In scan mode, the maximum size of the exposure field limits the width (in the non-scanning direction) of the target portion in a single dynamic exposure, whereas the length of the scanning motion determines the height (in the scanning direction) of the target portion.3. In another mode, the support structure MT is kept essentially stationary holding a programmable patterning device, and the substrate table WT is moved or scanned while a pattern imparted to the radiation beam is projected onto a target portion C. In this mode, generally a pulsed radiation source is employed and the programmable patterning device is updated as required after each movement of the substrate table WT or in between successive radiation pulses during a scan. This mode of operation can be readily applied to maskless lithography that utilizes a programmable patterning device, such as a programmable mirror array of a type as referred to above. Combinations and/or variations on the above described modes of use or entirely different modes of use may also be employed. FIG.2shows a first embodiment of a system1according to the invention. The system1can for example be a stage system. The system1as shown inFIG.2comprises member10, which has a surface11. In this example, the surface11is the top surface of member10. The member10may for example be an object table of a stage system. The member10is provided with a plurality of air bearing devices20. Each air bearing device20comprises an air bearing body21, which has a free surface22. 
The free surface22is not covered by other physical components of the system. In this embodiment, it forms part of the outer surface of the member10. In use, and in particular after optionally clamping the object on the member10, the free surface22may be covered by the object that is supported on the member10. In the example ofFIG.2, the free surfaces22of the air bearing bodies21are arranged flush with the surface11of the member10, but this is not necessary. InFIG.2, four air bearing devices are provided, but alternatively any other number of air bearing devices may be provided, in particular two or more. Each air bearing device20further comprises a primary channel23which extends through the air bearing body21. The primary channel23has an inlet opening24in the free surface22. Each air bearing device20further comprises a secondary channel system25which extends through the air bearing body21. The secondary channel system25has a plurality of discharge openings26in the free surface22of the air bearing body21. For reasons of clarity, only a few discharge openings are provided with a reference numeral inFIG.2. Alternatively, the secondary channel system25may have only one discharge opening26in the free surface22of the air bearing body21. In the example shown inFIG.2, the air bearing body21comprises a porous material having interconnected cavities. The secondary channel system25is formed by the interconnected cavities in the porous material. The porous material is for example a sintered ceramic material, e.g. SiSiC. Optionally, the faces of the porous material through which no pressurised gas should escape are sealed. In an alternative embodiment, not shown in the drawing, the secondary channel system25in the air bearing body may be formed by channels that are for example machined, e.g. drilled, or etched into the air bearing body.
Preferably, the primary channel23is connectable to a source of sub-atmospheric pressure, and the secondary channel system25is connectable to a source of pressurised gas. In use, when an object is arranged on or above the surface11, a pressurised gas can be supplied between the object and the surface11and/or between the object and the free surfaces22, via the discharge openings26of the secondary channel system25. At the same time, a sub-atmospheric pressure is provided at the inlet24of the primary channel23. The pressurised gas pushes the object away from the surface11, while the sub-atmospheric pressure pulls the object towards surface11. The balance between the pushing force that the pressurised gas exerts on the object and the pulling force that the sub-atmospheric pressure exerts on the object makes it possible to position the object in z-direction relative to the surface11, which surface11extends in the x-y plane. The flow resistance in the secondary channel system25is higher than the flow resistance in the primary channel23. This stabilises the system, because the higher flow resistance prevents a force in z-direction, for example caused by tilting of the object around the x-axis or y-axis, from forcing the pressurised air back into the secondary channel system25. If pressurised gas were forced back into the secondary channel system, the balance between the pushing force and the pulling force would be disturbed (at least temporarily), making the system unstable. It may even result in the object hitting the surface11before the balance between the pushing force and the pulling force is re-established. This is called “hammering”, and is not desired. In addition, the feature that the flow resistance in the secondary channel system25is higher than the flow resistance in the primary channel23results in a high tilt stiffness of the system.
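The balance between the pushing and pulling forces can be pictured with a minimal one-dimensional sketch. The force laws, gains and units below are invented for illustration only and are not part of the embodiment: the pushing force of the gas film is assumed to fall off with the flying height, while the pulling force of the sub-atmospheric pressure is taken as roughly constant over small gaps.

```python
def push_force(h_um, k_push=50.0):
    """Toy gas-film pushing force (N), decreasing with the gap in micrometres."""
    return k_push / h_um

def pull_force(h_um, f_pull=10.0):
    """Toy vacuum pulling force (N), roughly constant over small gaps."""
    return f_pull

def equilibrium_gap(lo=0.1, hi=100.0):
    """Bisect for the gap at which the pushing and pulling forces balance."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if push_force(mid) > pull_force(mid):
            lo = mid  # net push: the object floats higher, so the gap grows
        else:
            hi = mid  # net pull: the object is drawn in, so the gap shrinks
    return 0.5 * (lo + hi)

print(round(equilibrium_gap(), 3))  # 50/h = 10 balances at h = 5 µm
```

With these toy force laws the object settles where the two forces are equal; raising the supply pressure (larger `k_push`) or weakening the vacuum (smaller `f_pull`) would shift that equilibrium to a larger gap.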
Preferably, the flow resistance in the secondary channel system25is also higher than the flow resistance in the gap between the free surface22and the object to be positioned. So, the flow resistance in the secondary channel system25being higher than the flow resistance in the primary channel23increases the stability and the reliability of the system. Optionally, the member10may comprise channels12to connect the primary channel23of the air bearing device20to a source of sub-atmospheric pressure and further channels13to connect the secondary channel system25of the air bearing device20to a source of pressurised gas. In the example ofFIG.2, in the air bearing devices20the discharge openings26of the secondary channel system25surround the inlet opening24of the primary channel23. This provides further stability to the system, because the pushing force exerted by the pressurised gas is distributed over a relatively large area, often evenly distributed over this area. In addition, pushing forces on opposite sides of the inlet opening24of the primary channel23may counteract each other, which also further stabilises the system. This results in a high tilt stiffness of the system. This effect is even more substantial when a porous material is used for the air bearing body, because of the large number of discharge openings26in such an embodiment. In a variant of the embodiment ofFIG.2, the system further comprises a sub-atmospheric pressure source which is in fluid communication with the primary channel23of at least one air bearing device20. Alternatively or in addition, in this variant, the system further comprises a source of pressurised gas which is in fluid communication with the secondary channel system25of at least one air bearing device20. FIG.3schematically shows a further embodiment of the system according to the invention. In the embodiment ofFIG.3, the member10is provided with a two-dimensional array of air bearing devices20.
In this example, all air bearing devices20are identical, but this is not necessary. The air bearing devices20in the embodiment ofFIG.3are for example the same as the air bearing devices20as shown inFIG.2or as shown inFIG.6. Again, the member10may for example be an object table of a stage system or a member comprising a guide surface. In the embodiment ofFIG.3, the air bearing devices20are arranged in a rectangular grid. They are distributed over the surface11of the member10, so they can act locally on an object that is present on or above the member10. This makes it possible not only to position the object, but also to change the shape of the object. For example, it is possible to correct warp, concavity, convexity, or another low order Zernike shape, or local deformation or other deviations from the flat shape of the object. In the embodiment ofFIG.3, an air bearing device20* is provided at the centre of the member10. This further increases the stability of the system, as the actions of air bearing devices20on opposite sides of the centre of the member10may counteract each other. FIG.4schematically shows a further embodiment of the system according to the invention. In the embodiment ofFIG.4, the member10is provided with an array of air bearing devices20. In this example, all air bearing devices20are identical, but this is not necessary. The air bearing devices20in the embodiment ofFIG.4are for example the same as the air bearing devices20as shown inFIG.2or as shown inFIG.6. Again, the member10may for example be an object table of a stage system or a member comprising a guide surface. In the embodiment ofFIG.4, the air bearing devices20are arranged in a polar grid. They are arranged on lines meeting in the centre of the polar grid. In the example ofFIG.4, the air bearing devices are arranged in concentric circles around the centre of the polar grid. So, a number of air bearing devices20have the same distance to the centre of the polar grid.
The air bearing devices are distributed over the surface11of the member10, so they can act locally on an object that is present on or above the member10. This makes it possible not only to position the object, but also to change the shape of the object. For example, it is possible to correct warp, concavity, convexity, or another low order Zernike shape, local deformation or other deviations from the flat shape of the object. In the embodiment ofFIG.4, an air bearing device20* is provided at the centre of the member10. This further increases the stability of the system, as the actions of air bearing devices20on opposite sides of the centre of the member10may counteract each other. InFIGS.2,3and4, the air bearing bodies21have a rectangular free surface22, and are generally box-shaped. Alternatively, other shapes are possible, for example cylindrical air bearing bodies, e.g. with a circular or elliptical free surface. Optionally, in any of the embodiments of theFIGS.2,3and4, the plurality of air bearing devices20is adapted to change the shape of an object which is to be arranged on the member10by controlling the pressure and/or gas flow rate at the inlet opening24of a primary channel23and/or the pressure and/or gas flow rate at the discharge openings26of the secondary channel system25of at least one air bearing device20. Optionally, the control of the pressure and/or gas flow rate at the inlet24of a primary channel23and/or the pressure and/or gas flow rate at the discharge openings26of a secondary channel system25of at least one air bearing device20is based on the expected and/or measured shape of the object. In a variant, any of the embodiments of theFIGS.2,3and/or4further comprises a measurement system which is adapted to obtain shape data of an object arranged on or above the member10during activation of the air bearing devices20. The measurement system optionally comprises capacitive sensors, and/or inductive sensors and/or an interferometer.
Optionally, one or more parts of the measurement system are arranged on or in the member10, so that measurement of the object can take place while the object is arranged at or above the member10. Alternatively or in addition, the measurement system may be arranged such that measurement of the object takes place before the object is arranged at or above the member10. In this variant, the embodiment further comprises a control device for controlling the pressure and/or gas flow rate at the inlet opening24of a primary channel23and/or the pressure and/or gas flow rate at the discharge openings26of a secondary channel system25of at least one air bearing device20. The control device is adapted to receive the shape data from the measurement system and to control the pressure and/or gas flow rate at the inlet opening24of the primary channel23and/or the pressure and/or gas flow rate at the discharge openings26of the secondary channel system25of the at least one air bearing device20, based on the received shape data. Optionally, in any of the embodiments of theFIGS.2,3and/or4, the plurality of air bearing devices comprises multiple air bearing groups. Each air bearing group comprises at least one air bearing device20. For example, in the embodiment ofFIG.3, each row of air bearing devices20extending in the x-direction may form an air bearing group, or each row of air bearing devices20extending in the y-direction may form an air bearing group. In the embodiment shown inFIG.4, for example the air bearing devices20that have the same distance to the centre of the polar grid (so the air bearing devices on the same circle) may form an air bearing group. Within an air bearing group, the pressure and/or gas flow rate at the inlets of the primary channels is in this embodiment controllable separately from the pressure and/or gas flow rate at the inlet openings of the primary channels in another air bearing group.
This makes it possible to control the local forces that are exerted on the object, and thereby to change the shape of the object. Optionally, the pressure and/or gas flow rate at the inlet openings of the primary channels is controllable independently from the pressure and/or gas flow rate at the inlet openings of the primary channels in another air bearing group. Alternatively or in addition, the pressure and/or the gas flow rate at the discharge openings of the secondary channel systems in at least one air bearing group is controllable separately from the pressure and/or gas flow rate at the discharge openings of secondary channel systems in another air bearing group. This makes it possible to control the local forces that are exerted on the object, and thereby to change the shape of the object. Optionally, the pressure and/or the gas flow rate at the discharge openings of the secondary channel systems in at least one air bearing group is controllable independently from the pressure and/or gas flow rate at the discharge openings of secondary channel systems in another air bearing group. The separate control of the pressure and/or flow rate in any of the air bearing groups may lead to differences in pressure and/or flow rate between different air bearing groups, and therewith to differences in the magnitude of the local pulling and/or pushing forces exerted on the object at any given time. Alternatively or in addition, with this feature it is possible to control the timing of the activation of the respective air bearing groups. For example, one air bearing group may be activated only after another air bearing group has been activated. Activating one air bearing group only after another air bearing group has been activated can be advantageous when the shape of the object on or above the member10has to be corrected.
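Such separate per-group control can be sketched as follows. The group names, pressure values and the helper `raise_local_push` are hypothetical, chosen only to illustrate that each group carries its own vacuum and supply setpoints:

```python
from dataclasses import dataclass

@dataclass
class GroupSetpoint:
    vacuum_kpa: float  # sub-atmospheric pressure at the primary-channel inlets
    supply_kpa: float  # pressurised-gas pressure at the discharge openings

# Each air bearing group holds its own setpoints, controllable separately.
setpoints = {
    "centre":     GroupSetpoint(vacuum_kpa=40.0, supply_kpa=160.0),
    "inner_ring": GroupSetpoint(vacuum_kpa=50.0, supply_kpa=150.0),
    "outer_ring": GroupSetpoint(vacuum_kpa=60.0, supply_kpa=140.0),
}

def raise_local_push(group, delta_kpa):
    """Increase the supply pressure of one group only, leaving the other
    groups untouched, so the pushing force on the object changes locally."""
    setpoints[group].supply_kpa += delta_kpa

raise_local_push("inner_ring", 5.0)
print(setpoints["inner_ring"].supply_kpa)  # 155.0
print(setpoints["outer_ring"].supply_kpa)  # unchanged: 140.0
```

Adjusting one group's setpoints changes the local pushing (or pulling) force only at that group's devices, which is exactly the mechanism by which the shape of the object can be changed.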
For example, if the object is round, but dome-shaped instead of flat, and the embodiment ofFIG.4is used, it is advantageous to first activate the air bearing device20* at the centre of the member10. When the centre of the object is brought to the desired level above the member10by the combined action of the pulling force and pushing forces of the air bearing device20* at the centre of the member10, the air bearing devices20at the inner ring14are activated. They bring a ring around the centre of the object to the desired level above the member10. The object can move freely in the x-y plane because there is no friction with any mechanical support members, which reduces the internal stresses in the object. After activating the air bearing devices in the inner ring14, the air bearing devices20in the outer ring15are activated. They bring an outer ring of the object to the desired level above the member10. This way, the object is flattened and the dome shape is reduced or even made to disappear. After flattening, the object may be clamped or otherwise fixed to or relative to the member10. The invention may be applied in a stage system, for example in a stage system such as used in a lithographic apparatus. In that case, the member may be an object table or a substrate table. So, in a further embodiment, the invention provides a stage system, for positioning an object, which stage system comprises:
an object table adapted to support the object to be positioned,
which object table is provided with a plurality of air bearing devices,
wherein each air bearing device comprises:
an air bearing body, which has a free surface,
a primary channel which extends through the air bearing body and has an inlet opening in the free surface,
a secondary channel system which extends through the air bearing body and which has a plurality of discharge openings in the free surface,
wherein the flow resistance in the secondary channel system is higher than the flow resistance in the primary channel.
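The centre-outwards activation sequence described above can be sketched as a short procedure. The group names and the `activate` callback are hypothetical; in a real system each activation would be followed by waiting until the corresponding part of the object has settled at the desired level:

```python
def flatten_dome(activate):
    """Activate the air bearing groups from the centre outwards: first the
    central device, then the inner ring, then the outer ring, so that a
    dome-shaped object is flattened ring by ring."""
    for group in ("centre", "inner_ring_14", "outer_ring_15"):
        activate(group)  # in practice: wait here until the level has settled

activation_order = []
flatten_dome(activation_order.append)
print(activation_order)  # ['centre', 'inner_ring_14', 'outer_ring_15']
```

Passing the sequencing as a callback keeps the ordering logic separate from the hardware interface, so the same sequence could drive real pressure controllers or, as here, merely record the order for inspection.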
The invention allows the object, e.g. a substrate W or a patterning device MA, to be positioned in a direction perpendicular to a surface of the object table prior to clamping or otherwise fixing the object to that surface of the object table. In some embodiments, the invention allows the shape of the object to be changed or corrected prior to clamping or otherwise fixing the object to the object table. FIG.5shows a side view of an example of an object table110of a stage system of a lithographic apparatus. The top side of the object table110comprises a vacuum clamp104to clamp an object, e.g., a substrate W, on the object table110. The object table110further comprises three retractable pins105, also known as e-pins, which are movable with respect to the object table between an extended position in which the pins105extend from the object table110and a retracted position in which the pins105are retracted in the object table110. The retractable pins105are movable in a substantially vertical direction, i.e., in a direction substantially perpendicular to a main plane of an object to be supported by the pins. The retractable pins105are used for transfer of an object, e.g. a substrate W, between the object table110and a robot or any other type of object handler. The retractable pins105are provided so that e.g. a gripper of a robot may be placed under the object for supporting the object. When the robot is configured to hold the object at the sides or top, the retractable pins105may be omitted. In alternative embodiments any other type of device capable of exerting an attraction force on an object, such as electrostatic, magnetic or electromagnetic clamps may be used. In this embodiment a robot places an object on the pins105which are in the extended position. The pins105are then moved to the retracted position so that the object comes to rest on the support surface of the object table110.
After an object supported by the object table110is for example exposed to a patterned beam of radiation, the object is exchanged for another one. For exchange of the object, it is lifted from the object table110by the retractable pins105which are moved from the retracted position to the extended position. When the retractable pins105are in the extended position, the object is taken over by the robot or any other type of object handler. The vacuum clamp104is formed by a recessed surface106which is surrounded by a sealing rim107. A suction conduit108is provided to create a low pressure in a vacuum space delimited by the recessed surface106, the sealing rim107and an object placed or to be placed on the object table110. The suction conduit108is connected to a suction pump to draw air, or another gas present in the process environment, out of the vacuum space. The lower pressure provides a vacuum force which draws an object placed within a certain range above the supporting surface towards the object table110in order to clamp it to the object table. In the recessed surface106a number of burls109are arranged. The top ends of the burls109provide support surfaces for an object to be placed on the object table110. The sealing rim107and the top ends of the burls109may be arranged in substantially the same plane to provide a substantially flat surface for supporting an object. In an alternative embodiment the sealing rim107may be arranged lower than the burls109, as shown inFIG.5, or vice versa. In the object table110, a plurality of air bearing devices20is arranged. These air bearing devices are for example air bearing devices as shown inFIGS.2,3and4. Each air bearing device comprises a primary channel23which is preferably connected to a source16of sub-atmospheric pressure via channel12. Each air bearing device further comprises a secondary channel system25which is formed by interconnected cavities in a porous material.
The secondary channel system is preferably connected to a source17of pressurised gas via channel13. FIG.6shows an example of an air bearing device20as can be used in the embodiment ofFIG.5. The object table110is provided with a plurality of air bearing devices20. Each air bearing device20comprises an air bearing body21, which has a free surface22. The free surface22is not covered by other physical components of the system. In use, and in particular after optionally clamping the object on the object table110, the free surface22may be covered by the object that is supported on the object table110. In the example ofFIGS.5and6, the free surfaces22of the air bearing bodies21extend somewhat above the surface111of the object table, but alternatively they can be arranged flush with the surface111of the object table110. Each air bearing device20further comprises a primary channel23which extends through the air bearing body21. The primary channel23has an inlet opening24in the free surface22. Each air bearing device further comprises a secondary channel system25which extends through the air bearing body21. The secondary channel system25has a plurality of discharge openings26in the free surface22of the air bearing body21. In the example shown inFIGS.5and6, the air bearing body21comprises a porous material having interconnected cavities. The secondary channel system25is formed by the interconnected cavities in the porous material. The porous material is for example a sintered ceramic material, e.g. SiSiC. Optionally, the faces of the porous material through which no pressurised gas should escape are sealed. In an alternative embodiment, not shown in the drawing, the secondary channel system25in the air bearing body may be formed by channels that are for example machined, e.g. drilled, or etched into the air bearing body.
Preferably, the primary channel23is connectable to a source of sub-atmospheric pressure16, and the secondary channel system25is connectable to a source of pressurised gas. In use, when an object is arranged on or above the surface111, a pressurised gas can be supplied between the object and the surface111and/or between the object and the free surfaces22, via the discharge openings26of the secondary channel system25. At the same time, a sub-atmospheric pressure is provided at the inlet24of the primary channel23. The pressurised gas pushes the object away from the surface111, while the sub-atmospheric pressure pulls the object towards surface111. The balance between the pushing force that the pressurised gas exerts on the object and the pulling force that the sub-atmospheric pressure exerts on the object makes it possible to position the object in z-direction relative to the surface111, which surface111extends in the x-y plane. The flow resistance in the secondary channel system25is higher than the flow resistance in the primary channel23. This stabilises the system, because the higher flow resistance prevents a force in z-direction, for example caused by tilting of the object around the x-axis or y-axis, from forcing the pressurised air back into the secondary channel system25. If pressurised gas were forced back into the secondary channel system, the balance between the pushing force and the pulling force would be disturbed (at least temporarily), making the system unstable. It may even result in the object hitting the surface111before the balance between the pushing force and the pulling force is re-established. This is called “hammering”, and is not desired. In addition, the feature that the flow resistance in the secondary channel system25is higher than the flow resistance in the primary channel23results in a high tilt stiffness of the system.
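The flow-resistance ordering on which this stability argument rests can be codified as a simple check. The function name and the example values are invented; the conditions themselves restate the text (the secondary channel system should present a higher flow resistance than the primary channel, and preferably also than the gap under the free surface):

```python
def is_stable_ordering(r_secondary, r_primary, r_gap):
    """Check the flow-resistance ordering described above: the secondary
    channel system must have a higher flow resistance than the primary
    channel and (preferably) than the gap between free surface and object."""
    return r_secondary > r_primary and r_secondary > r_gap

# Invented example values in arbitrary units of flow resistance.
print(is_stable_ordering(r_secondary=8.0, r_primary=2.0, r_gap=5.0))  # True
print(is_stable_ordering(r_secondary=3.0, r_primary=2.0, r_gap=5.0))  # False
```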
Preferably, the flow resistance in the secondary channel system25is also higher than the flow resistance in the gap between the free surface22and the object to be positioned. So, the flow resistance in the secondary channel system25being higher than the flow resistance in the primary channel23increases the stability and the reliability of the system. In this embodiment, the object table110may comprise channels12to connect the primary channel23of the air bearing device20to the source of sub-atmospheric pressure and further channels13to connect the secondary channel system25of the air bearing device20to the source of pressurised gas. In the example ofFIGS.5and6, in the air bearing devices20the discharge openings26of the secondary channel system25surround the inlet opening24of the primary channel23. This provides further stability to the system, because the pushing force exerted by the pressurised gas is distributed over a relatively large area, often evenly distributed over this area. In addition, pushing forces on opposite sides of the inlet opening24of the primary channel23may counteract each other, which also further stabilises the system. This furthermore results in a high tilt stiffness of the system. This effect is even more substantial when a porous material is used for the air bearing body, because of the large number of discharge openings26in such an embodiment. In a variant of the embodiment ofFIGS.5and6, the system further comprises a sub-atmospheric pressure source16which is in fluid communication with the primary channel23of at least one air bearing device20. Alternatively or in addition, in this variant, the system further comprises a source17of pressurised gas which is in fluid communication with the secondary channel system25of at least one air bearing device20.
Optionally, in the embodiment ofFIGS.5and6, the air bearing devices20are arranged in a rectangular grid, for example as shown inFIG.3, or alternatively in a polar grid, for example as shown inFIG.4. The use of the arrangements ofFIG.3orFIG.4in the stage system ofFIGS.5and/or6makes it possible to change the shape of the object. For example, it is possible to correct warp, concavity, convexity, or another low order Zernike shape, or local deformation or other deviations from the flat shape of the object. In the embodiment shown inFIGS.5and6, the air bearing bodies21have a rectangular free surface22, and are generally box-shaped. Alternatively, other shapes are possible, for example cylindrical air bearing bodies, e.g. with a circular or elliptical free surface. Optionally, in the embodiment of theFIGS.5and6, the plurality of air bearing devices20is adapted to change the shape of an object which is to be arranged on the object table110by controlling the pressure and/or gas flow rate at the inlet opening24of a primary channel23and/or the pressure and/or gas flow rate at the discharge openings26of the secondary channel system25of at least one air bearing device20. Optionally, the control of the pressure and/or gas flow rate at the inlet24of a primary channel23and/or the pressure and/or gas flow rate at the discharge openings26of a secondary channel system25of at least one air bearing device20is based on the expected and/or measured shape of the object. In a variant, the embodiment of theFIGS.5and6further comprises a measurement system which is adapted to obtain shape data of an object arranged on or above the object table110during activation of the air bearing devices20. The measurement system optionally comprises capacitive sensors, and/or inductive sensors and/or an interferometer.
Optionally, one or more parts of the measurement system are arranged on or in the object table110, so that measurement of the object can take place while the object is arranged at or above the object table110. Alternatively or in addition, the measurement system may be arranged such that measurement of the object takes place before the object is arranged at or above the object table110. In this variant, the embodiment further comprises a control device for controlling the pressure and/or gas flow rate at the inlet opening24of a primary channel23and/or the pressure and/or gas flow rate at the discharge openings26of a secondary channel system25of at least one air bearing device20. The control device is adapted to receive the shape data from the measurement system and to control the pressure and/or gas flow rate at the inlet opening24of the primary channel23and/or the pressure and/or gas flow rate at the discharge openings26of the secondary channel system25of the at least one air bearing device20, based on the received shape data. Optionally, in the embodiment of theFIGS.5and6, the plurality of air bearing devices comprises multiple air bearing groups. Each air bearing group comprises at least one air bearing device20. Within an air bearing group, the pressure and/or gas flow rate at the inlets of the primary channels is in this embodiment controllable separately from the pressure and/or gas flow rate at the inlet openings of the primary channels in another air bearing group. This makes it possible to control the local forces that are exerted on the object, and thereby to change the shape of the object. Optionally, the pressure and/or gas flow rate at the inlet openings of the primary channels is controllable independently from the pressure and/or gas flow rate at the inlet openings of the primary channels in another air bearing group.
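A minimal sketch of one such control step, assuming the control device receives per-device height measurements as shape data and applies a proportional correction to the supply pressures. The gain, pressure limits and units are invented, and a real controller would of course be considerably more involved:

```python
def control_step(measured_um, target_um, supply_kpa, gain=2.0,
                 p_min=100.0, p_max=200.0):
    """Return updated per-device supply pressures from shape data: where the
    object is locally too low the supply pressure is raised, where it is too
    high the supply pressure is lowered, clipped to a safe range."""
    updated = []
    for height, pressure in zip(measured_um, supply_kpa):
        error = target_um - height  # positive: object is locally too low
        updated.append(min(p_max, max(p_min, pressure + gain * error)))
    return updated

print(control_step([4.0, 5.0, 6.5], 5.0, [150.0, 150.0, 150.0]))
# [152.0, 150.0, 147.0]
```

Repeating this step with fresh measurements forms the feedback loop between measurement system and control device; the same structure could equally well adjust the sub-atmospheric pressures at the primary-channel inlets instead of, or in addition to, the supply pressures.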
Alternatively or in addition, the pressure and/or the gas flow rate at the discharge openings of the secondary channel systems in at least one air bearing group is controllable separately from the pressure and/or gas flow rate at the discharge openings of secondary channel systems in another air bearing group. This makes it possible to control the local forces that are exerted on the object, and thereby to change the shape of the object. Optionally, the pressure and/or the gas flow rate at the discharge openings of the secondary channel systems in at least one air bearing group is controllable independently from the pressure and/or gas flow rate at the discharge openings of secondary channel systems in another air bearing group. The separate control of the pressure and/or flow rate in any of the air bearing groups may lead to differences in pressure and/or flow rate between different air bearing groups, and therewith to differences in the magnitude of the local pulling and/or pushing forces exerted on the object at any given time. Alternatively or in addition, with this feature it is possible to control the timing of the activation of the respective air bearing groups. For example, one air bearing group may be activated only after another air bearing group has been activated. Activating one air bearing group only after another air bearing group has been activated can be advantageous when the shape of the object on or above the object table110has to be corrected. For example, if the object is round, but dome-shaped instead of flat, and the configuration ofFIG.4is used in the embodiment ofFIGS.5and6, it is advantageous to first activate the air bearing device20* at the centre of the object table110. When the centre of the object is brought to the desired level above the object table110by the combined action of the pulling force and pushing forces of the air bearing device20* at the centre of the object table110, the air bearing devices20at the inner ring14are activated.
They bring a ring around the centre of the object to the desired level above the object table110. The object can move freely in the x-y plane because there is no friction with any mechanical support members, which reduces the internal stresses in the object. After activating the air bearing devices in the inner ring14, the air bearing devices20in the outer ring15are activated. They bring an outer ring of the object to the desired level above the object table110. This way, the object is flattened and the dome shape is reduced or even made to disappear. After flattening, the object may be clamped or otherwise fixed to or relative to the object table110. FIG.7shows a variant of a stage system according to the invention. In this variant, the object table110comprises a porous zone110a. Optionally, also a non-porous zone110bis present. At least one air bearing body21forms part of the porous zone110aof the object table110. In the embodiment shown inFIG.7, impermeable elements27have been provided in order to avoid leakage of pressurised gas when the porous air bearing bodies21have been connected to a source of pressurised gas. For example, the object table110is in this variant made from a sintered ceramic, e.g. from SiSiC. After sintering, the entire object table110is porous. Then, a sealant substance is allowed to fill the interconnected cavities, e.g. by arranging the lower part of the object table in a bath of liquid sealant, which enters the interconnected cavities due to capillary action. This process is, however, stopped before the entire object table is saturated. Only a lower part of the object table110is filled with the sealant. This lower part forms the non-porous zone110b. The impermeable elements27may be formed by locally injecting sealant. In the embodiments described, the air bearing device20is arranged to discharge air via the discharge opening26.
The discharged air may have the same composition as the ambient air, or may have a different composition than the ambient air. For example, the humidity of the discharged air may be different, for example lower, than that of the ambient air. The discharged air may comprise any type of suitable gas, such as nitrogen or carbon dioxide. It is clear to the skilled person that the term ‘air bearing’ may be interpreted as the more general term ‘gas bearing’. In an embodiment, the invention further provides a method of positioning an object, which method comprises the following steps:
- arranging an object on or above the object table of a stage system according to the invention,
- making pressurized gas flow out of the discharge openings of the secondary channel systems of at least one air bearing device of the stage system according to the invention while simultaneously applying a sub-atmospheric pressure to the inlet of the primary channel of at least one air bearing device of the stage system according to the invention, thereby keeping the object in a position spaced apart from the object table in a direction perpendicular to the object table.
In a possible embodiment of the stage system according to the invention, the plurality of air bearing devices comprises multiple air bearing groups each comprising at least one air bearing device, wherein the pressure and/or gas flow rate at the inlet opening of the primary channel or at the inlet openings of the primary channels, respectively, in at least one air bearing group of the stage system is controllable separately from the pressure and/or gas flow rate at the inlet opening of the primary channel or primary channels, respectively, in another air bearing group in the stage system. In a possible embodiment of the method according to the invention, such an embodiment of the stage system is used.
In this embodiment of the method according to the invention, the pressure and/or gas flow rate at the inlet of the primary channel or at the inlets of the primary channels, respectively, in at least one air bearing group of the stage system is controlled separately from the pressure and/or gas flow rate at the inlet of the primary channel or primary channels, respectively, in another air bearing group of the stage system so as to control the shape of the object. In a possible embodiment of the stage system according to the invention, the plurality of air bearing devices comprises multiple air bearing groups each comprising at least one air bearing device, wherein the pressure and/or the gas flow rate at the discharge openings of the secondary channel system or secondary channel systems, respectively, in at least one air bearing group of the stage system is controllable separately from the pressure and/or gas flow rate at the discharge openings of the secondary channel system or secondary channel systems, respectively, in another air bearing group of the stage system. In a possible embodiment of the method according to the invention, such an embodiment of the stage system is used. In this embodiment of the method according to the invention, the pressure and/or the gas flow rate at the discharge openings of the secondary channel system or secondary channel systems, respectively, in at least one air bearing group of the stage system is controlled separately from the pressure and/or gas flow rate at the discharge openings of the secondary channel system or secondary channel systems, respectively, in another air bearing group of the stage system so as to control the shape of the object.
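The push-pull principle underlying these embodiments (pressurized gas from the secondary channel systems pushes the object away, while the sub-atmospheric pressure at the primary channel pulls it back) can be illustrated with a minimal one-dimensional lumped model. This is a hedged sketch with made-up parameter values, not the actual bearing physics of the patent; it only shows that a push force that decays with the gap, opposed by a constant pull force and the object's weight, yields a stable equilibrium fly height.

```python
# Minimal 1-D lumped model (illustration only, assumed numbers) of the
# push-pull equilibrium of one air bearing device.

def net_force(h_um, push0_n=10.0, pull_n=4.0, weight_n=1.0, h0_um=5.0):
    """Net upward force on the object at gap h (made-up lumped model)."""
    push = push0_n / (1.0 + h_um / h0_um)  # push from secondary channels decays with gap
    return push - pull_n - weight_n        # pull from primary channel and weight oppose it

def equilibrium_gap(lo=0.0, hi=100.0):
    """Bisection for net_force(h) == 0; stable because the force falls with h."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if net_force(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h_eq = equilibrium_gap()
# With these example numbers the object floats at h_eq == 5.0 um:
# push drops to 5 N at a 5 um gap, exactly balancing pull (4 N) + weight (1 N).
```

Because the net force decreases monotonically with the gap, any perturbation of the object away from the equilibrium height produces a restoring force, which is why the combination of pushing and pulling gives a stiff, contactless support.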
In a possible embodiment of the method according to the invention, the method further comprises changing the shape of the object which is to be arranged on the object table by controlling the pressure and/or gas flow rate at the inlet opening of a primary channel and/or the pressure and/or gas flow rate at the discharge openings of the secondary channel system of at least one air bearing device. Optionally, this controlling of the pressure and/or gas flow rate at the inlet opening of a primary channel and/or of the pressure and/or gas flow rate at the discharge openings of the secondary channel system is based on an expected and/or measured shape of the object. In another embodiment of the invention, there is provided a system for positioning an object, which system comprises:
- a member, which member is provided with a plurality of air bearing devices,
wherein each air bearing device comprises:
- an air bearing body, which has a free surface,
- a primary channel which extends through the air bearing body and has an inlet opening in the free surface,
- a secondary channel system which extends through the air bearing body and which has a plurality of discharge openings in the free surface,
wherein the flow resistance in the secondary channel system is higher than the flow resistance in the primary channel, and wherein the plurality of air bearing devices is adapted to change the shape of an object which is to be arranged on the object table by controlling the pressure and/or gas flow rate at the inlet opening of a primary channel and/or the pressure and/or gas flow rate at the discharge openings of the secondary channel system of at least one air bearing device. Optionally, in this embodiment, the control of the pressure and/or gas flow rate at the inlet of a primary channel and/or the pressure and/or gas flow rate at the discharge openings of a secondary channel system of at least one air bearing device is based on the expected and/or measured shape of the object.
Optionally, in this embodiment, the system further comprises a measurement system which is adapted to obtain shape data of an object arranged on the object table during activation of the air bearing devices, and further comprises a control device for controlling the pressure and/or gas flow rate at the inlet of a primary channel and/or the pressure and/or gas flow rate at the discharge openings of a secondary channel system of at least one air bearing device. The control device is adapted to receive the shape data from the measurement system and to control the pressure and/or gas flow rate at the inlet opening of the primary channel and/or the pressure and/or gas flow rate at the discharge opening of the secondary channel system of the at least one air bearing device based on the received shape data. In another embodiment of the invention, there is provided a method for positioning an object, which method comprises the following steps:
- arranging an object on or above the surface of a member of a system, which member is provided with a plurality of air bearing devices, wherein each air bearing device comprises:
- an air bearing body, which has a free surface,
- a primary channel which extends through the air bearing body and has an inlet opening in the free surface,
- a secondary channel system which extends through the air bearing body and which has a plurality of discharge openings in the free surface,
wherein the flow resistance in the secondary channel system is higher than the flow resistance in the primary channel,
- making pressurized gas flow out of the discharge openings of the secondary channel systems of at least one air bearing device while simultaneously applying a sub-atmospheric pressure to the inlet of the primary channel of at least one air bearing device, thereby keeping the object in a position spaced apart from the member in a direction perpendicular to the surface of the member,
wherein the method further comprises changing the shape of the object which is to
be arranged on the object table by controlling the pressure and/or gas flow rate at the inlet opening of a primary channel and/or the pressure and/or gas flow rate at the discharge openings of the secondary channel system of at least one air bearing device. Optionally, this controlling of the pressure and/or gas flow rate at the inlet opening of a primary channel and/or of the pressure and/or gas flow rate at the discharge openings of the secondary channel system is based on an expected and/or measured shape of the object. In an embodiment, there is provided a stage system for positioning an object, the stage system comprising an object table adapted to support the object to be positioned, the object table having a plurality of air bearing devices, wherein each air bearing device comprises: an air bearing body having a free surface, a primary channel which extends through the air bearing body and has an inlet opening in the free surface, and a secondary channel system which extends through the air bearing body and which has a discharge opening in the free surface, wherein the flow resistance in the secondary channel system is higher than the flow resistance in the primary channel. In an embodiment, the air bearing body comprises a porous material comprising interconnected cavities, and the secondary channel system is formed by the interconnected cavities. In an embodiment, the stage system comprises a plurality of discharge openings, wherein the plurality of discharge openings surround the inlet opening of the primary channel. 
In an embodiment, the plurality of air bearing devices comprises multiple air bearing groups each group comprising at least one air bearing device, wherein the pressure and/or gas flow rate at the one or more inlet openings of the one or more primary channels in at least one air bearing group is controllable separately from the pressure and/or gas flow rate at the one or more inlet openings of the one or more primary channels in another air bearing group. In an embodiment, the plurality of air bearing devices comprises multiple air bearing groups each group comprising at least one air bearing device, wherein the pressure and/or the gas flow rate at the one or more discharge openings of the one or more secondary channel systems in at least one air bearing group is controllable separately from the pressure and/or gas flow rate at the one or more discharge openings of the one or more secondary channel systems in another air bearing group. In an embodiment, the plurality of air bearing devices is adapted to change a shape of an object which is to be arranged on the object table by controlling the pressure and/or gas flow rate at the inlet opening of a primary channel and/or the pressure and/or gas flow rate at the discharge opening of the secondary channel system of at least one air bearing device. In an embodiment, the control of the pressure and/or gas flow rate at the inlet of a primary channel and/or the pressure and/or gas flow rate at the discharge opening of a secondary channel system of at least one air bearing device is based on an expected and/or measured shape of the object.
In an embodiment, the stage system further comprises a measurement system which is adapted to obtain shape data of the object when arranged on the object table during activation of the air bearing devices, and a control device for controlling the pressure and/or gas flow rate at the inlet of a primary channel and/or the pressure and/or gas flow rate at the discharge opening of a secondary channel system of at least one air bearing device, wherein the control device is adapted to receive the shape data from the measurement system and to control the pressure and/or gas flow rate at the inlet opening of the primary channel and/or the pressure and/or gas flow rate at the discharge opening of the secondary channel system of the at least one air bearing device based on the received shape data. In an embodiment, the object table comprises a porous zone, and at least one air bearing body forms part of the porous zone of the object table. In an embodiment, there is provided a lithographic apparatus arranged to transfer a pattern from a patterning device onto a substrate, wherein the lithographic apparatus comprises a stage system as described herein. 
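The closed loop formed by the measurement system and the control device described above can be sketched as a simple proportional update of per-group secondary-channel pressures based on measured shape data. The gain, nominal pressure and actuator limits below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of shape-based pressure control: the measurement system
# supplies per-group height errors, and the control device raises or
# lowers each group's secondary-channel pressure proportionally.
# Gain, nominal pressure and clipping limits are made-up examples.

def update_pressures(shape_error_um,
                     nominal_pa=120_000,
                     gain_pa_per_um=2_000,
                     p_min=100_000,
                     p_max=160_000):
    """Positive error (group region too low) -> raise pressure to push harder."""
    setpoints = {}
    for group, err in shape_error_um.items():
        p = nominal_pa + gain_pa_per_um * err
        setpoints[group] = min(max(p, p_min), p_max)  # respect actuator limits
    return setpoints

# Example shape data: centre is 3 um too low, outer ring 1 um too high.
setpoints = update_pressures({"centre": 3.0, "outer_ring": -1.0})
print(setpoints)  # {'centre': 126000.0, 'outer_ring': 118000.0}
```

In a real system this update would run repeatedly while the measurement system keeps supplying fresh shape data, so the object converges to the desired (for example flat) shape.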
In an embodiment, there is provided a lithographic apparatus comprising: an illumination system configured to condition a radiation beam; a support constructed to support a patterning device, the patterning device being capable of imparting the radiation beam with a pattern in its cross-section to form a patterned radiation beam; a substrate table constructed to hold a substrate; and a projection system configured to project the patterned radiation beam onto a target portion of the substrate, wherein the substrate table has a plurality of air bearing devices, each air bearing device comprising: an air bearing body having a free surface, a primary channel which extends through the air bearing body and has an inlet opening in the free surface, and a secondary channel system which extends through the air bearing body and which has a discharge opening in the free surface, wherein the flow resistance in the secondary channel system is higher than the flow resistance in the primary channel. In an embodiment, there is provided a device manufacturing method comprising transferring a pattern from a patterning device onto a substrate, wherein use is made of a stage system as described herein. In an embodiment, there is provided a method for positioning an object, the method comprising: arranging an object on or above the object table of a stage system as described herein; and making pressurized gas flow out of the discharge opening of the secondary channel system of at least one air bearing device of the stage system while simultaneously applying a sub-atmospheric pressure to the inlet of the primary channel of at least one air bearing device of the stage system thereby keeping the object in a position spaced apart from the object table in a direction perpendicular to the object table. 
Although specific reference may be made in this text to the use of lithographic apparatus in the manufacture of ICs, it should be understood that the lithographic apparatus described herein may have other applications, such as the manufacture of integrated optical systems, guidance and detection patterns for magnetic domain memories, flat-panel displays, liquid-crystal displays (LCDs), thin film magnetic heads, etc. The skilled artisan will appreciate that, in the context of such alternative applications, any use of the terms “wafer” or “die” herein may be considered as synonymous with the more general terms “substrate” or “target portion”, respectively. The substrate referred to herein may be processed, before or after exposure, in for example a track (a tool that typically applies a layer of resist to a substrate and develops the exposed resist), a metrology tool and/or an inspection tool. Where applicable, the disclosure herein may be applied to such and other substrate processing tools. Further, the substrate may be processed more than once, for example in order to create a multi-layer IC, so that the term substrate used herein may also refer to a substrate that already contains multiple processed layers. Although specific reference may have been made above to the use of embodiments of the invention in the context of optical lithography, it will be appreciated that the invention may be used in other applications, for example imprint lithography, and where the context allows, is not limited to optical lithography. In imprint lithography a topography in a patterning device defines the pattern created on a substrate. The topography of the patterning device may be pressed into a layer of resist supplied to the substrate whereupon the resist is cured by applying electromagnetic radiation, heat, pressure or a combination thereof. The patterning device is moved out of the resist leaving a pattern in it after the resist is cured. 
While specific embodiments of the invention have been described above, it will be appreciated that the invention may be practiced otherwise than as described. For example, the invention may take the form of a computer program containing one or more sequences of machine-readable instructions describing a method as disclosed above, or a data storage medium (e.g. semiconductor memory, magnetic or optical disk) having such a computer program stored therein. The descriptions above are intended to be illustrative, not limiting. Thus, it will be apparent to one skilled in the art that modifications may be made to the invention as described without departing from the scope of the claims set out below.
11860553

DETAILED DESCRIPTION

In the present document, the terms “radiation” and “beam” are used to encompass all types of electromagnetic radiation, including ultraviolet radiation (e.g. with a wavelength of 365, 248, 193, 157 or 126 nm) and EUV (extreme ultra-violet radiation, e.g. having a wavelength in the range of about 5-100 nm). The term “reticle”, “mask” or “patterning device” as employed in this text may be broadly interpreted as referring to a generic patterning device that can be used to endow an incoming radiation beam with a patterned cross-section, corresponding to a pattern that is to be created in a target portion of the substrate. The term “light valve” can also be used in this context. Besides the classic mask (transmissive or reflective, binary, phase-shifting, hybrid, etc.), examples of other such patterning devices include a programmable mirror array and a programmable LCD array. FIG. 1 schematically depicts a lithographic apparatus LA. The lithographic apparatus LA includes an illumination system (also referred to as illuminator) IL configured to condition a radiation beam B (e.g., UV radiation, DUV radiation or EUV radiation), a mask support (e.g., a mask table) MT constructed to support a patterning device (e.g., a mask) MA and connected to a first positioner PM configured to accurately position the patterning device MA in accordance with certain parameters, a substrate support (e.g., a wafer table) WT constructed to hold a substrate (e.g., a resist coated wafer) W and connected to a second positioner PW configured to accurately position the substrate support in accordance with certain parameters, and a projection system (e.g., a refractive projection lens system) PS configured to project a pattern imparted to the radiation beam B by patterning device MA onto a target portion C (e.g., comprising one or more dies) of the substrate W.
The assembly according to the invention—which will be elucidated further below—may be used in the first positioner PM and/or mask support MT. Further, in addition or alternatively the assembly according to the invention—which will be elucidated further below—may be used in the second positioner PW and/or substrate support WT. In operation, the illumination system IL receives a radiation beam from a radiation source SO, e.g. via a beam delivery system BD. The illumination system IL may include various types of optical components, such as refractive, reflective, magnetic, electromagnetic, electrostatic, and/or other types of optical components, or any combination thereof, for directing, shaping, and/or controlling radiation. The illuminator IL may be used to condition the radiation beam B to have a desired spatial and angular intensity distribution in its cross section at a plane of the patterning device MA. The term “projection system” PS used herein should be broadly interpreted as encompassing various types of projection systems, including refractive, reflective, catadioptric, anamorphic, magnetic, electromagnetic and/or electrostatic optical systems, or any combination thereof, as appropriate for the exposure radiation being used, and/or for other factors such as the use of an immersion liquid or the use of a vacuum. Any use of the term “projection lens” herein may be considered as synonymous with the more general term “projection system” PS. The lithographic apparatus LA may be of a type wherein at least a portion of the substrate may be covered by a liquid having a relatively high refractive index, e.g., water, so as to fill a space between the projection system PS and the substrate W—which is also referred to as immersion lithography. More information on immersion techniques is given in U.S. Pat. No. 6,952,253, which is incorporated herein by reference. 
The lithographic apparatus LA may also be of a type having two or more substrate supports WT (also named “dual stage”). In such a “multiple stage” machine, the substrate supports WT may be used in parallel, and/or steps in preparation of a subsequent exposure of the substrate W may be carried out on the substrate W located on one of the substrate supports WT while another substrate W on the other substrate support WT is being used for exposing a pattern on that other substrate W. Also in case of two or more substrate supports WT, the assembly according to the invention—which will be elucidated further below—may be used in the second positioner PW and/or the two or more substrate supports. In addition to the substrate support WT, the lithographic apparatus LA may comprise a measurement stage. The measurement stage is arranged to hold a sensor and/or a cleaning device. The sensor may be arranged to measure a property of the projection system PS or a property of the radiation beam B. The measurement stage may hold multiple sensors. The cleaning device may be arranged to clean part of the lithographic apparatus, for example a part of the projection system PS or a part of a system that provides the immersion liquid. The measurement stage may move beneath the projection system PS when the substrate support WT is away from the projection system PS. In operation, the radiation beam B is incident on the patterning device, e.g. mask, MA which is held on the mask support MT, and is patterned by the pattern (design layout) present on patterning device MA. Having traversed the patterning device MA, the radiation beam B passes through the projection system PS, which focuses the beam onto a target portion C of the substrate W. With the aid of the second positioner PW and a position measurement system PMS, the substrate support WT can be moved accurately, e.g., so as to position different target portions C in the path of the radiation beam B at a focused and aligned position.
Similarly, the first positioner PM and possibly another position sensor (which is not explicitly depicted in FIG. 1) may be used to accurately position the patterning device MA with respect to the path of the radiation beam B. Patterning device MA and substrate W may be aligned using mask alignment marks M1, M2 and substrate alignment marks P1, P2. Although the substrate alignment marks P1, P2 as illustrated occupy dedicated target portions, they may be located in spaces between target portions. Substrate alignment marks P1, P2 are known as scribe-lane alignment marks when these are located between the target portions C. To clarify the invention, a Cartesian coordinate system is used. The Cartesian coordinate system has three axes, i.e., an x-axis, a y-axis and a z-axis. Each of the three axes is orthogonal to the other two axes. A rotation around the x-axis is referred to as an Rx-rotation. A rotation around the y-axis is referred to as an Ry-rotation. A rotation around the z-axis is referred to as an Rz-rotation. The x-axis and the y-axis define a horizontal plane, whereas the z-axis is in a vertical direction. The Cartesian coordinate system does not limit the invention and is used for clarification only. Instead, another coordinate system, such as a cylindrical coordinate system, may be used to clarify the invention. The orientation of the Cartesian coordinate system may be different, for example, such that the z-axis has a component along the horizontal plane. FIG. 2 shows a more detailed view of a part of the lithographic apparatus LA of FIG. 1. The lithographic apparatus LA may be provided with a base frame BF, a balance mass BM, a metrology frame MF and a vibration isolation system IS. The metrology frame MF supports the projection system PS. Additionally, the metrology frame MF may support a part of the position measurement system PMS. The metrology frame MF is supported by the base frame BF via the vibration isolation system IS.
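As an aside, the Rx-, Ry- and Rz-rotations defined above are ordinary rotations about the coordinate axes. A minimal sketch of an Rz-rotation as a 3x3 matrix (the Python code and names here are illustrative, not part of the patent):

```python
# Rz-rotation: rotation around the z-axis by angle a (radians), as the
# standard 3x3 rotation matrix. Rx and Ry are analogous about the other axes.

import math

def Rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

# A 90-degree Rz-rotation maps a vector along the x-axis onto the y-axis:
v = [1.0, 0.0, 0.0]
M = Rz(math.pi / 2)
rotated = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
# rotated is approximately [0, 1, 0]
```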
The vibration isolation system IS is arranged to prevent or reduce vibrations from propagating from the base frame BF to the metrology frame MF. The second positioner PW is arranged to accelerate the substrate support WT by providing a driving force between the substrate support WT and the balance mass BM. The driving force accelerates the substrate support WT in a desired direction. Due to the conservation of momentum, the driving force is also applied to the balance mass BM with equal magnitude, but at a direction opposite to the desired direction. Typically, the mass of the balance mass BM is significantly larger than the masses of the moving part of the second positioner PW and the substrate support WT. In an embodiment, the second positioner PW is supported by the balance mass BM. For example, wherein the second positioner PW comprises a part of a planar motor system to levitate or accelerate the substrate support WT above and/or relative to the balance mass BM. This planar motor system may be a magnetic levitation and/or acceleration motor system provided with an assembly according to the invention—which will be elucidated further below. In another embodiment, the second positioner PW is supported by the base frame BF. For example, wherein the second positioner PW comprises a stator of a linear motor and wherein the second positioner PW comprises a bearing, like a gas bearing, to levitate the substrate support WT above the base frame BF. According to another example, the second positioner may comprise a linear motor system provided with an assembly according to the invention—which will be elucidated further below. The position measurement system PMS may comprise any type of sensor that is suitable to determine a position of the substrate support WT. The position measurement system PMS may comprise any type of sensor that is suitable to determine a position of the mask support MT. The sensor may be an optical sensor such as an interferometer or an encoder.
The position measurement system PMS may comprise a combined system of an interferometer and an encoder. The sensor may be another type of sensor, such as a magnetic sensor, a capacitive sensor or an inductive sensor. The position measurement system PMS may determine the position relative to a reference, for example the metrology frame MF or the projection system PS. The position measurement system PMS may determine the position of the substrate table WT and/or the mask support MT by measuring the position or by measuring a time derivative of the position, such as velocity or acceleration. The position measurement system PMS may comprise an encoder system. An encoder system is known from, for example, United States patent application US2007/0058173A1, filed on Sep. 7, 2006, hereby incorporated by reference. The encoder system comprises an encoder head, a grating and a sensor. The encoder system may receive a primary radiation beam and a secondary radiation beam. Both the primary radiation beam and the secondary radiation beam originate from the same radiation beam, i.e., the original radiation beam. At least one of the primary radiation beam and the secondary radiation beam is created by diffracting the original radiation beam with the grating. If both the primary radiation beam and the secondary radiation beam are created by diffracting the original radiation beam with the grating, the primary radiation beam needs to have a different diffraction order than the secondary radiation beam. Different diffraction orders are, for example, +1st order, −1st order, +2nd order and −2nd order. The encoder system optically combines the primary radiation beam and the secondary radiation beam into a combined radiation beam. A sensor in the encoder head determines a phase or phase difference of the combined radiation beam. The sensor generates a signal based on the phase or phase difference. The signal is representative of a position of the encoder head relative to the grating.
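The phase-to-position relation for such an encoder can be made concrete with the textbook case in which the +1st and −1st diffraction orders are combined: a grating displacement x shifts the two orders in phase by ±2πx/p, so the phase of the combined beam changes by 4πx/p for a grating pitch p. This is a hedged sketch of that standard relation; real encoder heads differ in optical detail.

```python
# Illustrative phase-to-displacement conversion for an encoder combining
# the +1st and -1st diffraction orders of a grating with pitch p:
# phi = 4*pi*x/p, hence x = phi * p / (4*pi).

import math

def displacement_from_phase(phi_rad, pitch_um):
    """Grating displacement recovered from the combined-beam phase."""
    return phi_rad * pitch_um / (4.0 * math.pi)

# One full 2*pi phase cycle corresponds to half a grating pitch:
disp = displacement_from_phase(2.0 * math.pi, pitch_um=1.0)
print(disp)  # 0.5 (um)
```

The factor of two gain over a single diffracted beam (half a pitch per fringe instead of a full pitch) is one reason two opposite orders are combined.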
One of the encoder head and the grating may be arranged on the substrate structure WT. The other of the encoder head and the grating may be arranged on the metrology frame MF or the base frame BF. For example, a plurality of encoder heads are arranged on the metrology frame MF, whereas a grating is arranged on a top surface of the substrate support WT. In another example, a grating is arranged on a bottom surface of the substrate support WT, and an encoder head is arranged below the substrate support WT. The position measurement system PMS may comprise an interferometer system. An interferometer system is known from, for example, U.S. Pat. No. 6,020,964, filed on Jul. 13, 1998, hereby incorporated by reference. The interferometer system may comprise a beam splitter, a mirror, a reference mirror and a sensor. A beam of radiation is split by the beam splitter into a reference beam and a measurement beam. The measurement beam propagates to the mirror and is reflected by the mirror back to the beam splitter. The reference beam propagates to the reference mirror and is reflected by the reference mirror back to the beam splitter. At the beam splitter, the measurement beam and the reference beam are combined into a combined radiation beam. The combined radiation beam is incident on the sensor. The sensor determines a phase or a frequency of the combined radiation beam. The sensor generates a signal based on the phase or the frequency. The signal is representative of a displacement of the mirror. In an embodiment, the mirror is connected to the substrate support WT. The reference mirror may be connected to the metrology frame MF. In an embodiment, the measurement beam and the reference beam are combined into a combined radiation beam by an additional optical component instead of the beam splitter. The first positioner PM may comprise a long-stroke module and a short-stroke module. 
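For the Michelson-type interferometer described above, a mirror displacement d changes the optical path of the measurement beam by 2d (it travels to the mirror and back), so the phase of the combined beam changes by 4πd/λ. The sketch below shows this single-pass relation; it is an illustration, and many stage interferometers are in fact double-pass, which doubles the sensitivity again.

```python
# Illustrative phase-to-displacement conversion for a single-pass
# Michelson-type interferometer: phi = 4*pi*d/lambda, so
# d = phi * lambda / (4*pi). The 633 nm wavelength is an assumed
# (helium-neon) example, not a value from the patent.

import math

def mirror_displacement(phi_rad, wavelength_nm=633.0):
    """Mirror displacement in nm from the measured phase (path = 2 * displacement)."""
    return phi_rad * wavelength_nm / (4.0 * math.pi)

# One full 2*pi fringe corresponds to half a wavelength of mirror motion:
d = mirror_displacement(2.0 * math.pi)
# d is approximately 316.5 nm for the assumed 633 nm wavelength
```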
The short-stroke module is arranged to move the mask support MT relative to the long-stroke module with a high accuracy over a small range of movement. The long-stroke module is arranged to move the short-stroke module relative to the projection system PS with a relatively low accuracy over a large range of movement. With the combination of the long-stroke module and the short-stroke module, the first positioner PM is able to move the mask support MT relative to the projection system PS with a high accuracy over a large range of movement. Similarly, the second positioner PW may comprise a long-stroke module and a short-stroke module. The short-stroke module is arranged to move the substrate support WT relative to the long-stroke module with a high accuracy over a small range of movement. The long-stroke module is arranged to move the short-stroke module relative to the projection system PS with a relatively low accuracy over a large range of movement. With the combination of the long-stroke module and the short-stroke module, the second positioner PW is able to move the substrate support WT relative to the projection system PS with a high accuracy over a large range of movement. The first positioner PM and the second positioner PW are each provided with an actuator to move the mask support MT and the substrate support WT, respectively. The actuator of the first positioner and/or second positioner may be a linear actuator to provide a driving force along a single axis, for example the y-axis. This linear actuator may be provided with an assembly according to the invention. Multiple linear actuators, which may be provided with an assembly according to the invention, may be applied to provide driving forces along multiple axes. The actuator may be a planar actuator to provide a driving force along multiple axes. This planar actuator may be provided with an assembly according to the invention.
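The division of labour between the long-stroke and short-stroke modules described above can be sketched as coarse-fine positioning: the long-stroke module reaches the setpoint only coarsely (modelled here as quantisation), and the short-stroke module corrects the small residual with high accuracy. The resolution and range figures are illustrative assumptions, not values from the patent.

```python
# Coarse-fine positioning sketch: the long stroke covers a large range at
# coarse resolution, the short stroke corrects the residual over a small
# range. Resolution (0.1 mm) and fine range (0.2 mm) are made-up examples.

def position_stage(setpoint_mm, long_res_mm=0.1, short_range_mm=0.2):
    """Split a setpoint into a coarse long-stroke move and a fine correction."""
    coarse = round(setpoint_mm / long_res_mm) * long_res_mm
    residual = setpoint_mm - coarse
    assert abs(residual) <= short_range_mm  # must fit the short-stroke range
    return coarse, residual

coarse, fine = position_stage(123.456789)
# coarse is approximately 123.5 mm; fine is approximately -0.0432 mm
```

Because the coarse error is bounded by the quantisation step, the short-stroke range only has to cover that small residual, which is what allows it to be optimised for accuracy rather than travel.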
For example, the planar actuator may be arranged to move the substrate support WT in 6 degrees of freedom. The actuator may be an electro-magnetic actuator comprising at least one coil and at least one magnet. The actuator is arranged to move the at least one coil relative to the at least one magnet by applying an electrical current to the at least one coil. The actuator may be a moving-magnet type actuator, which has the at least one magnet coupled to the substrate support WT or to the mask support MT, respectively. The actuator may be a moving-coil type actuator which has the at least one coil coupled to the substrate support WT or to the mask support MT, respectively. The actuator may be a voice-coil actuator, a reluctance actuator, a Lorentz-actuator or a piezo-actuator, or any other suitable actuator. The lithographic apparatus LA comprises a position control system PCS as schematically depicted in FIG. 3. The position control system PCS comprises a setpoint generator SP, a feedforward controller FF and a feedback controller FB. The position control system PCS provides a drive signal to the actuator ACT. The actuator ACT may be the actuator of the first positioner PM or the second positioner PW. The actuator ACT drives the plant P, which may comprise the substrate support WT or the mask support MT. An output of the plant P is a position quantity such as position or velocity or acceleration. The position quantity is measured with the position measurement system PMS. The position measurement system PMS generates a signal, which is a position signal representative of the position quantity of the plant P. The setpoint generator SP generates a signal, which is a reference signal representative of a desired position quantity of the plant P. For example, the reference signal represents a desired trajectory of the substrate support WT. A difference between the reference signal and the position signal forms an input for the feedback controller FB.
Based on the input, the feedback controller FB provides at least part of the drive signal for the actuator ACT. The reference signal may form an input for the feedforward controller FF. Based on the input, the feedforward controller FF provides at least part of the drive signal for the actuator ACT. The feedforward controller FF may make use of information about dynamical characteristics of the plant P, such as mass, stiffness, resonance modes and eigenfrequencies. FIG. 4 shows, schematically, a side cross-section of a magnetic levitation and/or acceleration motor system 1 according to the invention, which system has a flat coil layer 3 of electromagnets 2 and is provided with an assembly 6, 7, 8, 9 according to the invention. Because the coil layer 3 is flat, this magnetic levitation and/or acceleration motor system 1 is a linear motor system or planar motor system. The coil layer 3 is flat because the electromagnets 2 are arranged in a plane. In the example of FIG. 4, it is a planar motor system. In the example shown, this planar motor system 1 is used in the wafer stage of a lithographic apparatus. For this purpose, the substrate W, substrate support WT and second positioner PW are indicated with the same references as used in FIGS. 1-3. In addition or alternatively, the magnetic levitation and/or acceleration motor system 1 according to the invention may also be used as a planar or linear motor system in the mask stage of a lithographic apparatus, in which case the substrate W may be the patterning device MA (e.g. a mask), the substrate support WT may be the mask support MT and the second positioner PW may be the first positioner PM. FIG. 5 shows a detail of FIG. 4; this detail is shown in FIG. 4 in cross-section according to the arrows IV indicated in FIG. 5. In the embodiments of FIGS. 4 and 5, the assembly according to the invention is provided in the stator part of the planar motor system 1.
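The setpoint/feedforward/feedback structure of the position control system PCS described above can be sketched as a small discrete-time simulation. Everything numeric here is an illustrative assumption (a 20 kg point-mass plant, PD feedback gains, a mass-times-acceleration feedforward); the patent does not specify a controller implementation.

```python
# Hedged sketch of the PCS loop: setpoint generator (ref), feedforward FF
# (mass * reference acceleration, using the plant's dynamical characteristics)
# and feedback FB (a simple PD controller acting on the position error).
mass = 20.0            # kg, assumed mover mass
kp, kd = 4.0e5, 4.0e3  # assumed PD feedback gains
dt = 1e-4              # s, sample time

def simulate(ref, use_ff=True):
    """Track the reference trajectory (list of positions); return final error."""
    x, v = 0.0, 0.0
    prev_e = 0.0
    for k in range(1, len(ref) - 1):
        # Feedforward: F = m * a_ref, from the second difference of the setpoint.
        a_ref = (ref[k + 1] - 2 * ref[k] + ref[k - 1]) / dt**2
        f_ff = mass * a_ref if use_ff else 0.0
        e = ref[k] - x                       # reference signal minus position signal
        f_fb = kp * e + kd * (e - prev_e) / dt
        prev_e = e
        a = (f_ff + f_fb) / mass             # plant P: point mass
        v += a * dt                          # semi-implicit Euler integration
        x += v * dt
    return abs(ref[-2] - x)

# Constant-acceleration setpoint trajectory (1 m/s^2 for 0.2 s).
ref = [0.5 * 1.0 * (k * dt)**2 for k in range(2000)]
err_ff = simulate(ref, use_ff=True)   # feedforward + feedback
err_fb = simulate(ref, use_ff=False)  # feedback only
```

With feedback alone, a constant-acceleration setpoint leaves a steady tracking lag of roughly m·a/kp; adding the feedforward term removes most of it, which is the point of combining FF and FB in the drive signal.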
The mover part of this planar motor system, in this example indicated as WT, is at its underside provided with a flat layer of conducting coils 21, 22 in which the current can be manipulated by means of a controller in order to control the position of the mover part WT with respect to the stator part PW; 8, 3, 9. The assembly according to the invention comprises a cryostat 6, 7, 8, 9 configured to cryocool the superconducting electromagnets 2 in the coil layer 3 at a cryogenic temperature, for example a temperature below 30 K as indicated in FIG. 4. Taking into account that materials are presently available which are superconducting in the range of 70 K to 150 K, this temperature may also be higher than 30 K. In the embodiments of FIGS. 4 and 5, the coil layer 3 is a layer of superconducting electromagnets 2. The coil layer is flat and has opposing layer faces 4, 5. In FIG. 4, layer face 4 is the lower face of the coil layer 3 and layer face 5 is the upper face of the coil layer 3. The cryostat comprises a cryocooler system 6, a vacuum system 7, and two insulation coverings 8, 9. The coil layer 3 and the two insulation coverings 8, 9 are arranged in a sandwich manner, with the coil layer 3 arranged between the two insulation coverings 8, 9 such that each of the two opposing layer faces 4, 5 is covered by one of the insulation coverings 8, 9. One or both of the insulation coverings 8, 9 may each comprise an inner plate 10 and an outer plate 11 parallel to the inner plate 10. The inner plate 10 is closest to the coil layer 3 and arranged between the outer plate 11 and the coil layer 3. One or both of the insulation coverings 8, 9 may further comprise an insulation system 12 arranged between the inner plate 10 and the outer plate 11. The vacuum system is configured to provide a vacuum (layer) 13 in the insulation system. The cryocooler system is configured for cryocooling the inner plates 10 and is for this purpose connected at 14 with the inner plate 10 of the insulation covering 9.
Because the inner plate 10 of the insulation covering 8 is thermally connected with the inner plate 10 of the insulation covering 9, the inner plate 10 of the insulation covering 8 will also be cryocooled. This thermal connection may, for example, be provided via the magnets 2 or via thermal bridges (not shown). According to the invention, the insulation system 12 of one or both said insulation coverings 8, 9 comprises, in the vacuum (layer) 13, one or more layers of at least partly circular bodies 101, each defining an at least partly circular contour 102 and a central axis extending through a center 103 of the circular contour 102 as well as perpendicular to the circular contour 102. The central axes of the circular bodies 101 of each layer of circular bodies extend perpendicular to the inner plate 10 and the outer plate 11. In the embodiment shown in FIG. 4, the at least partly circular bodies are spheres, in this case full-spheres, but these circular bodies may also be half-spheres or straight cylindrical bodies, e.g. full-cylindrical bodies or half-cylindrical bodies. The term ‘full-’ relates specifically to the outer contour of the circular bodies, not to the interior of the circular bodies. The interior of a full-circular body may, for example, be hollow, or a full-circular body may have a through bore, in order to save weight and/or to reduce the thermal conductivity of the bodies. The insulation covering 8, 9 according to the invention is furthermore configured to provide at least one layer 104, 105 of point contacts 106 between two said layers of circular bodies 101 or between a said layer of circular bodies 101 and the inner plate 10 and/or outer plate 11. This configuration is such that each possible thermally conductive path between the inner plate 10 and the outer plate 11 has to pass, at some place, through at least one point contact 106.
This results in an insulation covering having minimal thermal conductivity, due to the layer of point contacts, on the one hand, and a high load-bearing capacity, due to the at least partly circular bodies, on the other hand. In the example of FIG. 4, there are two layers 104 and 105 of point contacts 106: one layer 105 at the side of the outer plates 11 and one layer 104 at the side of the inner plates 10. In the embodiment of FIG. 4, the circular bodies 101 may be (full-)spheres of zirconium with a diameter of 7 mm. These spheres lie between the inner plate 10 and the outer plate 11. In a linear motor, the vertical distance D between the layer 3 of electromagnets 2 and the conducting coils 21, 22 is limited. This means that the vertical thickness (in the direction of double arrow D) of especially the insulation covering 8 is to be kept low. With an aluminum inner plate 10 and an aluminum outer plate 11 of 0.75 mm thickness each and spheres with a diameter of 7 mm, the total thickness of the insulation covering is about 9 mm. This thickness can easily be reduced by using a thinner inner plate 10 and a thinner outer plate 11 and/or by using spheres 101 of smaller diameter. In this respect, it is to be noted that zirconium spheres with diameters from 0.2 mm upwards can readily be obtained in the market at low prices. The insulation covering 9 can also be designed to be very thin, but taking into account that space is in general not a real issue on this side of the layer 3 of superconducting electromagnets 2, the spheres 101 in the insulation covering 9 can be taken larger than the spheres 101 in the insulation covering 8. The spheres in the insulation covering 9 may, for example, have diameters in the range of 15-20 mm. In order to prevent these spheres from rolling away, a spacer plate 107 may be provided. This spacer plate 107 may be provided with a pattern of through holes, each having a diameter in the range of 70-100% of the diameter of the spheres.
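The thickness figures quoted above can be checked with simple arithmetic: two 0.75 mm plates plus one layer of 7 mm spheres gives 8.5 mm, which the text rounds to "about 9 mm" (small assembly gaps are ignored in this sketch). The two-layer variant with smaller spheres and an intermediate plate below is a hypothetical illustration, not a configuration stated in the patent.

```python
# Back-of-the-envelope stack-up of an insulation covering thickness.
def covering_thickness_mm(plate_mm, sphere_mm, n_sphere_layers=1,
                          intermediate_mm=0.0):
    """Inner plate + sphere layers (+ intermediate plates) + outer plate."""
    return (2 * plate_mm
            + n_sphere_layers * sphere_mm
            + (n_sphere_layers - 1) * intermediate_mm)

# Example from the text: 0.75 mm plates, 7 mm spheres -> 8.5 mm ("about 9 mm").
single = covering_thickness_mm(plate_mm=0.75, sphere_mm=7.0)
# Hypothetical FIG. 6-style stack: two layers of assumed 3 mm spheres with an
# assumed 0.5 mm intermediate plate still stays below 9 mm.
double = covering_thickness_mm(plate_mm=0.75, sphere_mm=3.0,
                               n_sphere_layers=2, intermediate_mm=0.5)
```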
By using a spacer plate with a thermal expansion coefficient larger than the thermal expansion coefficient of the spheres, a sealing shrunk connection between the spheres 101 and the spacer plate 107 is obtained when the inner plate is cryocooled. Further, the spacer plate 107 may be kept at a temperature of, in this example, about 80 K by the cryocooler. For this purpose, the cryocooler 6 is connected at 108 with the spacer plate 107 of the insulation covering 9 (and/or 8, not shown). The spacer plate 107 then serves as a shield which improves the insulation capacity of the insulation covering 8, 9. In order to cool both spacer plates 107, the spacer plates 107 are thermally connected by a thermal connection 109. As shown in FIG. 4 by means of the indicated temperatures, the substrate support WT may be at room temperature, the coils 21 and 22 may have a temperature higher than room temperature due to heat development in the coils 21 and 22, and the frame 15 may be at room temperature as well. Further, the cryocooler cryocools both inner plates 10 at a cryogenic temperature, which will, according to this example, be below 30 K. To improve the insulation, the spacer plates 107 are cooled by the cryocooler to a temperature between the temperature of the inner plates (in this example <30 K) and the temperature of the outer plates (in this example 295 K). The temperature of the spacer plates may, for example, be 80 K. In the example of FIGS. 4 and 5, the planar motor system according to the invention has a stator part provided with a flat layer of superconducting electromagnets 2 and a mover part provided with a flat layer of normal-conducting coils 21, 22 (which are in general at room temperature or higher when, in use, the coils 21, 22 produce heat), see FIGS. 4 and 5.
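The shrink-fit reasoning above can be made concrete with rough, assumed integrated-contraction values: aluminium shrinks on the order of 0.4 % and zirconia on the order of 0.2 % between room temperature and 80 K. These are ballpark literature-style figures, not patent data; the point is only that a hole which clears the sphere when warm can grip it when cold.

```python
# Hedged shrink-fit sketch: the spacer-plate hole contracts more than the
# sphere, so the cold hole closes onto the sphere. Fractional contractions
# (dL/L, ~295 K -> ~80 K) below are rough assumptions for illustration.
contraction = {
    "aluminium": 0.004,  # assumed ~0.4 % (spacer plate, larger CTE)
    "zirconia":  0.002,  # assumed ~0.2 % (sphere, smaller CTE)
}

def grips_when_cold(hole_mm, sphere_mm):
    """True if the cooled hole diameter is at or below the cooled sphere diameter."""
    hole_cold = hole_mm * (1 - contraction["aluminium"])
    sphere_cold = sphere_mm * (1 - contraction["zirconia"])
    return hole_cold <= sphere_cold

# A 7.00 mm sphere in a 7.01 mm hole: loose when warm, gripped when cold.
warm_clearance_mm = 7.01 - 7.00          # > 0, sphere drops in at room temperature
cold_grip = grips_when_cold(hole_mm=7.01, sphere_mm=7.00)
```

This also shows why the through-hole diameter is specified close to the sphere diameter (70-100 %, such as 90-100 %): a much larger hole would never close onto the sphere on cooldown.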
It is noted that, additionally or alternatively, the flat layer of coils 21 may, according to the invention, be a flat layer of superconducting coils having on both sides or on one side of the flat layer an insulation covering 8, 9 according to the invention with at least partly circular bodies in a vacuum layer. In this case, the coil layer 3 may be a coil layer of permanent magnets, normal electromagnets or superconducting electromagnets. As shown in very schematic cross-section in FIGS. 6 and 7, the insulation covering 8 and/or the insulation covering 9 may also comprise multiple layers of spheres, separated by intermediate plates. FIG. 6 shows two layers 200, 210 of spheres 201 and 211, respectively. The spheres 201 in the layer 200 may be maintained in position by a spacer plate 207 and the spheres 211 in the layer 210 may be maintained in position by a spacer plate 217. The two layers 200 and 210 of spheres 201 and 211 may be separated by an intermediate layer 240. This configuration provides a total of four layers 204, 205, 214 and 215 of point contacts 206 and 216. By using spheres of smaller diameter, the thickness of the insulation covering of FIG. 6 can still be maintained at or below the 9 mm thickness mentioned in relation to FIG. 4. FIG. 7 shows four layers 300, 310, 320, 330 of spheres 301, 311, 321 and 331, respectively. The spheres 301 in the layer 300 may be maintained in position by a spacer plate 307, the spheres 311 in the layer 310 may be maintained in position by a spacer plate 317, the spheres 321 in the layer 320 may be maintained in position by a spacer plate 327, and the spheres 331 in the layer 330 may be maintained in position by a spacer plate 337. The two layers 300 and 310 of spheres 301 and 311 may be separated by an intermediate layer 340, the two layers 310 and 320 of spheres 311 and 321 may be separated by an intermediate layer 350, and the two layers 320 and 330 of spheres 321 and 331 may be separated by an intermediate layer 360.
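The counting behind these figures can be checked with a one-line rule (an observation derived from the description, not a clause of the patent): every layer of spheres contributes a layer of point contacts on each of its two sides, so n sphere layers give 2·n layers of point contacts.

```python
# Each sphere layer touches a plate or intermediate layer above and below it,
# giving one layer of point contacts per side.
def point_contact_layers(n_sphere_layers):
    return 2 * n_sphere_layers

fig4 = point_contact_layers(1)  # FIG. 4: one layer of spheres  -> 2 contact layers
fig6 = point_contact_layers(2)  # FIG. 6: two layers of spheres -> 4 contact layers
fig7 = point_contact_layers(4)  # FIG. 7: four layers of spheres -> 8 contact layers
```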
This configuration provides a total of eight layers 304, 305, 314, 315, 324, 325, 334 and 335 of point contacts 306, 316, 326 and 336. By using spheres of smaller diameter, the thickness of the insulation covering of FIG. 7 can still be maintained at or below the 9 mm thickness mentioned in relation to FIG. 4. As shown in very schematic perspective view in FIG. 8, an insulation system providing at least one layer of point contacts can also be obtained with crosswise stacked layers of straight cylindrical bodies, like the wires 401 and 411 having a circular circumference 402 and 412, respectively. At the crossings of the wires 401 and 411 there will be point contacts 406, as schematically indicated at four crossings in FIG. 8. In the construction of FIG. 8, the cylindrical bodies 401, 411 of each layer being straight and the layers being stacked crosswise ensures that each possible thermally conductive path between the inner and outer plate has to pass, at some place, through at least one point contact. The present invention can also be characterized by the following clauses:
1.
An assembly comprising a cryostat and a coil layer of superconducting coils,
wherein the coil layer is configured for use in or with a magnetic levitation and/or acceleration motor system of a lithographic apparatus, is flat and defines two opposing layer faces;
wherein the cryostat comprises two insulation coverings;
wherein the coil layer is arranged between the two insulation coverings and each of the two opposing layer faces is covered by one of the insulation coverings;
wherein each said insulation covering comprises:
an inner plate and an outer plate parallel to the inner plate, the inner plate being arranged between the outer plate and the coil layer, and
an insulation system arranged between the inner plate and the outer plate;
wherein the insulation system is configured to have a vacuum layer;
wherein the inner plates are configured to be cryocooled; and
wherein the insulation system of one or both said insulation coverings:
comprises, in the vacuum layer, one or more layers of at least partly circular bodies each defining an at least partly circular contour and a central axis extending through a center of the circular contour as well as perpendicular to the circular contour, the central axes of the bodies of each layer of circular bodies extending perpendicular to the inner and outer plate, and
is configured to provide at least one layer of point contacts between two said layers of circular bodies or between a said layer of circular bodies and the inner and/or outer plate.
2. Assembly according to clause 1, wherein at least one said layer of circular bodies is a layer of spherical bodies, such as spheres or half-spheres.
3.
An assembly comprising a cryostat and a coil layer of superconducting coils,
wherein the coil layer is configured for use in or with a magnetic levitation and/or acceleration motor system of a lithographic apparatus, is flat and defines two opposing layer faces;
wherein the cryostat comprises two insulation coverings;
wherein the coil layer is arranged between the two insulation coverings and each of the two opposing layer faces is covered by one of the insulation coverings;
wherein each said insulation covering comprises:
an inner plate and an outer plate parallel to the inner plate, the inner plate being arranged between the outer plate and the coil layer, and
an insulation system arranged between the inner plate and the outer plate;
wherein the insulation system is configured to have a vacuum layer;
wherein the inner plates are configured to be cryocooled; and
wherein the insulation system of one or both said insulation coverings comprises, in the vacuum layer, one or more layers of spherical bodies, such as spheres or half-spheres, at least one said layer of spherical bodies providing a layer of point contacts between the layer of spheres and the inner and/or outer plate.
4. Assembly according to clause 2 or 3, wherein at least one said layer of spherical bodies comprises spheres and a spacer plate provided with a pattern of circular through holes for accommodating the spheres, each through hole having a diameter configured to contact a said sphere such that the spacer plate is supported by the spheres, wherein the spacer plate is arranged parallel to the inner plate and the outer plate.
5. Assembly according to clause 4, wherein the diameter of the through holes is in the range of 70-100%, such as 90-100%, of the diameter of the spheres.
6. Assembly according to one of clauses 4-5, wherein the spacer plate is configured to be cooled at a temperature between a temperature of the inner plate and a temperature of the outer plate.
7.
Assembly according to one of clauses 4-6, wherein the thermal expansion coefficient of the spacer plate is larger than the thermal expansion coefficient of the spheres, such that when the spacer plate and spheres are cooled down a shrunk connection between the spacer plate and the spheres is obtained in the through holes.
8. Assembly according to one of clauses 4-7, wherein the spacer plate comprises aluminum or an aluminum alloy.
9. Assembly according to one of clauses 2-8, comprising multiple said layers of spherical bodies, wherein between adjacent ones of said layers of spherical bodies a separation plate is arranged which provides on each side an additional layer of point contacts between the separation plate and the spherical bodies.
10. Assembly according to clause 9, wherein the separation plate is configured to be cooled at a temperature between a temperature of the inner plate and a temperature of the outer plate.
11. Assembly according to one of clauses 1-10,
wherein said one or more layers of circular bodies comprise at least one set of two layers of straight cylindrical bodies, such as cylinders and/or half-cylinders;
wherein the cylindrical bodies of a first of said two layers are arranged parallel to each other with a spacing between adjacent cylindrical bodies and the cylindrical bodies of a second of said two layers are arranged parallel to each other with a spacing between adjacent cylindrical bodies; and
wherein the first layer and the second layer are stacked directly onto each other with the cylindrical bodies of the first layer crosswise with respect to the cylindrical bodies of the second layer to provide, between the cylindrical bodies of the first layer and the cylindrical bodies of the second layer, a said layer of point contacts.
12.
An assembly comprising a cryostat and a coil layer of superconducting coils,
wherein the coil layer is configured for use in or with a magnetic levitation and/or acceleration motor system of a lithographic apparatus, is flat and defines two opposing layer faces;
wherein the cryostat comprises two insulation coverings;
wherein the coil layer is arranged between the two insulation coverings and each of the two opposing layer faces is covered by one of the insulation coverings;
wherein each said insulation covering comprises:
an inner plate and an outer plate parallel to the inner plate, the inner plate being arranged between the outer plate and the coil layer, and
an insulation system arranged between the inner plate and the outer plate;
wherein the insulation system is configured to have a vacuum layer;
wherein the inner plates are configured to be cryocooled; and
wherein the insulation system of one or both said insulation coverings comprises, in the vacuum layer, at least one set of two layers of straight cylindrical bodies, such as cylinders and/or half-cylinders;
wherein the cylindrical bodies of a first of said two layers are arranged parallel to each other with a spacing between adjacent cylindrical bodies and the cylindrical bodies of a second of said two layers are arranged parallel to each other with a spacing between adjacent cylindrical bodies; and
wherein the first layer and the second layer are stacked directly onto each other with the cylindrical bodies of the first layer crosswise with respect to the cylindrical bodies of the second layer to provide, between the cylindrical bodies of the first layer and the cylindrical bodies of the second layer, a layer of point contacts.
13. Assembly according to one of clauses 11-12, wherein the cylindrical bodies are wires.
14.
Assembly according to one of the preceding clauses,
wherein the at least partly circular bodies have a diameter defined as twice a radius from a center of the circular contour to the circular contour; and
wherein the diameter is smaller than 7 mm, such as smaller than 5 mm.
15. Assembly according to clause 14, wherein the diameter is in the range of 0.1 to 5 mm, such as in the range of 0.5 to 4 mm.
16. Assembly according to one of the preceding clauses, wherein the point contacts, in a said layer of point contacts, are arranged with a pitch of 5-20 mm, such as a pitch of 10-15 mm.
17. Assembly according to one of the preceding clauses, wherein one of said insulation coverings has a thickness of at most 10 mm, such as at most 7-8 mm, the thickness being defined in a direction perpendicular to the inner and outer plate.
18. Assembly according to one of the preceding clauses, wherein the circular bodies are made from a material chosen from one or more of the group of: zirconia, Kevlar, Kevlar composites, Kevlar fiber composites, glass, glass composites, glass fiber composites, and titanium alloys.
19. Assembly according to one of the preceding clauses, wherein the circular bodies are made from a material having a ratio of the Young modulus with respect to the integral of the thermal conductivity coefficient over the temperature range of 4 K to 80 K which is at least 1 N/Wm, such as at least 1.5 N/Wm.
20. Assembly according to one of the preceding clauses, further comprising a cryocooler system configured for cryocooling the inner plates to a temperature lower than 30 K, such as lower than 10 K, like in the range of 0-4 K.
21. Assembly according to one of the preceding clauses, further comprising a vacuum system configured to provide in the insulation system a vacuum of 10⁻³ Pa or lower.
22. Assembly according to one of the preceding clauses, wherein the inner and/or outer plate are made from a stainless steel alloy.
23.
Assembly according to one of the preceding clauses, wherein the coil layer is configured as a stator part of the magnetic levitation and/or acceleration motor system.
24. Assembly according to one of the preceding clauses, wherein the coil layer is configured as a mover part of the magnetic levitation and/or acceleration motor system.
25. Assembly according to one of the preceding clauses,
wherein the coil layer of superconducting coils is an array of superconducting electromagnets, each electromagnet having a north-south axis extending perpendicular to the coil layer;
wherein the array of electromagnets is configured such that adjacent electromagnets have opposite polarity; and
wherein the array of electromagnets is wired for direct current operation.
26. Assembly according to one of the preceding clauses, wherein the coil layer of superconducting coils is wired for alternating current operation.
27. Assembly according to one of the preceding clauses, wherein the magnetic levitation and/or acceleration motor system is a linear motor system.
28. Assembly according to one of the preceding clauses, wherein the magnetic levitation and/or acceleration motor system is a planar motor system.
29. A lithographic apparatus comprising at least one flat magnetic levitation and/or acceleration motor system provided with an assembly according to one of the preceding clauses.
30.
A lithographic apparatus, comprising:
a mask support constructed to support a patterning device,
a first positioner configured to position the mask support with respect to the first positioner,
a substrate support constructed to hold a substrate,
a second positioner configured to position the substrate support with respect to the second positioner, and
a projection system configured to project a pattern imparted to a radiation beam by the patterning device onto a target position on the substrate;
wherein one or more of the following items:
the mask support,
the first positioner,
the substrate support, and
the second positioner,
are provided with an assembly according to one of the clauses 1-28.
GENERAL STATEMENTS
In accordance with the present invention, ‘cryogenic cooling’ or ‘cooling the coil layer to a cryogenic temperature’ refers to a process of cooling the coil layer to such a temperature that the coils exhibit a superconductive behavior and keeping the coils at such a temperature. As such, when cooled to such a temperature, the coils may be supplied with an electrical current, substantially without generating Ohmic losses. As will be appreciated by the skilled person, the required temperature or cooling may depend on the material or composition of the applied coils and/or the prevailing pressure conditions. Although specific reference may be made in this text to the use of a lithographic apparatus in the manufacture of ICs, it should be understood that the lithographic apparatus described herein may have other applications. Possible other applications include the manufacture of integrated optical systems, guidance and detection patterns for magnetic domain memories, flat-panel displays, liquid-crystal displays (LCDs), thin-film magnetic heads, etc. Although specific reference may be made in this text to embodiments of the invention in the context of a lithographic apparatus, embodiments of the invention may be used in other apparatus.
Embodiments of the invention may form part of a mask inspection apparatus, a metrology apparatus, or any apparatus that measures or processes an object such as a wafer (or other substrate) or mask (or other patterning device). These apparatus may be generally referred to as lithographic tools. Such a lithographic tool may use vacuum conditions or ambient (non-vacuum) conditions. Although specific reference may have been made above to the use of embodiments of the invention in the context of optical lithography, it will be appreciated that the invention, where the context allows, is not limited to optical lithography and may be used in other applications, for example imprint lithography. Where the context allows, embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g. carrier waves, infrared signals, digital signals, etc.), and others. Further, firmware, software, routines, instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc. and in doing that may cause actuators or other devices to interact with the physical world. 
While specific embodiments of the invention have been described above, it will be appreciated that the invention may be practiced otherwise than as described. The descriptions above are intended to be illustrative, not limiting. Thus it will be apparent to one skilled in the art that modifications may be made to the invention as described without departing from the scope of the claims set out below.
11860554
DETAILED DESCRIPTION
In the present document, the terms "radiation" and "beam" are used to encompass all types of electromagnetic radiation, including ultraviolet radiation (e.g. with a wavelength of 365, 248, 193, 157 or 126 nm) and EUV (extreme ultra-violet radiation, e.g. having a wavelength in the range of about 5-100 nm). The term "reticle", "mask" or "patterning device" as employed in this text may be broadly interpreted as referring to a generic patterning device that can be used to endow an incoming radiation beam with a patterned cross-section, corresponding to a pattern that is to be created in a target portion of the substrate. The term "light valve" can also be used in this context. Besides the classic mask (transmissive or reflective, binary, phase-shifting, hybrid, etc.), examples of other such patterning devices include a programmable mirror array and a programmable LCD array. FIG. 1 schematically depicts a lithographic apparatus LA. The lithographic apparatus LA includes an illumination system (also referred to as illuminator) IL configured to condition a radiation beam B (e.g., UV radiation, DUV radiation or EUV radiation), a mask support (e.g., a mask table) MT constructed to support a patterning device (e.g., a mask) MA and connected to a first positioner PM configured to accurately position the patterning device MA in accordance with certain parameters, a substrate support (e.g., a wafer table) WT constructed to hold a substrate (e.g., a resist-coated wafer) W and connected to a second positioner PW configured to accurately position the substrate support in accordance with certain parameters, and a projection system (e.g., a refractive projection lens system) PS configured to project a pattern imparted to the radiation beam B by patterning device MA onto a target portion C (e.g., comprising one or more dies) of the substrate W. In operation, the illumination system IL receives a radiation beam from a radiation source SO, e.g.
via a beam delivery system BD. The illumination system IL may include various types of optical components, such as refractive, reflective, magnetic, electromagnetic, electrostatic, and/or other types of optical components, or any combination thereof, for directing, shaping, and/or controlling radiation. The illuminator IL may be used to condition the radiation beam B to have a desired spatial and angular intensity distribution in its cross section at a plane of the patterning device MA. The term "projection system" PS used herein should be broadly interpreted as encompassing various types of projection system, including refractive, reflective, catadioptric, anamorphic, magnetic, electromagnetic and/or electrostatic optical systems, or any combination thereof, as appropriate for the exposure radiation being used, and/or for other factors such as the use of an immersion liquid or the use of a vacuum. Any use of the term "projection lens" herein may be considered as synonymous with the more general term "projection system" PS. The lithographic apparatus LA may be of a type wherein at least a portion of the substrate may be covered by a liquid having a relatively high refractive index, e.g., water, so as to fill a space between the projection system PS and the substrate W, which is also referred to as immersion lithography. More information on immersion techniques is given in U.S. Pat. No. 6,952,253, which is incorporated herein by reference. The lithographic apparatus LA may also be of a type having two or more substrate supports WT (also named "dual stage"). In such a "multiple stage" machine, the substrate supports WT may be used in parallel, and/or steps in preparation of a subsequent exposure of the substrate W may be carried out on the substrate W located on one of the substrate supports WT while another substrate W on the other substrate support WT is being used for exposing a pattern on that other substrate W.
In addition to the substrate support WT, the lithographic apparatus LA may comprise a measurement stage. The measurement stage is arranged to hold a sensor and/or a cleaning device. The sensor may be arranged to measure a property of the projection system PS or a property of the radiation beam B. The measurement stage may hold multiple sensors. The cleaning device may be arranged to clean part of the lithographic apparatus, for example a part of the projection system PS or a part of a system that provides the immersion liquid. The measurement stage may move beneath the projection system PS when the substrate support WT is away from the projection system PS. In operation, the radiation beam B is incident on the patterning device, e.g. mask, MA which is held on the mask support MT, and is patterned by the pattern (design layout) present on the patterning device MA. Having traversed the patterning device MA, the radiation beam B passes through the projection system PS, which focuses the beam onto a target portion C of the substrate W. With the aid of the second positioner PW and a position measurement system IF, the substrate support WT can be moved accurately, e.g., so as to position different target portions C in the path of the radiation beam B at a focused and aligned position. Similarly, the first positioner PM and possibly another position sensor (which is not explicitly depicted in FIG. 1) may be used to accurately position the patterning device MA with respect to the path of the radiation beam B. The patterning device MA and the substrate W may be aligned using mask alignment marks M1, M2 and substrate alignment marks P1, P2. Although the substrate alignment marks P1, P2 as illustrated occupy dedicated target portions, they may be located in spaces between target portions. Substrate alignment marks P1, P2 are known as scribe-lane alignment marks when they are located between the target portions C. To clarify the invention, a Cartesian coordinate system is used.
The Cartesian coordinate system has three axes, i.e., an x-axis, a y-axis and a z-axis. Each of the three axes is orthogonal to the other two axes. A rotation around the x-axis is referred to as an Rx-rotation. A rotation around the y-axis is referred to as an Ry-rotation. A rotation around the z-axis is referred to as an Rz-rotation. The x-axis and the y-axis define a horizontal plane, whereas the z-axis is in a vertical direction. The Cartesian coordinate system is not limiting the invention and is used for clarification only. Instead, another coordinate system, such as a cylindrical coordinate system, may be used to clarify the invention. The orientation of the Cartesian coordinate system may be different, for example, such that the z-axis has a component along the horizontal plane. FIG.2shows a more detailed view of a part of the lithographic apparatus LA ofFIG.1. The lithographic apparatus LA may be provided with a base frame BF, a balance mass BM, a metrology frame MF and a vibration isolation system IS. The metrology frame MF supports the projection system PS. Additionally, the metrology frame MF may support a part of the position measurement system PMS. The metrology frame MF is supported by the base frame BF via the vibration isolation system IS. The vibration isolation system IS is arranged to prevent or reduce vibrations from propagating from the base frame BF to the metrology frame MF. The second positioner PW is arranged to accelerate the substrate support WT by providing a driving force between the substrate support WT and the balance mass BM. The driving force accelerates the substrate support WT in a desired direction. Due to the conservation of momentum, the driving force is also applied to the balance mass BM with equal magnitude, but in a direction opposite to the desired direction. Typically, the mass of the balance mass BM is significantly larger than the masses of the moving part of the second positioner PW and the substrate support WT. 
In an embodiment, the second positioner PW is supported by the balance mass BM. For example, wherein the second positioner PW comprises a planar motor to levitate the substrate support WT above the balance mass BM. In another embodiment, the second positioner PW is supported by the base frame BF. For example, wherein the second positioner PW comprises a linear motor and wherein the second positioner PW comprises a bearing, like a gas bearing, to levitate the substrate support WT above the base frame BF. The position measurement system PMS may comprise any type of sensor that is suitable to determine a position of the substrate support WT. The position measurement system PMS may comprise any type of sensor that is suitable to determine a position of the mask support MT. The sensor may be an optical sensor such as an interferometer or an encoder. The position measurement system PMS may comprise a combined system of an interferometer and an encoder. The sensor may be another type of sensor, such as a magnetic sensor. a capacitive sensor or an inductive sensor. The position measurement system PMS may determine the position relative to a reference, for example the metrology frame MF or the projection system PS. The position measurement system PMS may determine the position of the substrate table WT and/or the mask support MT by measuring the position or by measuring a time derivative of the position, such as velocity or acceleration. The position measurement system PMS may comprise an encoder system. An encoder system is known from for example, United States patent application US2007/0058173A1, filed on Sep. 7, 2006, hereby incorporated by reference. The encoder system comprises an encoder head, a grating and a sensor. The encoder system may receive a primary radiation beam and a secondary radiation beam. Both the primary radiation beam as well as the secondary radiation beam originate from the same radiation beam, i.e., the original radiation beam. 
At least one of the primary radiation beam and the secondary radiation beam is created by diffracting the original radiation beam with the grating. If both the primary radiation beam and the secondary radiation beam are created by diffracting the original radiation beam with the grating, the primary radiation beam needs to have a different diffraction order than the secondary radiation beam. Different diffraction orders are, for example, +1st order, −1st order, +2nd order and −2nd order. The encoder system optically combines the primary radiation beam and the secondary radiation beam into a combined radiation beam. A sensor in the encoder head determines a phase or phase difference of the combined radiation beam. The sensor generates a signal based on the phase or phase difference. The signal is representative of a position of the encoder head relative to the grating. One of the encoder head and the grating may be arranged on the substrate support WT. The other of the encoder head and the grating may be arranged on the metrology frame MF or the base frame BF. For example, a plurality of encoder heads are arranged on the metrology frame MF, whereas a grating is arranged on a top surface of the substrate support WT. In another example, a grating is arranged on a bottom surface of the substrate support WT, and an encoder head is arranged below the substrate support WT. The position measurement system PMS may comprise an interferometer system. An interferometer system is known from, for example, U.S. Pat. No. 6,020,964, filed on Jul. 13, 1998, hereby incorporated by reference. The interferometer system may comprise a beam splitter, a mirror, a reference mirror and a sensor. A beam of radiation is split by the beam splitter into a reference beam and a measurement beam. The measurement beam propagates to the mirror and is reflected by the mirror back to the beam splitter. 
The reference beam propagates to the reference mirror and is reflected by the reference mirror back to the beam splitter. At the beam splitter, the measurement beam and the reference beam are combined into a combined radiation beam. The combined radiation beam is incident on the sensor. The sensor determines a phase or a frequency of the combined radiation beam. The sensor generates a signal based on the phase or the frequency. The signal is representative of a displacement of the mirror. In an embodiment, the mirror is connected to the substrate support WT. The reference mirror may be connected to the metrology frame MF. In an embodiment, the measurement beam and the reference beam are combined into a combined radiation beam by an additional optical component instead of the beam splitter. The first positioner PM may comprise a long-stroke module and a short-stroke module. The short-stroke module is arranged to move the mask support MT relative to the long-stroke module with a high accuracy over a small range of movement. The long-stroke module is arranged to move the short-stroke module relative to the projection system PS with a relatively low accuracy over a large range of movement. With the combination of the long-stroke module and the short-stroke module, the first positioner PM is able to move the mask support MT relative to the projection system PS with a high accuracy over a large range of movement. Similarly, the second positioner PW may comprise a long-stroke module and a short-stroke module. The short-stroke module is arranged to move the substrate support WT relative to the long-stroke module with a high accuracy over a small range of movement. The long-stroke module is arranged to move the short-stroke module relative to the projection system PS with a relatively low accuracy over a large range of movement. 
With the combination of the long-stroke module and the short-stroke module, the second positioner PW is able to move the substrate support WT relative to the projection system PS with a high accuracy over a large range of movement. The first positioner PM and the second positioner PW are each provided with an actuator to move respectively the mask support MT and the substrate support WT. The actuator may be a linear actuator to provide a driving force along a single axis, for example the y-axis. Multiple linear actuators may be applied to provide driving forces along multiple axes. The actuator may be a planar actuator to provide a driving force along multiple axes. For example, the planar actuator may be arranged to move the substrate support WT in 6 degrees of freedom. The actuator may be an electro-magnetic actuator comprising at least one coil and at least one magnet. The actuator is arranged to move the at least one coil relative to the at least one magnet by applying an electrical current to the at least one coil. The actuator may be a moving-magnet type actuator, which has the at least one magnet coupled to the substrate support WT or to the mask support MT, respectively. The actuator may be a moving-coil type actuator which has the at least one coil coupled to the substrate support WT or to the mask support MT, respectively. The actuator may be a voice-coil actuator, a reluctance actuator, a Lorentz-actuator or a piezo-actuator, or any other suitable actuator. The lithographic apparatus LA comprises a position control system PCS as schematically depicted inFIG.3. The position control system PCS comprises a setpoint generator SP, a feedforward controller FF and a feedback controller FB. The position control system PCS provides a drive signal to the actuator ACT. The actuator ACT may be the actuator of the first positioner PM or the second positioner PW. The actuator ACT drives the plant P, which may comprise the substrate support WT or the mask support MT. 
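The drive signal of the position control system PCS combines a feedforward contribution, derived from the reference trajectory and a model of the plant P, with a feedback contribution derived from the measured position. The following sketch is purely illustrative: the mass, the gains and the proportional-derivative feedback law are invented example values, not parameters of the apparatus.

```python
# Illustrative sketch of the FIG.3 control structure: the setpoint generator SP
# supplies a reference trajectory, the feedforward controller FF uses a plant
# model (here only an inertial term: mass times reference acceleration), and
# the feedback controller FB (here a simple proportional-derivative law) acts
# on the position error. All numerical values are invented for the example.

def drive_signal(ref_pos, ref_acc, meas_pos, meas_vel,
                 mass=10.0, kp=1.0e6, kd=2.0e3):
    """Return the force command for the actuator ACT (example only)."""
    ff = mass * ref_acc          # feedforward: inertial force for the trajectory
    error = ref_pos - meas_pos   # position error seen by the feedback controller
    fb = kp * error - kd * meas_vel
    return ff + fb

# With zero position error and zero velocity, only the feedforward term remains:
print(drive_signal(ref_pos=0.0, ref_acc=2.0, meas_pos=0.0, meas_vel=0.0))  # 20.0
```

In the apparatus the feedforward controller FF may additionally use information about stiffness, resonance modes and eigenfrequencies of the plant P; the sketch keeps only the inertial term.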
An output of the plant P is a position quantity such as position or velocity or acceleration. The position quantity is measured with the position measurement system PMS. The position measurement system PMS generates a signal, which is a position signal representative of the position quantity of the plant P. The setpoint generator SP generates a signal, which is a reference signal representative of a desired position quantity of the plant P. For example, the reference signal represents a desired trajectory of the substrate support WT. A difference between the reference signal and the position signal forms an input for the feedback controller FB. Based on the input, the feedback controller FB provides at least part of the drive signal for the actuator ACT. The reference signal may form an input for the feedforward controller FF. Based on the input, the feedforward controller FF provides at least part of the drive signal for the actuator ACT. The feedforward controller FF may make use of information about dynamical characteristics of the plant P, such as mass, stiffness, resonance modes and eigenfrequencies. FIG.4Aschematically shows, in top view, an object W which has an actual shape that deviates from the ideal shape. In the example ofFIG.4A, the ideal shape for the object W, which is for example a substrate, for example a wafer, is a flat disk-shape. In the actual shape, the edge of the object W is curled upwards, so the actual shape of the object W is more like a bowl-shape.FIG.4Bshows this same object W in side view. InFIGS.4A and4B, the dashed line W-ID indicates the ideal or desired contour of the object W, and the solid line W-ACT indicates the actual contour of the object W. By changing the temperature of the object W, its shape and/or size will be changed due to thermal expansion or thermal shrink. This makes it possible to correct the shape of the object W, and therewith to bring it into a shape which approaches the desired or ideal shape more closely. 
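The size of the temperature change needed for such a correction follows from linear thermal expansion, ΔL = α·L·ΔT. A minimal sketch, assuming a linear expansion coefficient roughly that of silicon; the coefficient, dimensions and target values below are example numbers, not apparatus data.

```python
# Required temperature offset to stretch a region of the object by a target
# amount, using the linear thermal expansion relation dL = alpha * L * dT.
# alpha is an example value of the order of that of silicon.

def required_delta_t(target_expansion_m, region_length_m, alpha_per_k=2.6e-6):
    """Temperature difference (K) that yields the target in-plane expansion."""
    return target_expansion_m / (alpha_per_k * region_length_m)

# Example: expanding a 100 mm outer zone by 10 nm requires only tens of mK:
dt = required_delta_t(10e-9, 0.100)
print(round(dt, 4))  # approximately 0.0385 K
```

This is why modest, well-controlled temperature differences between a portion of the object W and the support surface temperature suffice for nanometre-scale shape corrections.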
Depending on the desired shape change, the temperature of the object W can be changed locally, or the overall temperature of the object can be changed. For example, in the situation of the bowl shape as shown inFIGS.4A and4B, when the outer circumference of the object W is brought to a higher temperature than the center of the object W, the outer circumference of the object W will expand, which will result in a flattening of the bowl shape.FIG.4Aschematically shows a thermal device10, which comprises several thermal elements11. The thermal elements11can be heaters and/or coolers. They are preferably arranged in the vicinity of the object W, e.g. in a plane parallel to the plane of the object W. The thermal elements11can be arranged either stationary or movable relative to the object W. The thermal elements11can be used to obtain a temperature profile (i.e. a temperature distribution) in the object W which results in an actual shape which better approaches the ideal or desired shape. FIG.5schematically shows a first embodiment of an object positioner according to the invention. In the embodiment ofFIG.5, the object positioner comprises an object support WT having an object support surface WT-OSS which is configured to engage at least a part of an object W. The object support surface WT-OSS has a support surface temperature. The object W is for example a substrate, for example a wafer. The object support WT is for example a substrate support, wafer stage or wafer table. If the object positioner according toFIG.5is part of a lithographic apparatus, the object support surface WT-OSS engages and/or supports the object W during exposure to the projection beam. If the object positioner according toFIG.5is part of an object inspection apparatus, the object support surface WT-OSS engages and/or supports the object W during exposure to the measurement beam. 
In the example ofFIG.5, the object support WT is provided with a plurality of burls WT-B, which have a free surface which in use engages the object W. These free surfaces of the burls WT-B together form the object support surface WT-OSS. The object positioner ofFIG.5further comprises a thermal device10. The thermal device10is configured to provide at least a part of the object W with a first object temperature. The first object temperature differs from the support surface temperature by a first predetermined temperature difference. This makes it possible to correct the actual shape of the object W, i.e. to bring the actual shape of the object W closer to the desired or ideal shape of the object W. The first object temperature may be above or below the support surface temperature. By providing at least a part of the object W with a first object temperature which is different from the support surface temperature, the object W will deform due to thermal expansion and/or thermal shrink, in a uniform way or in a non-uniform way depending on the temperature profile which is provided to the object W by the thermal device10. This makes it possible to control the shape of the object and therewith to counteract existing undesired deformation of the object W due to other causes, e.g. the processing of the object W such as the repeated applying, curing and/or etching of resist layers on the object W. The first object temperature is for example a temperature of the outer surface of the object W. In the embodiment ofFIG.5, for example the entire object W is provided with the first object temperature. In this case, the object W obtains a uniform temperature. Alternatively, the object W may be provided locally with the first object temperature. In that case, the object W obtains a non-uniform temperature. 
In the embodiment ofFIG.5, for example the thermal device10is configured to provide a first portion of the object W with the first object temperature and a second portion of the object W with a second object temperature. The second object temperature differs from the support surface temperature by a second predetermined temperature difference. The second temperature difference may be the same as the first temperature difference, or the second temperature difference may be different from the first temperature difference. The first object temperature may be the same as the second object temperature, or the first object temperature may be different from the second object temperature. The second object temperature is for example a temperature of the outer surface of the object W. In this embodiment, the object W obtains a non-uniform temperature profile due to the action of the thermal device10. This makes it possible to correct the shape of the object locally. The portion or portions of the object W between the first portion and the second portion optionally have a temperature which is different from the first object temperature and from the second object temperature, e.g. a temperature equal to the support surface temperature. In the embodiment ofFIG.5, optionally the thermal device10is configured to provide at least a part of the object W with a temperature gradient between a first maximum temperature and a first minimum temperature. The first object temperature is the first maximum temperature, the first minimum temperature or a temperature between the first maximum temperature and the first minimum temperature. In the embodiment ofFIG.5, the thermal device10comprises a plurality of thermal elements11. The thermal elements11may be heating elements and/or cooling elements. Optionally, the thermal device10contains both heating elements and cooling elements. The heating elements may for example be infrared LEDs. The cooling elements may for example be Peltier elements. 
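Providing a first portion and a second portion of the object W with different temperatures amounts to mapping each location on the object to a temperature setpoint for the thermal elements 11 facing it. The sketch below illustrates such a mapping with invented geometry and temperatures (a circular first portion near the centre and an annular second portion near the edge); all radii and temperatures are example values.

```python
# Illustrative mapping from a position on the object to a temperature setpoint:
# the first portion (a circular zone near the centre) gets the first object
# temperature, the second portion (an annulus near the edge) gets the second
# object temperature, and the remainder follows the support surface
# temperature. Geometry and temperatures are invented example values.
import math

SUPPORT_T = 22.00   # support surface temperature, degC (example)
FIRST_T   = 22.05   # first object temperature, degC (example)
SECOND_T  = 22.08   # second object temperature, degC (example)

def element_setpoint(x_m, y_m):
    """Setpoint (degC) for a thermal element facing position (x, y) on the object."""
    r = math.hypot(x_m, y_m)
    if r < 0.020:                 # first portion: within 20 mm of the centre
        return FIRST_T
    if 0.130 <= r <= 0.150:       # second portion: annulus near a 150 mm radius edge
        return SECOND_T
    return SUPPORT_T              # remainder: held at the support surface temperature

print(element_setpoint(0.0, 0.0))    # centre: first object temperature
print(element_setpoint(0.14, 0.0))   # edge annulus: second object temperature
```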
In the embodiment ofFIG.5, the thermal device10is arranged at a distance from the object support WT. In an alternative embodiment, for example the thermal elements11may be arranged in or on the object support WT, e.g. between the burls WT-B. This is indicated by the dashed lines inFIG.5. In the embodiment ofFIG.5, the object positioner further comprises a control device15. The control device15is configured to receive shape data relating to the shape of the object, and to generate a control signal based on this shape data. In this embodiment, the thermal device10is at least partly controlled on the basis of said control signal. In particular, the settings of the thermal elements11are controlled on the basis of the control signal, in order to make sure that the object W is provided with the correct first object temperature. The shape data which is received by the control device15can for example be obtained by measuring or calculating the actual shape of the object W. This way, the deviation of the actual shape from the desired shape of the object W is known. This information is then used to generate a control signal, on the basis of which the thermal device10is controlled. Based on the shape data, it is determined (e.g. calculated) what the first object temperature should be. Then, a control signal is generated and sent to the thermal device10(e.g. through a wired data connection16or a wireless data connection; this data connection may be a direct connection or an indirect connection), and the thermal device10is activated to provide the object W with the first object temperature. Optionally, also a desired temperature profile (either uniform or non-uniform) for the object W and/or a second object temperature is determined, and a corresponding control signal for the thermal device10is created by the control device15. In the embodiment ofFIG.5, the object positioner further comprises a measurement tool20which is configured to generate shape data of the object W. 
The control device15is configured to generate a control signal based on the shape data which is generated by the measurement tool20. A wired data connection21or a wireless data connection is provided to transfer the shape data as generated by the measurement tool20to the control device15. This data connection may be a direct connection or an indirect connection. The embodiment ofFIG.5is suitable for carrying out an embodiment of the method according to the invention. In such an embodiment of the invention, the shape of the object W (e.g. a substrate, e.g. a wafer) is determined. In the embodiment ofFIG.5, the measurement tool20can for example be used for that. Alternatively or in addition, a different measurement tool (which is for example not connected to the control device15via data connection21) can be used, or the shape of the object W can be calculated, e.g. using a mathematical model. Then, the determined shape of the object W is compared with the desired or ideal shape of the object W, and therewith a difference between the desired shape and the determined shape is determined. This can for example be done in the control device15, e.g. in a computer which forms part of the control device15. Alternatively or in addition, this can be (at least partly) done by using a computer which is not part of or connected to the control device15. Then, a temperature profile is determined for the object W which provides the object W with a deformed shape, wherein the difference between the deformed shape and the determined shape is smaller than the difference between the desired shape and the determined shape for at least a part of the object. Subsequently, the temperature profile is applied to the object W. When the embodiment ofFIG.5is used to carry out this embodiment of the method according to the invention, the thermal device10with the thermal elements11is used to apply the temperature profile to the object W. 
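The method steps just described (determine the shape, compare it with the desired shape, determine a temperature profile, apply it) can be sketched as a small pipeline. The one-dimensional shape data and the proportional rule mapping shape error to temperature below are invented for illustration; the actual relation would follow from a thermo-mechanical model of the object:

```python
# Illustrative flow of the shape-correction method: determine the shape,
# compare it with the desired shape, and derive a per-position temperature
# profile that counteracts the error. Shapes are 1-D height profiles; the
# "gain" mapping height error to a temperature offset is an invented
# illustration, not the real thermo-mechanical model.

def correction_profile(measured, desired, gain_k_per_m=1.0e6):
    """Per-position temperature offsets that counteract the shape error."""
    return [gain_k_per_m * (d - m) for m, d in zip(measured, desired)]

measured = [0.0, 0.0, 2e-6, 0.0, 0.0]   # determined shape: a local 2 um bulge
desired  = [0.0] * 5                     # desired shape: flat
profile = correction_profile(measured, desired)
print(profile)  # only the bulge position receives a (negative) offset
```

In the embodiment ofFIG.5, such a profile would be translated by the control device15into settings for the individual thermal elements11.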
In the embodiment ofFIG.5, the object support WT is optionally provided with a clamping device to clamp the object onto the object support WT, in particular onto the object support surface WT-OSS of the object support WT. The clamping device is for example a vacuum clamp device or an electrostatic clamp device. In the variant of the embodiment ofFIG.5in which a clamping device is provided, an embodiment of the method according to the invention can be carried out which further comprises the step of clamping the object W onto the object support WT, in particular onto an object support surface WT-OSS which is configured to engage at least a part of an object W during positioning of this object W. In this embodiment, the temperature profile is applied either before or after effecting the clamping of the object W onto the object support WT. For example, the object support WT is a substrate support, e.g. a wafer table, for example in a lithographic apparatus or object inspection apparatus. In a lithographic apparatus or an object inspection apparatus, the object W (e.g. a substrate, e.g. a wafer) is clamped onto the object support surface WT-OSS of the object support WT during the positioning of the object W, e.g. relative to a projection beam and/or a measurement beam. In accordance with this embodiment of the method according to the invention, the temperature profile is applied to the object W (e.g. the substrate, e.g. the wafer) either before or after the clamping of the object W on the object support surface WT-OSS is effected. In general, the temperature profile is applied before the exposure of the object W to the projection beam or measurement beam starts. 
Optionally, in this embodiment, the object support surface WT-OSS has a support surface temperature, and the temperature profile which is applied to the object W is a uniform temperature profile at a first object temperature, which first object temperature differs from the support surface temperature by a first predetermined temperature difference. Alternatively, in this embodiment, the object support surface WT-OSS has a support surface temperature, and the temperature profile which is applied to the object W is a non-uniform temperature profile comprising a first object temperature, which first object temperature differs from the support surface temperature by a first predetermined temperature difference. The temperature profile can be applied to the object W while the object W is arranged on the object support surface WT-OSS or prior to the arranging of the object W on the object support surface WT-OSS. For example, the object support surface WT-OSS is formed by the surfaces at the free ends of a plurality of burls WT-B which are provided on the object support WT. One or more thermal elements11can for example be provided between the burls WT-B in order to apply the temperature profile to the object W. Alternatively or in addition, one or more thermal elements11may be provided at a distance from the object support. FIG.6shows a further embodiment of an object positioner according to the invention. In the embodiment ofFIG.6, the object positioner comprises an object support WT having an object support surface WT-OSS. The object support surface WT-OSS is configured to engage at least a part of an object W, e.g. a substrate, e.g. a wafer. The object support surface WT-OSS has a support surface temperature. The object positioner further comprises a thermal device10. The thermal device10is configured to provide at least a part of the object W with a first object temperature. 
The first object temperature differs from the support surface temperature by a first predetermined temperature difference. In the embodiment ofFIG.6, the object positioner further comprises an object infeed device30. The object infeed device30is configured to supply the object W to the object support WT, e.g. to the object support surface WT-OSS. In this embodiment, the thermal device10is arranged relative to the object infeed device30such that the object is provided with the first object temperature before the object W is arranged on the object support WT, e.g. on the object support surface WT-OSS of the object support WT. For example, the thermal device10may be at least partly arranged inside or adjacent to the object infeed device30. The object infeed device30optionally comprises a thermal stabilization device32. In the thermal stabilization device32, the object W is generally brought to a desired uniform temperature which equals the support surface temperature. Optionally, at least a part of the thermal device10, for example one or more thermal elements11, may be arranged in the thermal stabilization device32. Such thermal elements11may provide the first object temperature to the object W either during or after the thermal stabilization by the thermal stabilization device32. In the embodiment ofFIG.6, the object positioner further comprises an object transport device31which is configured to transport an object from the thermal stabilization device32towards the object support WT along an object infeed path33. At least a part of the thermal device10is arranged along the object infeed path33. For example, one or more thermal elements11of the thermal device10are arranged along the object infeed path33, for example such that the object W passes underneath the thermal elements11while being transported from the thermal stabilization device32towards the object support WT. 
In this embodiment, any expected heating or cooling of the object during transport of the object W from the position of the thermal elements11to the object support WT is optionally taken into account when the first object temperature, and optionally also the second object temperature, is determined. In the embodiment ofFIG.6, at least a part of the object infeed device30and at least a part of the thermal device10are moveable relative to each other. For example, at least a part of the object infeed device30is moveable, for example an object transport device31which is moveable along the object infeed path33, in order to transport the object W to the object support WT. At least a part of the thermal device10, for example one or more thermal elements such as heating elements and/or cooling elements, may for example be arranged along the object infeed path33, so the object W in the object infeed device30is provided with the first object temperature upon passing the thermal elements11of the thermal device10. Alternatively or in addition, elements of the object infeed device30may be stationary when the thermal device10provides the object with the first object temperature, e.g. by moving one or more thermal elements11over the object while the object is stationary in the object infeed device30. Alternatively, both at least a part of the object infeed device30(e.g. the object transport device31) and a part of the thermal device10(e.g. one or more thermal elements11) move simultaneously, e.g. in opposite directions, so that a relative movement of the respective part of the object infeed device30and the respective part of the thermal device10is obtained. For example, the thermal elements11may be heating elements (e.g. infrared LEDs) and/or cooling elements (e.g. Peltier elements). These thermal elements11are optionally arranged in a grid-like layout (e.g. in a two dimensional Cartesian grid or in a polar grid) or in an array (e.g. in a linear arrangement). 
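Taking the expected cooling during transport into account can be pictured as offsetting the setpoint applied at the thermal elements 11 so that the object W arrives at the object support WT at the desired first object temperature. The exponential relaxation toward the environment temperature and the time constant used below are invented example values, not a model of the actual apparatus:

```python
# Compensating the infeed heating setpoint for heat exchange during transport:
# assuming simple exponential relaxation of the object temperature toward the
# environment temperature, the setpoint applied at the thermal elements is
# chosen so that the object arrives at the desired first object temperature.
# The time constant is an invented example value.
import math

def infeed_setpoint(desired_arrival_t, environment_t,
                    transport_time_s, tau_s=300.0):
    """Temperature (degC) to apply at the infeed so the object arrives on target."""
    decay = math.exp(-transport_time_s / tau_s)
    return environment_t + (desired_arrival_t - environment_t) / decay

# To arrive at 22.10 degC in a 22.00 degC environment after 30 s of transport,
# the object must leave the thermal elements slightly warmer than 22.10 degC:
print(round(infeed_setpoint(22.10, 22.00, 30.0), 4))  # 22.1105
```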
During heating and/or cooling of the object, the thermal elements11may be stationary relative to the object W, or the thermal elements11may be moved relative to the object W, e.g. in a scanning motion of the thermal elements11relative to the object W. The thermal device10optionally comprises both heating elements and cooling elements. Optionally, at least a part of the thermal device10, e.g. one or more thermal elements11, is/are mounted in or below an infeed device support surface of the object transport device31, which infeed device support surface is configured to support the object W during the transport thereof to the object support WT. Optionally, the thermal device comprises multiple thermal elements11, which are arranged in or below the infeed device support surface. Optionally, the object transport device31comprises a plurality of burls, and the surfaces at the free ends of these burls together form the infeed device support surface. The thermal elements11of the thermal device may in that case for example be arranged between those burls. FIG.7schematically shows an example of an object W with a non-uniform temperature distribution as can be applied by a thermal device10. The object W comprises a first portion W1which is provided with a first object temperature, and a second portion W2which is provided with a second object temperature. For the remainder of the object, the temperature is controlled such that it is as close as possible to the support surface temperature. The first portion W1is for example an area which before applying the first object temperature comprised a local bulge. The second portion W2is an annular area adjacent to the edge of the object W. In the second portion W2, for example, a deformation was present which, before applying the second object temperature, caused the object W to have an undesired bowl-shape or umbrella-shape. Embodiments are provided according to the following clauses:1. 
Object positioner which comprises:an object support having an object support surface which is configured to engage at least a part of an object, said object support surface having a support surface temperature, anda thermal device, which thermal device is configured to provide at least a part of the object with a first object temperature, which first object temperature differs from the support surface temperature by a first predetermined temperature difference.2. Object positioner according to clause1, wherein the thermal device is configured to provide a first portion of the object with the first object temperature and a second portion of the object with a second object temperature, which second object temperature differs from the support surface temperature by a second predetermined temperature difference.3. Object positioner according to any of the preceding clauses, wherein the thermal device is configured to provide at least a part of the object with a temperature gradient between a first maximum temperature and a first minimum temperature, wherein the first object temperature is the first maximum temperature, the first minimum temperature or a temperature between the first maximum temperature and the first minimum temperature.4. Object positioner according to any of the preceding clauses, wherein the object positioner further comprises an object infeed device which is configured to supply the object to the object support, and wherein the thermal device is arranged relative to the object infeed device such that the object is provided with the first object temperature before the object is arranged on the object support.5. Object positioner according to clause 4, wherein at least a part of the object infeed device and at least a part of the thermal device are moveable relative to each other.6. 
Object positioner according to any of the preceding clauses, wherein at least a part of the thermal device and at least a part of the object support are moveable relative to each other.
7. Object positioner according to clause 4, wherein at least a part of the thermal device is mounted in the object infeed device.
8. Object positioner according to any of the preceding clauses, wherein the thermal device is mounted in the object support, adjacent to the object support surface.
9. Object positioner according to any of the preceding clauses, wherein the thermal device comprises a heating element, which is for example an infrared LED, and/or a cooling element, which is for example a Peltier element.
10. Object positioner according to any of the preceding clauses, wherein the object positioner further comprises a control device, which control device is configured to receive shape data relating to the shape of the object, and to generate a control signal based on the shape data, and wherein the thermal device is at least partly controlled on the basis of said control signal.
11. Object positioner according to any of the preceding clauses, which object positioner further comprises a measurement tool which is configured to generate shape data of an object.
12. Object positioner according to clause 10, which object positioner further comprises a measurement tool which is configured to generate shape data of an object, and wherein the control device is configured to generate a control signal based on the shape data which is generated by the measurement tool.
13. Object positioner according to any of the preceding clauses, wherein the object support surface is configured to engage at least a part of an object, which object is a substrate, e.g. a wafer.
14.
Method for correcting the shape of an object, which method comprises the following steps:
determining the shape of said object,
comparing the determined shape of said object with a desired shape of said object, and therewith determining a difference between the desired shape and the determined shape,
determining a temperature profile for the object which provides the object with a deformed shape, wherein the difference between the deformed shape and the determined shape is smaller than the difference between the desired shape and the determined shape for at least a part of the object, and
applying the temperature profile to the object.
15. Method according to clause 14, wherein the method further comprises the step of clamping the object onto an object support having an object support surface which is configured to engage at least a part of an object during positioning of said object, and wherein the temperature profile is applied either before or after effecting the clamping of the object onto the object support.
16. Method according to clause 15, wherein the object support surface has a support surface temperature, and wherein the temperature profile which is applied to the object is a uniform temperature profile at a first object temperature, which first object temperature differs from the support surface temperature by a first predetermined temperature difference.
17. Method according to clause 15, wherein the object support surface has a support surface temperature, and wherein the temperature profile which is applied to the object is a non-uniform temperature profile comprising a first object temperature, which first object temperature differs from the support surface temperature by a first predetermined temperature difference.
18. Method according to any of the clauses 14-17, wherein the object is a substrate, e.g. a wafer.
19. Lithographic apparatus, which lithographic apparatus comprises an object positioner according to any of the clauses 1-13.
20.
Lithographic apparatus according to clause 19, which lithographic apparatus further comprises a projection system, and wherein the object support is configured for positioning an object relative to the projection system.
21. Lithographic apparatus, comprising:
an illumination system configured to condition a radiation beam;
a support constructed to support a patterning device, the patterning device being capable of imparting the radiation beam with a pattern in its cross-section to form a patterned radiation beam;
a projection system configured to project a patterned radiation beam onto a substrate, the projection system comprising a plurality of optical elements;
a wafer stage having a substrate support surface which is configured to engage at least a part of the substrate, said substrate support surface having a support surface temperature; and
a thermal device, which thermal device is configured to provide at least a part of the substrate with a first object temperature, which first object temperature differs from the support surface temperature by a first predetermined temperature difference.
22. Lithographic apparatus according to clause 21, wherein the lithographic apparatus further comprises a thermal stabilization device, and wherein at least a part of the thermal device is arranged in the thermal stabilization device.
23. Lithographic apparatus according to clause 21, wherein the lithographic apparatus further comprises a thermal stabilization device and a substrate transport device which is configured to transport a substrate from the thermal stabilization device towards the wafer stage along a substrate infeed path, and wherein at least a part of the thermal device is arranged along the substrate infeed path.
24. Object inspection apparatus, which object inspection apparatus comprises an object positioner according to any of the clauses 1-13.
25.
A device manufacturing method comprising transferring a pattern from a patterning device onto a substrate, comprising the step of using a lithographic apparatus according to any of clauses 19-23.
26. A device manufacturing method, which comprises the following steps:
determining the shape of a substrate,
comparing the determined shape of said substrate with a desired shape of said substrate, and therewith determining a difference between the desired shape and the determined shape,
determining a temperature profile for the substrate which provides the substrate with a deformed shape, wherein the difference between the deformed shape and the determined shape is smaller than the difference between the desired shape and the determined shape for at least a part of the substrate,
applying the temperature profile to the substrate, and
after applying the temperature profile to the substrate, transferring a pattern from a patterning device onto the substrate.

Although specific reference may be made in this text to the use of a lithographic apparatus in the manufacture of ICs, it should be understood that the lithographic apparatus described herein may have other applications. Possible other applications include the manufacture of integrated optical systems, guidance and detection patterns for magnetic domain memories, flat-panel displays, liquid-crystal displays (LCDs), thin-film magnetic heads, etc. Although specific reference may be made in this text to embodiments of the invention in the context of a lithographic apparatus, embodiments of the invention may be used in other apparatus. Embodiments of the invention may form part of a mask inspection apparatus, a metrology apparatus, or any apparatus that measures or processes an object such as a wafer (or other substrate) or mask (or other patterning device). These apparatus may be generally referred to as lithographic tools. Such a lithographic tool may use vacuum conditions or ambient (non-vacuum) conditions.
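The shape-correction steps of clauses 14 and 26 above can be sketched in a few lines of Python. This is a minimal illustration only: the thermal responsivity `ALPHA`, the partial-correction `GAIN`, and the sample shape values are invented for the sketch and do not come from the text.

```python
# Hedged sketch of the shape-correction method of clauses 14 and 26.
# ALPHA (nm of deformation per K of local temperature offset) and the
# 50% correction GAIN are illustrative assumptions.

ALPHA = 2.0   # nm deformation per K (hypothetical)
GAIN = 0.5    # apply only a partial correction (hypothetical)

def determine_temperature_profile(determined, desired):
    """Step 3: per-point temperature offsets that move the shape
    toward the desired shape."""
    return [GAIN * (d - m) / ALPHA for m, d in zip(determined, desired)]

def apply_temperature_profile(determined, profile):
    """Step 4: the deformed shape that results from the profile."""
    return [m + ALPHA * dt for m, dt in zip(determined, profile)]

# Steps 1-2: measured (determined) shape vs. desired (flat) shape, in nm.
determined = [0.0, 4.0, -2.0, 6.0]   # e.g. a local bulge and bowl residue
desired = [0.0, 0.0, 0.0, 0.0]

profile = determine_temperature_profile(determined, desired)
deformed = apply_temperature_profile(determined, profile)

# As the clause requires, the deformed shape is closer to the desired
# shape than the determined shape was.
err_before = sum(abs(d - m) for m, d in zip(determined, desired))
err_after = sum(abs(d - m) for m, d in zip(deformed, desired))
assert err_after < err_before
```

With these illustrative numbers the residual shape error halves (from 12 nm to 6 nm summed deviation), which is the sense in which the deformed shape "is smaller in difference" than the determined shape.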
Although specific reference may have been made above to the use of embodiments of the invention in the context of optical lithography, it will be appreciated that the invention, where the context allows, is not limited to optical lithography and may be used in other applications, for example imprint lithography. Where the context allows, embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g. carrier waves, infrared signals, digital signals, etc.), and others. Further, firmware, software, routines, instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc. and in doing that may cause actuators or other devices to interact with the physical world. While specific embodiments of the invention have been described above, it will be appreciated that the invention may be practiced otherwise than as described. The descriptions above are intended to be illustrative, not limiting. Thus it will be apparent to one skilled in the art that modifications may be made to the invention as described without departing from the scope of the claims set out below. | 48,072 |
11860555

DETAILED DESCRIPTION

An exemplary embodiment will be described in detail here, and an example thereof is shown in the drawings. When the following description relates to the drawings, unless otherwise indicated, the same numbers in different drawings represent the same or similar elements. The implementation modes described in the following exemplary embodiments do not represent all implementation modes consistent with the disclosure. Instead, they are only examples of devices and methods consistent with some aspects of the disclosure as detailed in the appended claims. During photolithography exposure of a semiconductor, the exposure machine is provided with an alignment platform and an exposure platform. The photolithography exposure of a wafer must complete an alignment process and an exposure process in sequence. In the actual use of the exposure machine, it is generally necessary to continuously perform photolithography exposure on a plurality of wafers; that is, the exposure process of one wafer is executed based on the exposure platform while the alignment process of another wafer is executed based on the alignment platform at the same time. If the alignment process of the other wafer has not been completed when the exposure process of the one wafer is completed, the exposure can be executed only after the alignment process is completed, and this waiting time reduces the photolithography exposure efficiency of the exposure machine. That is, shortening the difference value between the exposure time of one wafer and the alignment time of the next wafer can effectively improve the photolithography exposure efficiency of the exposure machine, so as to improve the manufacturing efficiency of a semiconductor product. However, the alignment time of the wafer during alignment is related to the alignment mark count of the wafer. The larger the alignment mark count is, the longer the alignment time is.
Therefore, how to determine an appropriate alignment mark count to improve the photolithography exposure efficiency of the exposure machine and the manufacturing efficiency of the semiconductor product is a problem to be considered. More specifically, an exposure machine refers to a device that transfers image information on a film or other transparent body to a surface coated with a photosensitive substance by turning on a light source that emits ultraviolet rays. The exposure machine is provided with an alignment platform and an exposure platform, and when a semiconductor is subjected to photolithography exposure by the exposure machine to make a product, the photolithography exposure of a wafer must complete the alignment process and the exposure process in sequence. In the actual use of the exposure machine, it is generally necessary to continuously perform photolithography exposure on a plurality of wafers; that is, after one wafer has completed the alignment process and entered the exposure process, it is necessary to align another wafer based on the alignment platform at the same time. If the next wafer to be exposed has not been aligned when the exposure of the current wafer is completed, it is necessary to wait a relatively long time before exposing the next wafer, which reduces the photolithography exposure efficiency of the exposure machine. The alignment time of the wafer during alignment is related to the alignment mark count of the wafer. The larger the alignment mark count is, the longer the alignment time is. Therefore, if an appropriate alignment mark count can be determined, the waiting time in the wafer photolithography exposure process can be greatly shortened, so as to improve the photolithography exposure efficiency of the exposure machine and the manufacturing efficiency of a semiconductor product. Based on this, the present disclosure provides an alignment mark count acquiring method and device.
In the alignment mark count acquiring method, the exposure time of a wafer located on an exposure platform is acquired, the alignment time of a wafer located on an alignment platform is also acquired, and a buffer time between the alignment time and the exposure time is calculated. If the buffer time is beyond a preset value, the optimal alignment mark count (a target alignment mark count) of the wafer located on the alignment platform is determined based on a corresponding relationship. The corresponding relationship is the relationship between exposure parameters and alignment mark counts, and is used to make the buffer time less than or equal to the preset value. Therefore, the target alignment mark count determined based on the corresponding relationship can make the buffer time less than or equal to the preset value. The alignment mark count of the wafer located on the alignment platform is set to the target alignment mark count, so that the buffer time can be controlled to be less than or equal to the preset value, so as to improve the photolithography exposure efficiency of the exposure machine and the manufacturing efficiency of the semiconductor product. Referring to FIG. 1, the alignment mark count acquiring method provided in the present disclosure is applied to a terminal device, such as a dedicated laboratory server or a personal computer. FIG. 1 is an application schematic diagram of the alignment mark count acquiring method provided in the present disclosure. In FIG. 1, the terminal device stores the corresponding relationship between the exposure parameters and the alignment mark counts. The terminal device also receives a first time at which the exposure machine performs exposure of a first wafer, a second time at which the exposure machine performs alignment of a second wafer, and the exposure parameters of the first wafer.
When the buffer time between the second time and the first time is greater than the preset value, the terminal device determines and outputs the target alignment mark count of the second wafer according to the exposure parameters of the first wafer and the corresponding relationship. The alignment mark count of the second wafer during alignment is set according to the target alignment mark count, so that the buffer time between the first time and the second time can be controlled to be less than or equal to the preset value, so as to improve the photolithography exposure efficiency of the exposure machine. Referring to FIG. 2, the first embodiment of the present disclosure provides an alignment mark count acquiring method, which includes the following operations. At S201, a first time at which an exposure machine performs exposure of a first wafer is acquired, and a second time at which the exposure machine performs alignment of a second wafer is acquired. Herein, the exposure machine includes an alignment platform and an exposure platform. When the wafer is located on the exposure platform, the wafer is defined as the first wafer, and the first wafer has exposure parameters. When the wafer is located on the alignment platform, the wafer is defined as the second wafer, and the second wafer has an alignment mark count. The same wafer passes through the alignment platform and the exposure platform in sequence to complete the exposure process. Referring to the above description of the photolithography exposure process by the exposure machine, the first wafer has completed the alignment process and entered the exposure process, and the second wafer is in the alignment process at this time. The first time and the second time may be recorded by a worker and input to the terminal device. The unit of the first time and the second time may be seconds (s).
For example, the exposure parameters of the first wafer are the exposure dose of the exposure machine when performing exposure of the first wafer, and the exposure shot count on the first wafer. At S202, a first buffer time between the second time and the first time is acquired when the first time is less than the second time. When the first time is less than the second time, it means that the alignment of the second wafer has not been completed when the exposure machine completes the exposure of the first wafer. At this time, it is necessary to wait for the second wafer to complete the alignment before performing the exposure of the second wafer. The first time and the second time may be understood as time points, and the first buffer time may be understood as a difference value between the alignment end time of the second wafer and the exposure end time of the first wafer. That is, the first buffer time is the time duration required to wait for the second wafer to complete alignment when the exposure of the first wafer is ended. The unit of the first buffer time is consistent with that of the first time and the second time. For example, if the unit of the first time and the second time is s, the unit of the first buffer time is also s. At S203, the target alignment mark count of the second wafer is determined according to the exposure parameters of the first wafer and the corresponding relationship when the first buffer time is greater than a preset value. Herein, the corresponding relationship is the relationship between the exposure parameters and the alignment mark counts, and the corresponding relationship is used to make the first buffer time less than or equal to the preset value. In order to improve the photolithography exposure efficiency of the exposure machine, the preset value may be set to a number greater than 0 but close to 0. For example, the preset value may be 0.17 or 0.03. The preset value may also be set to 0.
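The computation in S202 and the check in S203 amount to a subtraction and a threshold comparison. The sketch below uses illustrative times and the 0.17 s preset value mentioned above; the variable names are assumptions of the sketch.

```python
# Minimal sketch of S201-S203: compare the exposure end time of the
# first wafer with the alignment end time of the second wafer.
# Times are in seconds; the sample values are illustrative.

PRESET_VALUE = 0.17  # one of the example preset values from the text

def first_buffer_time(first_time, second_time):
    """S202: difference between the alignment end time of the second
    wafer and the exposure end time of the first wafer."""
    return second_time - first_time

first_time = 30.0    # exposure of the first wafer ends at t = 30 s
second_time = 31.5   # alignment of the second wafer ends at t = 31.5 s

if first_time < second_time:
    buffer_time = first_buffer_time(first_time, second_time)
    # S203: the target alignment mark count is only recomputed when
    # the buffer time exceeds the preset value.
    needs_new_mark_count = buffer_time > PRESET_VALUE
```

Here the second wafer would keep the exposure platform waiting 1.5 s, well beyond the preset value, so a new target alignment mark count would be determined.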
The corresponding relationship is the relationship between the exposure parameters of the first wafer and the alignment mark count of the second wafer. More precisely, the corresponding relationship is the relationship between each parameter contained in the exposure parameters of the first wafer and the alignment mark count. As described above, for example, the exposure parameters of the first wafer are the exposure dose when the exposure machine performs exposure of the first wafer and the exposure shot count on the first wafer. Thus, the corresponding relationship includes the relationship between the exposure dose and the alignment mark count, as well as the relationship between the exposure shot count and the alignment mark count. The corresponding relationship is a relationship established based on the situation that the exposure process of the first wafer and the alignment process of the second wafer are ended almost at the same time. That is, the corresponding relationship is a relationship established based on the situation that a difference value between the alignment time of the second wafer and the exposure time of the first wafer is less than or equal to the preset value. Therefore, after the target alignment mark count of the second wafer is determined according to the exposure parameters of the first wafer and the corresponding relationship, the alignment time of the second wafer based on the target alignment mark count is infinitely close to the time of exposure of the first wafer. That is, the first buffer time is less than or equal to the preset value. At this time, after the exposure process of the first wafer is completed, the exposure of the second wafer may be carried out directly without waiting too long. At S204, the target alignment mark count is output. The terminal device may display the target alignment mark count for a worker to set the alignment process of the second wafer. 
The terminal device may also send the target alignment mark count to a mobile phone, a personal computer, a tablet computer and other terminal devices of the worker. Optionally, when the first time is greater than or equal to the second time, the alignment mark count of the second wafer is output as the target alignment mark count of the second wafer. Alternatively, when the first buffer time is less than or equal to the preset value, the alignment mark count of the second wafer is output as the target alignment mark count of the second wafer. Because whether the first time is greater than or equal to the second time or the first buffer time is less than or equal to the preset value meets the requirements for the photolithography exposure efficiency of the exposure machine, the alignment mark count of the second wafer at this time may be directly output as the target alignment mark count of the second wafer. According to the alignment mark count acquiring method provided in the embodiment, the terminal device receives the first time at which the exposure machine performs exposure of the first wafer, the second time at which the exposure machine performs alignment of the second wafer, and the exposure parameters of the first wafer. When the first buffer time between the second time and the first time is greater than the preset value, the terminal device determines and outputs the target alignment mark count of the second wafer according to the exposure parameters of the first wafer and the corresponding relationship. After the alignment mark count of the second wafer during alignment is set according to the target alignment mark count, the buffer time between the first time and the second time may be controlled to be less than or equal to the preset value, so as to improve the photolithography exposure efficiency of the exposure machine and improve the manufacturing efficiency of the semiconductor product. 
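Putting S201 to S204 together, the decision logic of the first embodiment can be sketched as below. The `relation` function is a hypothetical stand-in for the stored corresponding relationship; the linear shot-count fit from the second embodiment is borrowed here purely for illustration, and the sample times and counts are invented.

```python
# Sketch of the first embodiment (S201-S204). PRESET_VALUE and the
# sample inputs are illustrative; 'relation' stands in for the stored
# corresponding relationship.

PRESET_VALUE = 0.17

def acquire_target_mark_count(first_time, second_time, current_count,
                              exposure_params, relation):
    """Return the alignment mark count to output for the second wafer."""
    # First time >= second time: alignment finishes no later than
    # exposure, so the current count is output unchanged.
    if first_time >= second_time:
        return current_count
    buffer_time = second_time - first_time  # S202
    if buffer_time <= PRESET_VALUE:
        return current_count
    # S203: buffer time too long - look up a count that brings it
    # back to at most the preset value.
    return relation(exposure_params)

def relation(params):
    # Hypothetical stand-in: the linear shot-count fit
    # Y = 0.7012X - 57.099, rounded to a whole number of marks.
    return round(0.7012 * params["shot_count"] - 57.099)

# Alignment would end 1.5 s after exposure: recompute the mark count.
count = acquire_target_mark_count(30.0, 31.5, 25, {"shot_count": 106}, relation)
assert count == 17

# Alignment already finished: keep the current mark count.
assert acquire_target_mark_count(30.0, 29.5, 25, {"shot_count": 106}, relation) == 25
```

Reducing the count from 25 to 17 marks shortens the alignment of the second wafer so that, per the method, the buffer time falls to at most the preset value.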
Referring to FIG. 3, in the alignment mark count acquiring method provided in the second embodiment of the present disclosure, acquiring the corresponding relationship is further described based on the first embodiment. In the embodiment, the alignment mark count acquiring method includes the following operations. At S301, when a historical first buffer time is less than or equal to the preset value, the exposure parameters of a first wafer and the alignment mark count of a second wafer which correspond to the historical first buffer time are acquired. Herein, the historical first buffer time is a difference value between a historical first time and a historical second time, the historical first time is the time at which the exposure machine performs historical exposure of the first wafer, and the historical second time is the time at which the exposure machine performs historical alignment of the second wafer. The historical exposure refers to one previous exposure of the first wafer by the exposure machine, and the historical first time refers to the exposure time at which the exposure machine performed that previous exposure of the first wafer. Similarly, the historical alignment refers to one previous alignment of the second wafer by the exposure machine, and the historical second time refers to the alignment time at which the exposure machine performed that previous alignment of the second wafer. That is, the historical first buffer time, the historical first time and the historical second time mentioned in this operation are the time parameters during one previous photolithography exposure of the first wafer and the second wafer by the exposure machine. The historical first buffer time, the historical first time and the historical second time correspond to each other.
After the historical first buffer time is determined to be less than or equal to the preset value, the exposure parameters of the first wafer and the alignment mark count of the second wafer are acquired. The exposure parameters of the first wafer refer to the exposure parameters of the first wafer which correspond to the historical first buffer time. The alignment mark count of the second wafer refers to the alignment mark count of the second wafer corresponding to the historical first buffer time. At S302, the corresponding relationship is acquired according to a plurality of exposure parameters of the first wafer and a plurality of alignment mark counts of the second wafer. The exposure parameters of the first wafer and the alignment mark count of the second wafer refer to the exposure parameters of the first wafer and the alignment mark count of the second wafer when the historical first buffer time is less than or equal to the preset value. Optionally, the exposure parameters of the first wafer include the exposure dose of the first wafer and the exposure shot count of the first wafer, and the corresponding relationship includes the first corresponding relationship and the second corresponding relationship. The first corresponding relationship refers to the relationship between the exposure dose of the first wafer and the alignment mark count of the second wafer, and the second corresponding relationship refers to the relationship between the exposure shot count of the first wafer and the alignment mark count of the second wafer. When acquiring the first corresponding relationship, the terminal device acquires the first corresponding relationship according to a plurality of exposure doses of the first wafer and a plurality of alignment mark counts of the second wafer. An exposure machine with a KRF model is taken as an example, the limit exposure dose of the KRF exposure machine during wafer exposure is 50 mj/cm2. 
When the preset value is set in the range of 0.18-0.2, a plurality of exposure doses of the first wafer when the historical first buffer time of the KRF exposure machine is less than or equal to the preset value are acquired. As shown in Table 1 below, when the historical first buffer time is 0.18 s, the acquired exposure dose of the first wafer is 24 mj/cm2 and the acquired alignment mark count of the second wafer is 16.9. When the historical first buffer time is 0.17 s, the acquired exposure dose of the first wafer is 36 mj/cm2 and the acquired alignment mark count of the second wafer is 17. By analogy, in Table 1, a total of 7 groups of data are acquired as the basis for acquiring the first corresponding relationship. In the 7 groups of data, the plurality of acquired exposure doses of the first wafer include 24 mj/cm2, 36 mj/cm2, 45 mj/cm2, 50 mj/cm2, 55 mj/cm2, 60 mj/cm2 and 65 mj/cm2. Correspondingly, the plurality of acquired alignment mark counts of the second wafer include 16.9, 17, 17.1, 17.1, 18.6, 22.2 and 28.2.

TABLE 1
DOSE (mj/cm2)   24     36     45     50        55     60     65
buffer time/s   0.18   0.17   0.15   0.1       −0.1   −0.7   −1.7
mark counts     16.9   17     17.1   17.1 (Z)  18.6   22.2   28.2

The relationship between the exposure doses of the first wafer and the alignment mark counts of the second wafer is simulated according to the data in Table 1, and the schematic diagram of the finally obtained first corresponding relationship is illustrated in FIG. 4. The first corresponding relationship is as follows. When the exposure dose of the first wafer is less than or equal to 50 mj/cm2, the increase of the exposure dose of the first wafer has little effect on the alignment mark count of the second wafer. When the exposure dose of the first wafer is greater than 50 mj/cm2, the relationship between DOSE and mark count is Y = 0.048X^2 − 4.8X + 120 + Z, where X represents the exposure dose of the first wafer, Y represents the alignment mark count of the second wafer, and Z represents a constant.
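The first corresponding relationship can be written as a small piecewise function. Reading Z off Table 1 as the mark count at the 50 mj/cm2 limit dose (the "17.1 (Z)" entry) is an assumption of this sketch, and the fit is approximate, as a comparison with the 55-65 mj/cm2 rows of Table 1 shows.

```python
# Piecewise sketch of the first corresponding relationship (FIG. 4).
# Z = 17.1 is read off Table 1; treating the curve as flat below the
# 50 mj/cm2 limit dose follows the description in the text.

Z = 17.1

def mark_count_from_dose(dose):
    """Alignment mark count of the second wafer as a function of the
    exposure dose (mj/cm2) of the first wafer."""
    if dose <= 50:
        # Below the limit dose, dose has little effect on the count.
        return Z
    return 0.048 * dose**2 - 4.8 * dose + 120 + Z

assert mark_count_from_dose(45) == Z
# At 60 mj/cm2 the quadratic gives 4.8 + Z = 21.9, close to the 22.2
# measured in Table 1.
assert abs(mark_count_from_dose(60) - 21.9) < 1e-9
```

Note that the quadratic term vanishes exactly at 50 mj/cm2 (0.048·2500 − 4.8·50 + 120 = 0), which is why the table marks the 50 mj/cm2 count itself as Z.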
When acquiring the second corresponding relationship, the terminal device acquires the second corresponding relationship according to a plurality of exposure shot counts of the first wafer and a plurality of alignment mark counts of the second wafer. An exposure machine with a KRF model is again taken as an example; the limit exposure dose of the KRF exposure machine during wafer exposure is 50 mj/cm2. At this time, the maximum value of the exposure dose of the first wafer is set to 50 mj/cm2 to acquire the alignment mark count of the second wafer when the historical first buffer time is less than or equal to the preset value. As shown in Table 2 below, a total of 6 groups of data are acquired as the basis for acquiring the second corresponding relationship. In the 6 groups of data, the plurality of acquired exposure shot counts of the first wafer include 101, 106, 108, 113, 121 and 130. Correspondingly, the acquired alignment mark counts of the second wafer include 14, 17, 18.5, 22, 28 and 34.

TABLE 2
Shot count    101   106   108    113   121   130
Mark counts   14    17    18.5   22    28    34

The relationship between the exposure shot counts of the first wafer and the alignment mark counts of the second wafer is simulated according to the data in Table 2, and the schematic diagram of the finally obtained second corresponding relationship is illustrated in FIG. 5. The second corresponding relationship is Y = 0.7012X − 57.099, where X represents the exposure shot count of the first wafer and Y represents the alignment mark count of the second wafer. At S303, a first time at which an exposure machine performs exposure of a first wafer is acquired, and a second time at which the exposure machine performs alignment of a second wafer is acquired. The exposure machine includes an alignment platform and an exposure platform. When the wafer is located on the exposure platform, the wafer is defined as the first wafer, and the first wafer has exposure parameters.
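The second corresponding relationship is a plain linear fit, and a quick check confirms it reproduces the Table 2 data closely. The function name below is invented for the sketch.

```python
# Sketch of the second corresponding relationship (FIG. 5):
# Y = 0.7012X - 57.099, with X the exposure shot count of the first
# wafer and Y the alignment mark count of the second wafer.

def mark_count_from_shots(shot_count):
    return 0.7012 * shot_count - 57.099

# The fit reproduces the six Table 2 data points to within 0.3 marks.
table2 = {101: 14, 106: 17, 108: 18.5, 113: 22, 121: 28, 130: 34}
for shots, marks in table2.items():
    assert abs(mark_count_from_shots(shots) - marks) < 0.3
```

For example, 106 shots map to about 17.2 marks, matching the 17 in Table 2; in practice the result would be rounded to a whole number of alignment marks.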
When the wafer is located on the alignment platform, the wafer is defined as the second wafer, and the second wafer has an alignment mark count. The same wafer passes through the alignment platform and the exposure platform in sequence to complete the exposure process. At S304, a first buffer time between the second time and the first time is acquired when the first time is less than the second time. At S305, a target alignment mark count of the second wafer is determined according to the exposure parameters of the first wafer and the corresponding relationship when the first buffer time is greater than a preset value. The corresponding relationship is the relationship between the exposure parameters and the alignment mark counts, and the corresponding relationship is used to make the first buffer time less than or equal to the preset value. At S306, the target alignment mark count is output. The relevant description of S303 to S306 may refer to the relevant description of S201 to S204 in the first embodiment, which will not be elaborated here. The operations at S301 to S302 may be executed before the operation at S303 or before the operation at S305. The operations at S301 to S302 are executed only once. After acquiring the corresponding relationship, the terminal device stores the corresponding relationship and calls the corresponding relationship when it needs to be used. The alignment mark count acquiring method provided in the embodiment describes how to acquire the corresponding relationship on the basis of the first embodiment. When acquiring the corresponding relationship, it is necessary to acquire a large number of exposure parameters of the first wafer and a large number of alignment mark counts of the second wafer based on the fact that the historical first buffer time is less than or equal to the preset value.
Then, the corresponding relationship is acquired according to the large number of exposure parameters of the first wafer and the large number of alignment mark counts of the second wafer. The exposure parameters of the first wafer include the exposure dose of the first wafer and the exposure shot count of the first wafer. The first corresponding relationship in the corresponding relationship may be acquired from the exposure doses of the first wafer and the alignment mark counts of the second wafer, and the second corresponding relationship in the corresponding relationship may be acquired from the exposure shot counts of the first wafer and the alignment mark counts of the second wafer. Experimental verification shows that the target alignment mark count determined according to the first corresponding relationship is consistent with that determined according to the second corresponding relationship. Therefore, at S305, the target alignment mark count of the second wafer may be determined according to the first corresponding relationship and/or the second corresponding relationship. Referring to FIG. 6, the third embodiment of the present disclosure provides an alignment mark count acquiring device 10, which includes an acquiring module 11, a processing module 12 and an output module 13. The acquiring module 11 is configured to acquire a first time at which an exposure machine performs exposure of a first wafer, and acquire a second time at which the exposure machine performs alignment of a second wafer. The exposure machine includes an alignment platform and an exposure platform. When the wafer is located on the exposure platform, the wafer is defined as the first wafer, and the first wafer has exposure parameters; when the wafer is located on the alignment platform, the wafer is defined as the second wafer, and the second wafer has an alignment mark count. The same wafer passes through the alignment platform and the exposure platform in sequence to complete the exposure process.
The acquiring module 11 is further configured to acquire a first buffer time between the second time and the first time when the first time is less than the second time. The processing module 12 is configured to determine, according to the exposure parameters of the first wafer and the corresponding relationship, a target alignment mark count of the second wafer when the first buffer time is greater than a preset value. The corresponding relationship is the relationship between the exposure parameters and the alignment mark counts, and the corresponding relationship is used to make the first buffer time less than or equal to the preset value. The output module 13 is configured to output the target alignment mark count. The acquiring module 11 is further configured to acquire the exposure parameters of a first wafer and the alignment mark count of a second wafer which correspond to a historical first buffer time when the historical first buffer time is less than or equal to the preset value. The historical first buffer time is a difference value between a historical first time and a historical second time, the historical first time is the time at which the exposure machine performs historical exposure of the first wafer, and the historical second time is the time at which the exposure machine performs historical alignment of the second wafer. The acquiring module 11 is further configured to acquire the corresponding relationship according to a plurality of exposure parameters of the first wafer and a plurality of alignment mark counts of the second wafer. The exposure parameters of the first wafer include the exposure dose of the first wafer and the exposure shot count of the first wafer, and the corresponding relationship includes the first corresponding relationship and the second corresponding relationship.
The acquiring module 11 is specifically configured to acquire the first corresponding relationship according to the plurality of exposure doses of the first wafer and the plurality of alignment mark counts of the second wafer, and to acquire the second corresponding relationship according to the plurality of exposure shot counts of the first wafer and the plurality of alignment mark counts of the second wafer. The output module 13 is further configured to output the alignment mark count of the second wafer as the target alignment mark count of the second wafer when the first time is greater than or equal to the second time. The output module 13 is further configured to output the alignment mark count of the second wafer as the target alignment mark count of the second wafer when the first buffer time is less than or equal to the preset value. The alignment mark count acquiring device provided in this embodiment may be used to execute the alignment mark count acquiring method provided in the first embodiment and the second embodiment above; the specific implementation and the technical effect are similar and will not be elaborated here. Referring to FIG. 7, the fourth embodiment of the present disclosure further provides a terminal device 20, which includes a memory 21, a processor 22 and a transceiver 23. The memory 21 is configured to store instructions. The transceiver 23 is configured to communicate with other devices. The processor 22 is configured to execute the instructions stored in the memory 21, to enable the terminal device to execute the alignment mark count acquiring method provided in the first embodiment and the second embodiment above. The specific implementation and the technical effect are similar and will not be elaborated here. The present disclosure further provides a computer readable storage medium, having stored thereon instructions executable by a computer.
When the instructions are executed, the computer is enabled to execute the alignment mark count acquiring method provided in the first embodiment and the second embodiment above. The specific implementation and the technical effect are similar and will not be elaborated here. The present disclosure further provides a computer program product, which includes a computer program that, when executed by a processor, implements the alignment mark count acquiring method provided in the first embodiment and the second embodiment above. The specific implementation and the technical effect are similar and will not be elaborated here. It is to be noted that the computer readable storage medium may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a flash memory, a magnetic surface memory, a compact disc, a Compact Disc Read-Only Memory (CD-ROM), and the like. The computer readable storage medium may also be any of various electronic devices including one or any combination of the above memories, such as mobile phones, computers, tablet devices, personal digital assistants, etc. It is to be noted that, in this context, the terms “include”, “contain” or any other variation thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent in such a process, method, article or device. Without further restrictions, an element defined by the statement “including a . . . ” does not exclude the existence of another identical element in the process, method, article or device including that element.
The above serial numbers of the embodiments of the present disclosure are only for description and do not represent the advantages or disadvantages of the embodiments. Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus a necessary general hardware platform. Of course, they can also be realized by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present disclosure, essentially or as regards the part that contributes to the traditional art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disc or a compact disc) and includes several commands to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the method described in various embodiments of the present disclosure. The present disclosure is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present disclosure. It should be understood that each process and/or block in the flowchart and/or block diagram, and the combination of processes and/or blocks in the flowchart and/or block diagram, may be implemented by computer program commands. These computer program commands may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing devices to generate a machine, so that a device for realizing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram is generated through the commands executed by the processor of the computer or other programmable data processing devices.
These computer program commands may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing devices to work in a specific manner, so that the commands stored in the computer-readable memory generate an article of manufacture including a command device, and the command device implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram. These computer program commands may also be loaded onto a computer or other programmable data processing devices, so that a series of operation steps are performed on the computer or other programmable devices to produce computer-implemented processing; thus, the commands executed on the computer or other programmable devices provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram. The above are only the preferred embodiments of the present disclosure and do not limit the patent scope of the present disclosure. Any equivalent structure or equivalent process transformation made by using the contents of the description and drawings of the present disclosure, whether directly or indirectly applied in other relevant technical fields, is similarly included in the scope of patent protection of the present disclosure.
11860556

DESCRIPTION OF THE EMBODIMENTS

Hereafter, exemplary embodiments for carrying out the present invention will be described with reference to the drawings.

First Embodiment

FIG. 1A is a schematic drawing illustrating a configuration of an image forming apparatus 1 according to a first embodiment. The image forming apparatus 1 is a monochrome printer that forms an image on a recording material based on image information entered from an external device. The recording material can include paper such as plain paper and thick paper, plastic films such as OHP sheets, sheets having special shapes such as envelopes and index paper, and various other sheet materials of different materials such as cloth.

General Configuration

The image forming apparatus 1 includes, as illustrated in FIGS. 1A and 1B, a printer body 100 serving as an apparatus body, a reading unit 200 supported in an openable/closable manner on the printer body 100, and an operation unit 300 attached to an exterior surface of the printer body 100. The printer body 100 includes an image forming unit 10 that forms a toner image on a recording material, a feeding portion 60 for feeding the recording material to the image forming unit 10, a fixing portion 70 that fixes the toner image formed by the image forming unit 10 onto the recording material, and a sheet discharge roller pair 80. The image forming unit 10 includes a scanner unit 11, a process unit 20 adopting an electrophotographic system, and a transfer roller 12 that transfers a toner image, serving as a developer image, formed on a photosensitive drum 21 of the process unit 20 to a recording material. As shown in FIGS. 6A and 6B, the process unit 20 includes the photosensitive drum 21, and a charging roller 22, a pre-exposure unit 23, and a developing apparatus 30 including a developing roller 31 arranged around the circumference of the photosensitive drum 21. The process unit 20 is detachably attached to the printer body 100 (refer to FIG. 3).
The process unit 20 can be fixed to the printer body 100 by a screw and is mainly detached by service personnel rather than by the user. Meanwhile, a structural member of the printer body 100, such as a casing frame of the printer body 100, is not included in the process unit 20. The photosensitive drum 21 is a photosensitive member formed in a cylindrical shape. The photosensitive drum 21 according to the present embodiment has a photosensitive layer formed of an organic photosensitive member having negative chargeability arranged on a drum-shaped base made of aluminum. The photosensitive drum 21 serving as an image bearing member is driven to rotate at a predetermined processing speed in a predetermined direction, i.e., the clockwise direction in the drawing, by a motor. The charging roller 22 abuts against the photosensitive drum 21 with a predetermined contact pressure and forms a charging portion. Further, the charging roller 22 charges the surface of the photosensitive drum 21 uniformly to a predetermined potential by having a desired charging voltage applied from a charging high-voltage power supply. In the present embodiment, the photosensitive drum 21 is charged to negative polarity by the charging roller 22. The pre-exposure unit 23 destaticizes the surface potential of the photosensitive drum 21 before it enters the charging portion so as to cause stable discharge at the charging portion. The scanner unit 11 serving as an exposure unit irradiates laser light corresponding to the image information entered from the external device or the reading unit 200 onto the photosensitive drum 21 using a polygon mirror to scan and expose the surface of the photosensitive drum 21. By this exposure, an electrostatic latent image corresponding to the image information is formed on the surface of the photosensitive drum 21.
The scanner unit 11 is not limited to a laser scanner device; for example, an LED exposure device including an LED array in which a plurality of LEDs are arranged along a longitudinal direction of the photosensitive drum 21 can be adopted. The developing apparatus 30 includes the developing roller 31 serving as a developer bearing member that bears developer, a developer container 32 serving as a frame of the developing apparatus 30, and a feed roller 33 capable of feeding developer to the developing roller 31. The developing roller 31 and the feed roller 33 are supported rotatably by the developer container 32. Further, the developing roller 31 is arranged at an opening portion of the developer container 32 so as to face the photosensitive drum 21. The feed roller 33 rotates in abutment against the developing roller 31, and toner serving as developer stored in the developer container 32 is applied onto the surface of the developing roller 31 by the feed roller 33. The feed roller 33 is not always necessary as long as toner can be sufficiently supplied to the developing roller 31. The developing apparatus 30 according to the present embodiment adopts a contact development system as the development system. That is, the toner layer borne on the developing roller 31 comes into contact with the photosensitive drum 21 at a developing portion, i.e., developing area, at which the photosensitive drum 21 and the developing roller 31 face each other. A developing voltage is applied to the developing roller 31 from a developing high-voltage power supply. Under the developing voltage, the toner borne on the developing roller 31 is transferred from the developing roller 31 to the drum surface according to a potential distribution on the surface of the photosensitive drum 21, by which the electrostatic latent image is developed as a toner image. Further, the present embodiment adopts a reversal development system.
That is, after being charged in the charging step, toner adheres to the surface area of the photosensitive drum 21 whose electric charge has been attenuated by being exposed during the exposing step, by which a toner image is formed. According to the present embodiment, toner having a particle diameter of 6 μm and a normal charge polarity of negative polarity is used. One example of toner according to the present embodiment is polymerized toner generated by a polymerization method. Further, the toner according to the present embodiment does not contain magnetic components; it is a so-called nonmagnetic one-component developer in which toner is borne on the developing roller 31 mainly by intermolecular force or electrostatic force, i.e., image force. However, a one-component developer containing magnetic components can also be used. Further, the one-component developer may contain, in addition to toner particles, an additive, such as wax or silica microparticles, for adjusting the fluidity or chargeability of the toner. Further, a two-component developer composed of nonmagnetic toner and magnetic carrier can also be used as the developer. In a case where developer having magnetic properties is used, a cylindrical developing sleeve having a magnet arranged on an inner side thereof can be used, for example, as the developer bearing member. An agitation member 34 is arranged inside the developer container 32. The agitation member 34 is driven to rotate by a motor M1 (refer to FIG. 12) to thereby agitate toner inside the developer container 32 and convey toner toward the developing roller 31 and the feed roller 33. Further, the agitation member 34 serves to circulate, inside the developer container, toner that was not used for developing the image and was detached from the developing roller 31, to uniformize the toner inside the developer container. The agitation member 34 is not limited to a rotating type; for example, an agitation member of a swinging type can be adopted.
Moreover, another agitation member can also be provided in addition to the agitation member 34. Further, a developing blade 35 for regulating the amount of toner borne on the developing roller 31 is arranged at the opening portion of the developer container 32 where the developing roller 31 is arranged. Toner supplied to the surface of the developing roller 31 is formed into a uniform thin layer by passing through the portion opposed to the developing blade 35 along with the rotation of the developing roller 31, and is charged to negative polarity by frictional charging. The feeding portion 60 includes, as illustrated in FIGS. 1A and 1B, a front door 61 supported in an openable/closable manner on the printer body 100, a tray portion 62, a sheet supporting portion 63, a tray spring 64, and a pickup roller 65. The tray portion 62 constitutes a bottom plane of a recording material storage space that is exposed by opening the front door 61, and the sheet supporting portion 63 is supported liftably on the tray portion 62. The tray spring 64 urges the sheet supporting portion 63 upward and presses recording materials P supported on the sheet supporting portion 63 against the pickup roller 65. The front door 61 closes the recording material storage space when closed against the printer body 100, and supports the recording material P together with the tray portion 62 and the sheet supporting portion 63 when opened with respect to the printer body 100. The fixing portion 70 adopts a heat fixing system that performs an image fixing process by heating and melting the toner on the recording material. The fixing portion 70 includes a fixing film 71, a fixing heater such as a ceramic heater for heating the fixing film 71, a thermistor for measuring the temperature of the fixing heater, and a pressure roller 72 in pressure contact with the fixing film 71. Next, an image forming operation of the image forming apparatus 1 will be described.
In a state where an image forming command is entered to the image forming apparatus 1, an image forming process by the image forming unit 10 is started based on image information entered from an external computer connected to the image forming apparatus 1 or from the reading unit 200. The scanner unit 11 irradiates laser light toward the photosensitive drum 21 based on the entered image information. In this state, the photosensitive drum 21 has been charged in advance by the charging roller 22, and an electrostatic latent image is formed on the photosensitive drum 21 by having laser light irradiated thereto. Thereafter, the electrostatic latent image is developed by the developing roller 31, and a toner image is formed on the photosensitive drum 21. In parallel with the above-mentioned image forming process, the pickup roller 65 of the feeding portion 60 sends out the recording material P supported on the front door 61, the tray portion 62, and the sheet supporting portion 63. The recording material P is fed by the pickup roller 65 to a registration roller pair 15 and abutted against a nip of the registration roller pair 15, by which skewing of the sheet is corrected. Then, the registration roller pair 15 is driven at a timing matched with the transfer timing of the toner image, and conveys the recording material P toward a transfer nip formed by the transfer roller 12 and the photosensitive drum 21. A transfer voltage is applied from a transfer high-voltage power supply to the transfer roller 12 serving as a transfer unit, and the toner image borne on the photosensitive drum 21 is transferred to the recording material P conveyed by the registration roller pair 15. The recording material P to which the toner image has been transferred is conveyed to the fixing portion 70, where the toner image is heated and pressed while the recording material P passes through a nip portion between the fixing film 71 and the pressure roller 72 of the fixing portion 70.
Thereby, toner particles are melted and then solidified, by which the toner image is fixed to the recording material P. The recording material P having passed through the fixing portion 70 is discharged to the exterior of the image forming apparatus 1 by the sheet discharge roller pair 80 and supported on a sheet discharge tray 81 formed on an upper portion of the printer body 100. The sheet discharge tray 81 is inclined upward toward the downstream side in the discharging direction of the recording material, and the recording material discharged onto the sheet discharge tray 81 slides down on the sheet discharge tray 81, where its trailing edge is aligned by a regulating surface 84. The reading unit 200 includes, as illustrated in FIGS. 4A and 4B, a reading unit 201 including a reading portion (not shown) arranged in an interior thereof, and a pressure plate 202 supported in an openable/closable manner on the reading unit 201. A platen glass 203, on which a document is placed and through which light emitted from the reading portion is transmitted, is arranged on the upper surface of the reading unit 201. In a case where a document image is to be read by the reading unit 200, the user places the document on the platen glass 203 in a state where the pressure plate 202 is opened. Then, the user closes the pressure plate 202 to prevent misregistration of the document on the platen glass 203 and operates the operation unit 300 to enter a read command to the image forming apparatus 1. When the read operation is started, the reading portion in the reading unit 201 is moved in reciprocating motion in a sub-scanning direction, that is, the right and left directions in a state where the operation unit 300 of the image forming apparatus 1 is facing the front. The reading portion emits light toward the document from a light emitting portion and receives the light reflected off the document by a light receiving portion, and the received light is subjected to photoelectric conversion to read the document image.
In the following description, the front and rear directions, right and left directions, and up and down directions are defined based on a state where the operation unit 300 is facing the front. As illustrated in FIGS. 2B and 3, a first opening portion 101 that opens upward is formed on the upper portion of the printer body 100, and the first opening portion 101 is covered by a top cover 82. The top cover 82 serving as a loading tray is supported in an openable/closable manner on the printer body 100 about a pivot shaft 82c extending in the right and left directions, and the sheet discharge tray 81 serving as a supporting surface is formed on its upper plane. The top cover 82 is opened from the front side toward the depth side in a state where the reading unit 200 is opened from the printer body 100. Further, the reading unit 200 and the top cover 82 can be configured to be retained in both the opened state and the closed state by a retaining mechanism such as a hinge mechanism. For example, if a sheet jam occurs in a conveyance path CP through which the recording material fed by the pickup roller 65 passes, the user opens the reading unit 200 and the top cover 82. Then, the user accesses the process unit 20 through the first opening portion 101 exposed by opening the top cover 82 and pulls out the process unit 20 along a process guide 102. The process guide 102 guides, in sliding motion, a projection 21a (refer to FIG. 5A) arranged on an axial end of the photosensitive drum 21 of the process unit 20. In a state where the process unit 20 is drawn out to the exterior through the first opening portion 101, a space is formed through which the user can reach his/her hand into the conveyance path CP. The user reaches his/her hand into the interior of the printer body 100 through the first opening portion 101 and accesses the recording material jammed in the conveyance path CP to remove it.
Further, according to the present embodiment, as illustrated in FIGS. 1B and 4C, an opening/closing member 83 is provided in an openable/closable manner on the top cover 82. A second opening portion 82a serving as an opening portion that opens upward is formed on the sheet discharge tray 81 of the top cover 82. The opening/closing member 83 is movable between a closed position, in which a replenishing port 32a is covered so that a toner pack 40 cannot be attached to the developer container 32, and an open position, in which the replenishing port 32a is exposed so that the toner pack 40 can be attached to the developer container 32. The opening/closing member 83 functions as a part of the sheet discharge tray 81 in the closed position. The opening/closing member 83 and the second opening portion 82a are formed on a left side of the sheet discharge tray 81. Further, the opening/closing member 83 is supported in an openable/closable manner on the top cover 82 about a pivot shaft 83a extending in the front and rear directions, and the opening/closing member 83 is opened to the left by the user hooking his/her finger on a groove portion 82b provided on the top cover 82. Thus, the user can access the replenishing port 32a by simply opening the opening/closing member 83. The opening/closing member 83 is formed in an approximately L shape along the configuration of the top cover 82. The second opening portion 82a of the sheet discharge tray 81 is opened so that the replenishing port 32a for replenishing toner, formed on the upper portion of the developer container 32, is exposed, and by opening the opening/closing member 83, the user can access the replenishing port 32a without opening the top cover 82. The present embodiment adopts a system, i.e., a direct replenishment system, in which the user replenishes toner to the developing apparatus 30 from the toner pack 40 (refer to FIGS. 1A and 1B) filled with toner for replenishment in a state where the developing apparatus 30 is still attached to the image forming apparatus 1.
Therefore, when the residual amount of toner in the process unit 20 becomes small, there is no need to remove the process unit 20 from the printer body 100 and replace it with a new process unit, so that usability can be improved. Further, toner can be replenished to the developer container 32 at a lower cost compared to replacing the whole process unit 20. Costs can also be cut down by the direct replenishment system compared to a case where only the developing apparatus 30 of the process unit 20 is replaced, since there is no need to replace the various rollers and gears. The image forming apparatus 1 and the toner pack 40 constitute the image forming system.

Collection of Transfer Residual Toner

The present embodiment adopts a cleanerless configuration in which transfer residual toner remaining on the photosensitive drum 21 without being transferred to the recording material P is collected in the developing apparatus 30 and reused. Transfer residual toner is removed by the following process. Transfer residual toner contains both toner that is charged to positive polarity and toner that is charged to negative polarity but does not have sufficient charge. Transfer residual toner can be charged to negative polarity again by destaticizing the photosensitive drum 21 after transfer with the pre-exposure unit 23 and generating uniform charge with the charging roller 22. Transfer residual toner charged to negative polarity again at the charging portion reaches the developing portion along with the rotation of the photosensitive drum 21. Then, the surface area of the photosensitive drum 21 having passed through the charging portion is exposed by the scanner unit 11 while transfer residual toner is still attached to its surface, and an electrostatic latent image is formed thereon.
Now, the behavior of transfer residual toner having reached the developing portion will be described for an exposed portion and a non-exposed portion of the photosensitive drum 21, respectively. Transfer residual toner attached to a non-exposed portion of the photosensitive drum 21 is transferred to the developing roller 31 at the developing portion by the potential difference between the developing voltage and the potential, i.e., dark potential, of the non-exposed portion of the photosensitive drum 21, and the toner is collected in the developer container 32. This is because the developing voltage applied to the developing roller 31 is of relatively positive polarity with respect to the potential of the non-exposed portion, assuming that the normal charge polarity of toner is negative polarity. Toner collected in the developer container 32 is agitated by the agitation member 34 and dispersed in the toner contained in the developer container, and is then borne on the developing roller 31 to be reused in the developing process. Meanwhile, transfer residual toner attached to the exposed portion of the photosensitive drum 21 remains on the drum surface without being transferred from the photosensitive drum 21 to the developing roller 31 at the developing portion. This is because the developing voltage applied to the developing roller 31 is set to a potential of negative polarity greater than the potential, i.e., light potential, of the exposed portion, assuming that the normal charge polarity of toner is negative polarity. Transfer residual toner remaining on the drum surface is borne on the photosensitive drum 21 and moves to a transfer portion together with other toner transferred from the developing roller 31 to the exposed portion, and is transferred onto the recording material P at the transfer portion.
As described, the present embodiment adopts a cleanerless configuration in which transfer residual toner is collected in the developing apparatus 30 and reused, but the present embodiment can also adopt a conventional configuration in which transfer residual toner is collected using a cleaning blade abutted against the photosensitive drum 21. In that case, transfer residual toner collected by the cleaning blade is collected in a collecting container that is provided independently of the developing apparatus 30. However, by adopting the cleanerless configuration, there is no need to provide an installation space for the collecting container for collecting transfer residual toner, so that the image forming apparatus 1 can be further downsized, and printing costs can be reduced by reusing transfer residual toner.

Configuration of Developer Container and Toner Pack

Next, a configuration of the developer container 32 and the toner pack 40 serving as a replenishing container will be described. FIG. 5A is a perspective view of the developer container 32 and the toner pack 40, and FIG. 5B is a front view of the developer container 32 and the toner pack 40. FIG. 6A is a cross-sectional view taken at line 6A-6A of FIG. 5B, and FIG. 6B is a cross-sectional view taken at line 6B-6B of FIG. 5B. As illustrated in FIGS. 5A through 6B, the developer container 32 includes a conveyance chamber 36 storing the agitation member 34, and the conveyance chamber 36 serving as a storage portion for storing toner extends along the entire length in a longitudinal direction LD (right and left directions) of the developer container 32. Further, the conveyance chamber 36 is formed integrally with a frame that rotatably supports the developing roller 31 and the feed roller 33, and stores toner, i.e., developer, to be borne on the developing roller 31.
Further, the developer container32includes a first projected portion37that protrudes upward from a first end portion in the longitudinal direction of the conveyance chamber36and that serves as a projected portion communicated with the conveyance chamber36, and a second projected portion38that protrudes upward from a second end portion in the longitudinal direction of the conveyance chamber36. That is, the first projected portion37is provided on the first end portion of the developer container32in the rotational axis direction, i.e., longitudinal direction LD, of the developing roller31, and protrudes toward the sheet discharge tray81in an intersecting direction intersecting the rotational axis relative to the center portion of the developer container32. The second projected portion38is provided at a second end portion of the developer container32in the rotational axis direction of the developing roller31and protrudes toward the sheet discharge tray81in the intersecting direction relative to the center portion of the developer container32. In the present embodiment, the first projected portion37is formed on a left side of the developer container32, and the second projected portion38is formed on a right side of the developer container32. An attachment portion57to which the toner pack40is attached is provided on an upper end portion, i.e., leading edge portion, of the first projected portion37, and the replenishing port32aused to replenish the developer from the toner pack40to the conveyance chamber36is formed on the attachment portion57. The toner pack40can be attached to the attachment portion57in a state where the toner pack40is exposed to the exterior of the apparatus. The first projected portion37and the second projected portion38extend obliquely frontward in the apparatus and upward from the conveyance chamber36. 
That is, the first projected portion37and the second projected portion38protrude upward toward a downstream direction in a discharge direction of the sheet discharge roller pair80. Therefore, the replenishing port32aformed in the first projected portion37is arranged frontward of the image forming apparatus1, enabling easy toner replenishment operation to the developer container32. Especially according to the present embodiment, the reading unit200capable of opening and closing about a depth side of the apparatus is arranged above the opening/closing member83, and the replenishing port32ais therefore arranged on the front side of the apparatus to allow efficient use of the space between the replenishing port32aand the reading unit200. Therefore, workability for replenishing toner from the replenishing port32acan be improved. An upper portion of the first projected portion37and an upper portion of the second projected portion38are connected by a handle portion39serving as a connecting portion. A laser passage space SP, through which the laser L (refer toFIG.1A) emitted from the scanner unit11toward the photosensitive drum21can pass, is formed between the handle portion39and the conveyance chamber36. The handle portion39includes a grip portion39athat allows the user to hook his/her fingers to grip the handle portion39, and the grip portion39ais formed to protrude upward from a top panel of the handle portion39. The first projected portion37has a hollow interior, and the replenishing port32ais formed on the upper face. The replenishing port32ais configured to allow the toner pack40to be connected thereto.
By providing the first projected portion37having the replenishing port32aformed on a tip portion thereof arranged on one side in the longitudinal direction of the developer container32, the laser passage space SP through which the laser L emitted from the scanner unit11can pass is secured, and the image forming apparatus1can be downsized. Further, since the second projected portion38is provided on the other side in the longitudinal direction of the developer container32and the handle portion39connecting the first projected portion37and the second projected portion38is provided, the usability during removal of the process unit20from the printer body100is improved. The second projected portion38can be formed in a hollow shape similarly to the first projected portion37or can be formed as a solid body. The toner pack40is configured to be detachably attached to the attachment portion57of the first projected portion37, as illustrated inFIGS.5A through6B. Further, the toner pack40includes a shutter member41provided on the opening portion and capable of being opened and closed, and a plurality of (according to the present embodiment, three) projections42formed to correspond to a plurality of (according to the present embodiment, three) groove portions32bformed on the attachment portion57. When replenishing toner to the developer container32, the user positions the projections42of the toner pack40so that they pass through the groove portions32bof the attachment portion57to thereby connect the toner pack40to the attachment portion57. By rotating the toner pack40by 180 degrees in this state, the shutter member41of the toner pack40abuts against an abutment portion (not shown) of the attachment portion57and rotates with respect to the body of the toner pack40, and the shutter member41is thereby opened. Thereby, toner stored in the toner pack40flows out of the toner pack40and passes through the replenishing port32ato enter the hollow first projected portion37.
The shutter member41may be provided on the replenishing port32aside. The first projected portion37has an inclined plane37aprovided at a position opposed to an opening of the replenishing port32a, and the inclined plane37ais inclined downward toward the conveyance chamber36. Therefore, toner replenished through the replenishing port32ais guided along the inclined plane37ato the conveyance chamber36. Further, the agitation member34includes, as illustrated inFIGS.6A and6B, an agitation shaft34aextending in the longitudinal direction, and a blade portion34bextending outward in a radial direction from the agitation shaft34a. The blade portion34bis a sheet having flexibility. Toner replenished through the replenishing port32aarranged upstream in the conveyance direction of the agitation member34is conveyed toward the developing roller31and the feed roller33by the rotation of the agitation member34. The conveyance direction of the agitation member34is a direction parallel to a longitudinal direction LD (refer toFIG.5B) of the developer container32. The replenishing port32aand the first projected portion37are arranged at a first end portion in the longitudinal direction of the developer container32, and through repeated rotation of the agitation member34, toner spreads across the entire length of the developer container32. In the present embodiment, the agitation member34is composed of the agitation shaft34aand the blade portion34b, but a helical agitation shaft can also be adopted as a configuration for spreading toner across the entire length of the developer container32. According to the present embodiment, the toner pack40is formed of an easily deformable plastic bag, as illustrated inFIGS.7and8A, but the toner pack configuration is not limited thereto. For example, the toner pack can be formed of an approximately conical bottle container40B as illustrated inFIG.8Bor can be formed of a paper package40C as illustrated inFIG.8C. 
In any case, the toner pack can be formed of any material and can have any shape. A preferable method for discharging toner from the toner pack is to have the user squeeze the pack in the case of the toner pack40or the paper package40C, or to have the user flick the container to vibrate the container and discharge toner in the case of the bottle container40B. Further, a discharge mechanism can be provided in the bottle container40B to discharge toner from the bottle container40B. Even further, the discharge mechanism can be engaged with the printer body100to receive driving force from the printer body100. In any of the toner packs, the shutter member41can be omitted, or a slide-type shutter member can be adopted instead of the rotation-type shutter member41. The shutter member41can also adopt a configuration where the toner pack is broken by attaching the toner pack to the replenishing port32aor by rotating the toner pack in the attached state, or the shutter member41can adopt a detachable lid structure such as a seal. Method for Detecting Residual Amount of Toner Next, the method for detecting a residual amount of toner in the developer container32will be described with reference toFIGS.9A through11B.FIG.9Ais a cross-sectional view of a toner residual amount sensor51, andFIG.9Bis a cross-sectional schematic view of a9B-9B cross section ofFIG.9Aviewed from the developing roller31side toward the developer container32. Further,FIG.10is a circuit diagram illustrating one example of a circuit configuration of the toner residual amount sensor51. The toner residual amount sensor51for detecting residual amount information corresponding to the residual amount of toner in the conveyance chamber36is provided in the developer container32according to the present embodiment, as illustrated inFIGS.9A and9B. 
The toner residual amount sensor51is arranged on a side face of the developer container32that is opposite from the developing roller31, that is, on a side surface36a, and includes a light emitting portion51aand a light receiving portion51b. The light emitting portion51aand the light receiving portion51bare arranged in an aligned manner along the longitudinal direction LD of the process unit20. The light emitted from the light emitting portion51apasses through the interior of the conveyance chamber36and is received by the light receiving portion51b. That is, the light emitting portion51aand the light receiving portion51bform an optical path Q1in the interior of the conveyance chamber36. The optical path Q1extends in the longitudinal direction LD. The light emitting portion51aand the light receiving portion51bcan have their light emitting element and photodetecting element arranged on an interior of the conveyance chamber36, or they can have their light emitting element and photodetecting element arranged on an exterior of the conveyance chamber36with a light guide guiding the light into and out of the conveyance chamber36. Further, the light emitting portion51aand the light receiving portion51bare arranged at a center part of the conveyance chamber36in the longitudinal direction LD. More specifically, the light emitting portion51aand the light receiving portion51bare arranged within an area AR1corresponding to the laser passage space SP in the longitudinal direction LD. The light emitting portion51ais arranged between the replenishing port32aand a center31aof the developing roller31in the longitudinal direction LD. A broken line inFIG.9Billustrates a position corresponding to the center31aof the developing roller31. The center31aof the developing roller31is arranged between the light emitting portion51aand the light receiving portion51bin the longitudinal direction LD. 
The light emitting portion51aand the light receiving portion51bare arranged at the center part of the conveyance chamber36, so that the residual amount of toner of the conveyance chamber36can be detected favorably. In other words, developer, i.e., toner, may be distributed unevenly at the end portions in the longitudinal direction LD in the conveyance chamber36, but since uneven distribution of developer does not often occur at the center part of the conveyance chamber36, the actual residual amount of toner can be detected. FIG.10illustrates a case where an LED is used as the light emitting portion51aand a phototransistor that is turned on by receiving light from the LED is used as the light receiving portion51b, but the present invention is not limited thereto. For example, a halogen lamp or a fluorescent lamp can be adopted in the light emitting portion51a, and a photodiode or an avalanche photodiode can be adopted in the light receiving portion51b. A switch (not shown) is provided between the light emitting portion51aand a power supply voltage Vcc, and by turning the switch on, a voltage from the power supply voltage Vcc is applied to the light emitting portion51a, and the light emitting portion51awill be in a conduction state. Meanwhile, a switch (not shown) is also provided between the light receiving portion51band the power supply voltage Vcc, and by turning the switch on, the light receiving portion51bconducts a current corresponding to the detected light amount. The power supply voltage Vcc and a current limiting resistor R1are connected to the light emitting portion51a, and the light emitting portion51aemits light based on a current determined by the current limiting resistor R1. The light emitted from the light emitting portion51apasses through the optical path Q1as illustrated inFIG.9Band is received by the light receiving portion51b.
The power supply voltage Vcc is connected to a collector terminal of the light receiving portion51b, and a detecting resistor R2is connected to an emitter terminal. The light receiving portion51bserving as a phototransistor receives the light emitted from the light emitting portion51aand outputs a signal, i.e., current, corresponding to the received light amount. The signal is converted into a voltage V1by the detecting resistor R2and input to an A/D conversion unit95of a control unit90(refer toFIG.12). In other words, the light receiving portion51bvaries an output value in accordance with an amount of toner, i.e., developer, stored in the conveyance chamber36. The control unit90, i.e., CPU91, determines whether light from the light emitting portion51ahas been received by the light receiving portion51bbased on an entered voltage level. The control unit90, i.e., CPU91, computes an amount of toner, i.e., amount of developer, within the developer container32based on a length of time during which the light receiving portion51bdetects light and the received light intensity in a state where toner within the developer container32has been agitated by the agitation member34for a predetermined period of time. That is, a ROM93stores in advance a table for outputting the residual amount of toner based on the light reception time and the light intensity while conveying toner by the agitation member34, and the control unit90predicts/computes the residual amount of toner based on the input to the A/D conversion unit95and the table. More specifically, as illustrated inFIG.9A, the optical path Q1of the toner residual amount sensor51is set to overlap with a rotation trajectory T of the agitation member34when viewed in the axial direction of the rotation shaft of the agitation member34.
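As a rough numerical illustration of the circuit just described, the LED drive current set by the current limiting resistor R1and the detection voltage V1produced across the detecting resistor R2can be computed as follows. All component values and the decision threshold are assumptions made for the sketch, not values from the embodiment.

```python
# Hedged sketch of the sensor circuit arithmetic: component values
# (Vcc, R1, R2, LED forward voltage, threshold) are assumptions for
# illustration only and do not come from the embodiment.

VCC = 5.0            # power supply voltage Vcc (assumed)
R1 = 220.0           # current limiting resistor R1 for the LED (assumed)
R2 = 10_000.0        # detecting resistor R2 on the emitter side (assumed)
V_LED_FORWARD = 1.8  # LED forward voltage drop (assumed)

def led_drive_current_a() -> float:
    """LED current set by the current limiting resistor: I = (Vcc - Vf) / R1."""
    return (VCC - V_LED_FORWARD) / R1

def detection_voltage_v(photo_current_a: float) -> float:
    """Phototransistor emitter current is converted into the voltage V1
    across the detecting resistor R2 (V1 = I * R2), then A/D converted."""
    return photo_current_a * R2

def light_received(v1: float, threshold_v: float = 2.0) -> bool:
    """The control unit decides from the entered voltage level whether
    light from the light emitting portion was received."""
    return v1 > threshold_v
```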
In other words, the light emitted from the light emitting portion51aof the toner residual amount sensor51passes through the interior of the conveyance chamber36within the rotation trajectory of the agitation member34when viewed in the axial direction of the agitation member34. The time during which the optical path Q1has been blocked by toner conveyed by the agitation member34while the agitation member34rotates once, that is, the time during which the light receiving portion51bfails to detect light from the light emitting portion51a, varies depending on the residual amount of toner. Further, the light intensity of light received by the light receiving portion51balso varies depending on the residual amount of toner. That is, when the residual amount of toner is large, the optical path Q1tends to be blocked by toner, so that the time during which the light receiving portion51breceives light becomes short and the light intensity of the light received by the light receiving portion51bbecomes weak. In contrast, when the residual amount of toner is small, the time during which the light receiving portion51breceives light becomes long and the light intensity of the light received by the light receiving portion51bbecomes strong. Accordingly, the control unit90can determine the level of the residual amount of toner as follows based on the light reception time and the received light intensity of the light receiving portion51b. For example, as illustrated inFIG.11A, if there is only a very small amount of toner in the conveyance chamber36of the developer container32, the time during which the light receiving portion51breceives light becomes long and the light intensity of light received by the light receiving portion51bbecomes strong, so that it is determined that there is only a small amount of residual toner.
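A minimal sketch of this determination rule follows, assuming normalized inputs and illustrative thresholds; the embodiment itself consults a table stored in the ROM93rather than a formula like this one.

```python
# Illustrative sketch of the toner-level determination from the light
# reception time and received light intensity. The embodiment uses a
# ROM table; the weighted score and thresholds here are assumptions.

def toner_level(reception_ratio: float, mean_intensity: float) -> str:
    """reception_ratio: fraction of one agitation cycle during which the
    optical path was unblocked (light received); mean_intensity: received
    light intensity normalized to [0, 1]. Long reception and strong
    intensity imply little toner; short and weak imply much toner."""
    score = 0.5 * reception_ratio + 0.5 * mean_intensity
    if score > 0.85:
        return "nearly_empty"
    if score > 0.6:
        return "low"
    if score > 0.3:
        return "middle"
    return "full"
```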
Meanwhile, as illustrated inFIG.11B, if the amount of toner in the conveyance chamber36of the developer container32is large, the time during which the light receiving portion51breceives light becomes short and the light intensity of light received by the light receiving portion51bbecomes weak, so that it is determined that there is a large amount of residual toner. The method for detecting/estimating the residual amount of toner is not limited to the method for detecting the residual amount of toner using light as described with reference toFIG.9, and various types of known methods for detecting/estimating the residual amount of toner can be adopted. For example, two or more metal plates or conductive resin sheets extending in the longitudinal direction of the developing roller are arranged on an inner wall of the developer container32serving as a frame, and electrostatic capacity between the two metal plates or two conductive resin sheets is measured to detect/estimate the residual amount of toner. Alternatively, a load cell can be arranged in a manner supporting the developing apparatus30from below, and the CPU91subtracts a weight of the developing apparatus30when the toner is empty from a weight measured by the load cell to compute the residual amount of toner. Control System of Image Forming Apparatus FIG.12is a block diagram illustrating a control system of the image forming apparatus1. The control unit90serving as a controller of the image forming apparatus1includes a CPU91serving as an arithmetic unit, a RAM92used as a work area of the CPU91, and a ROM93storing various programs. Further, the control unit90includes an I/O interface94serving as an input/output port that is connected to an external apparatus, and an A/D conversion unit95that converts analog signals into digital signals.
The toner residual amount sensor51, a mounting sensor53and an opening/closing sensor54are connected to an input side of the control unit90, and the mounting sensor53detects that the toner pack40has been mounted to the replenishing port32aof the developer container32. For example, the mounting sensor53is composed of a pressure sensitive switch provided on the replenishing port32athat outputs a detection signal when pressed by the projection42of the toner pack40. Further, the opening/closing sensor54detects whether the opening/closing member83has been opened with respect to the top cover82. The opening/closing sensor54is composed of a pressure sensitive switch or a magnetic sensor. Further, the operation unit300, the image forming unit10, and a toner residual amount panel400serving as a notification unit for notifying information related to the residual amount of toner are connected to the control unit90, and the operation unit300includes a display unit301capable of displaying various setting screens and physical keys. The display unit301is composed, for example, of a liquid crystal panel. The image forming unit10includes the motor M1serving as a driving source for driving the photosensitive drum21, the developing roller31, the feed roller33, and the agitation member34. Further, it is possible to configure the photosensitive drum21, the developing roller31and the feed roller33, and the agitation member34to be driven by different motors. The toner residual amount panel400is provided on a front right side of the casing of the printer body100, that is, on an opposite side from the operation unit300arranged on the left side, as illustrated inFIG.1BandFIGS.13A through13D, and displays information regarding the residual amount of toner inside the developer container32.
According to the present embodiment, the toner residual amount panel400is a panel member composed of a plurality of (three, according to the present embodiment) scales arranged in an aligned manner in the vertical direction, and the scales correspond to a low level, a middle (mid) level, and a full level. As illustrated inFIG.13A, a state where only the lower scale is blinking indicates that the residual amount of toner in the developer container32is at a nearly empty level. As illustrated inFIG.13B, a state where only the lower scale is lit indicates that the residual amount of toner in the developer container32is at a low level. As illustrated inFIG.13C, a state where the lower and middle scales are lit and the upper scale is not lit indicates that the residual amount of toner in the developer container32is at a middle level. As illustrated inFIG.13D, a state where all the three scales are lit indicates that the residual amount of toner in the developer container32is at a full level. The nearly empty level indicates that the residual amount of toner in the developer container32will soon run out and image formation cannot be performed properly. The low level indicates that the residual amount of toner is greater than the nearly empty level and smaller than the middle level. The middle level indicates that the residual amount of toner is greater than the low level and smaller than the full level. Instead of being composed of a liquid crystal panel, the toner residual amount panel400can be composed of a light source such as an LED or an incandescent lamp and a diffusion lens. Alternatively, a configuration can be adopted where the residual amount of toner is displayed by scales as described according to the present embodiment on a display of the operation unit300, without providing the toner residual amount panel400. 
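The four display states just described can be summarized in a small sketch. The state encodings (blinking/lit/off per scale) mirror the description of FIGS.13A through13D, while the function name and the dictionary representation are illustrative assumptions.

```python
# Sketch of the four display states of the toner residual amount panel
# using three scales (lower, middle, upper); the encoding is illustrative.

PANEL_STATES = {
    "nearly_empty": {"lower": "blinking", "middle": "off", "upper": "off"},
    "low":          {"lower": "lit",      "middle": "off", "upper": "off"},
    "middle":       {"lower": "lit",      "middle": "lit", "upper": "off"},
    "full":         {"lower": "lit",      "middle": "lit", "upper": "lit"},
}

def panel_state(level: str) -> dict:
    """Return the lit/blinking/off state of each scale for a toner level."""
    return PANEL_STATES[level]
```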
Further, when the residual amount of toner in the developer container32is at a low level, a replenishment notification prompting toner replenishment can be displayed on the operation unit300. Moreover, if toner has run out, a similar replenishment notification can be displayed on the operation unit300. Further, according to the present embodiment, a configuration has been described where four states are displayed using three scales, but the number of scales is not limited thereto, and the number can be set arbitrarily according to the configuration of the image forming apparatus. Further, a configuration can also be adopted where the residual amount of toner is displayed successively by a percentage indication or a gauge display. Further, the notification of the residual amount of toner to the user can be performed by audio through a speaker. Further, in the example illustrated inFIGS.13A through13D, the toner residual amount panel400is illustrated as a notification unit displaying the residual amount of toner, but the present invention is not limited thereto. For example, the display ofFIG.13Bcan be a display indicating that toner replenishment is required, the display ofFIG.13Ccan be a display indicating that toner replenishment is not required, and the display ofFIG.13Dcan be a display indicating that sufficient toner replenishment has been performed. As described, according to the present embodiment, the light emitting portion51aand the light receiving portion51bof the toner residual amount sensor51are provided on the process unit20that includes the conveyance chamber36for storing toner. Therefore, the relative position of the optical path Q1in the conveyance chamber36is fixed, so that the residual amount of toner can be detected stably regardless of the positional accuracy of the process unit20on the printer body100.
Further, since the relative position of the optical path Q1in the conveyance chamber36is fixed, there is no need to take into consideration the misregistration of the conveyance chamber36and the optical path Q1in advance when designing the toner residual amount sensor51and the developer container32. Thus, there is no need to select an optical element with a margin in light amount for the toner residual amount sensor51, and the freedom of design of the toner residual amount sensor51and the developer container32can be improved and costs can be cut down. Further, the light emitting portion51aand the light receiving portion51baccording to the present embodiment are arranged in an aligned manner along the longitudinal direction LD of the process unit20, and are arranged on a same side, i.e., front side, with respect to the conveyance chamber36when viewed in the longitudinal direction LD. Therefore, the light emitting portion51aand the light receiving portion51bcan be arranged in a compact manner and the power supply configuration for supplying power to the light emitting portion51aand the light receiving portion51bcan also be arranged in a small space. Therefore, the process unit20can be downsized. The present embodiment adopts a system, i.e., direct replenishment system, where toner is replenished directly from the toner pack40to the developer container32through the replenishing port32a, so that there is no need to remove the process unit20when replenishing toner to the developer container32. Further, since the replenishing port32aof the developer container32is formed on the upper plane of the first projected portion37that protrudes upward from the first end portion in the longitudinal direction of the conveyance chamber36, it is arranged close to the second opening portion82a. Therefore, the user can perform the toner replenishment operation to the developer container32easily through the replenishing port32a.
Further, there is no need to replace components such as the developing roller31or the feed roller33when replenishing toner to the developer container32, so that costs can be cut down. Further, since the laser passage space SP is formed in a manner surrounded by the first projected portion37, the second projected portion38, the handle portion39, and the conveyance chamber36, the developer container32and the scanner unit11can be arranged close to one another, and the image forming apparatus1can be downsized. Further, since the agitation member34is driven when the toner pack40is mounted to the replenishing port32aand toner replenishment operation is performed, a toner packing phenomenon can be reduced even if the replenishing port32ais arranged on the first end side in the longitudinal direction of the developer container32. Thereby, image defects can be reduced and detection accuracy of residual amount information of toner using the light emitting portion51aand the light receiving portion51bcan be improved. Second Embodiment Next, a second embodiment of the present invention will be described. The configuration of the developing apparatus30of the first embodiment has been changed according to the second embodiment. Configurations similar to the first embodiment are either not shown or denoted with the same reference numbers and described. A developing apparatus330according to the present embodiment will be described with reference toFIGS.14through17. 
The developing apparatus330constitutes a part of the process unit20(refer toFIG.3).FIG.14is a perspective view illustrating the developing apparatus330.FIG.15Ais a perspective view illustrating a state where a circuit board700and a circuit board retaining member710are assembled to a developer container lid321.FIG.15Bis a perspective view illustrating the circuit board700and the circuit board retaining member710, andFIG.15Cis another perspective view illustrating the circuit board700and the circuit board retaining member710.FIG.16Ais a cross-sectional view that passes through a light emitting portion510aof the developing apparatus330, andFIG.16Bis a cross-sectional view taken at line16B-16B ofFIG.16A.FIG.17Ais a cross-sectional view illustrating a developer container320in a state where the amount of residual toner is small, andFIG.17Bis a cross-sectional view illustrating the developer container320in a state where the amount of residual toner is great. As illustrated inFIG.14, the developing apparatus330includes the developer container320and the developer container lid321, and the developer container320and the developer container lid321are connected by a connecting portion322. The developer container320, the developer container lid321and the connecting portion322constitute a frame340of the developing apparatus330. The frame340is provided with the conveyance chamber36(refer toFIG.16A) for storing developer including toner (hereinafter referred to as toner). The developing roller31is supported on the frame340. The developer container lid321constituting a part of the frame340includes circuit board positioning members321aand321band circuit board fixing parts321cand321d, wherein an optical path guide610is provided at a position between the circuit board fixing parts321cand321dof the developer container lid321. The optical path guide610includes a first light guide portion610aand a second light guide portion610b. 
The first light guide portion610aextends toward the light emitting portion510a(described later), and the second light guide portion610bextends toward the light receiving portion510b(described later). The first light guide portion610aguides light emitted from the light emitting portion510ainto the conveyance chamber36of the developer container320. The second light guide portion610bguides the light having passed through the first light guide portion610aand the conveyance chamber36to the light receiving portion510b. The circuit board positioning members321aand321bserving as positioning portions are arranged on outer sides of the circuit board fixing parts321cand321din the longitudinal direction LD of the developer container320, and the members321aand321bare boss-shaped and protrude in a direction away from the developer container320. Further, the shape of the circuit board positioning members321aand321bis not limited to the boss shape, and it can be any arbitrary shape. Further, the longitudinal direction LD of the developer container320is the same as the longitudinal direction LD of the process unit20(refer toFIG.5A). Fixing tools such as screws can be screw-engaged to the circuit board fixing parts321cand321d. According to the present embodiment, as illustrated inFIG.15A, the circuit board700and the circuit board retaining member710are assembled to the developer container lid321. The circuit board retaining member710is assembled to the developer container lid321in a state sandwiched between the developer container lid321and the circuit board700. That is, the circuit board retaining member710is disposed between the developer container lid321and the circuit board700. In this state, the circuit board retaining member710covers a surface510c, on which the light emitting portion510aand the light receiving portion510bare installed, of the circuit board700.
Thereby, it is possible to suppress adhesion of foreign substances such as dust or toner and to prevent service personnel from touching the surface510c. The circuit board700is disposed at a face opposed to the circuit board retaining member710, as illustrated inFIG.15B, and includes the light emitting portion510aand the light receiving portion510bfor detecting the residual amount of toner in the conveyance chamber36. According to the present embodiment, an LED is used for the light emitting portion510aand a phototransistor that is turned on by receiving light from the LED is used for the light receiving portion510b, but the present invention is not limited thereto. For example, a halogen lamp or a fluorescent lamp can be adopted as the light emitting portion510aand a photodiode or an avalanche photodiode can be adopted as the light receiving portion510b. Further, the circuit board700includes positioning holes700aand700bto which the circuit board positioning members321aand321bare inserted and fixed and circuit board fixing holes700cand700dthrough which screws engageable with the circuit board fixing parts321cand321dcan pass. Further, the circuit board retaining member710similarly includes positioning holes710aand710bto which the circuit board positioning members321aand321bare inserted and fixed and circuit board fixing holes710cand710dthrough which screws engageable with the circuit board fixing parts321cand321dcan pass. Furthermore, the circuit board retaining member710includes a first penetrating hole portion711ain which the first light guide portion610aof the optical path guide610is inserted and a second penetrating hole portion711bin which the second light guide portion610bof the optical path guide610is inserted.
The circuit board retaining member710includes a first opposing surface710hopposing the developer container lid321, and a first cylindrical portion711cand a second cylindrical portion711dconfigured to extend toward the developer container lid321from the first opposing surface710h, respectively. The first penetrating hole portion711aand the second penetrating hole portion711bare defined by the first cylindrical portion711cand the second cylindrical portion711d. The circuit board retaining member710serving as an attachment member or a cover is brought into contact with the circuit board700. Light shielding plates710eand710fserving as shielding portions are provided on a side, of the circuit board retaining member710, facing the circuit board700. The light shielding plates710eand710fare arranged between the light emitting portion510aand the light receiving portion510band are arranged close to the circuit board700in a state where the circuit board700and the circuit board retaining member710are assembled to the developer container lid321. As illustrated inFIGS.14through16A, the circuit board retaining member710is positioned on the developer container lid321by having the circuit board positioning members321aand321bof the developer container lid321pass through and engage with the positioning holes710aand710b. Further, the circuit board700is positioned on the developer container lid321by having the circuit board positioning members321aand321bof the developer container lid321pass through and engage with the positioning holes700aand700b. As described, by using the circuit board positioning members321aand321bcommonly for positioning the circuit board retaining member710and the circuit board700, the developer container lid321, the circuit board retaining member710and the circuit board700can be positioned with even higher accuracy.
Further, in a state where the circuit board retaining member710and the circuit board700are positioned on the developer container lid321, screws are inserted to the circuit board fixing holes700c,700d,710c, and710d, and the screws are engaged with the circuit board fixing parts321cand321dof the developer container lid321. Thereby, the circuit board retaining member710and the circuit board700are commonly engaged by a screw with the developer container lid321, and the circuit board retaining member710and the circuit board700are fixed to the developer container lid321. As illustrated inFIGS.14through16B, in a state where the circuit board retaining member710and the circuit board700are assembled to the developer container lid321, the first light guide portion610aof the optical path guide610is inserted to the first penetrating hole portion711aof the circuit board retaining member710. Thereby, the first light guide portion610ais positioned at a position close to the light emitting portion510aof the circuit board700. Similarly, the second light guide portion610bof the optical path guide610is inserted to the second penetrating hole portion711bof the circuit board retaining member710. Thereby, the second light guide portion610bis positioned at a position close to the light receiving portion510bof the circuit board700. The first penetrating hole portion711acovers a side surface610a1of the first light guide portion610ainserted into the first penetrating hole portion711abetween the developer container lid321and the light emitting portion510a. Similarly, the second penetrating hole portion711bcovers a side surface610b1of the second light guide portion610binserted into the second penetrating hole portion711bbetween the developer container lid321and the light receiving portion510b. 
Thereby, it is possible to suppress light other than the light emitted from the light emitting portion510afrom entering the first light guide portion610aor the second light guide portion610b, so that detection accuracy of the residual amount of toner can be improved. As described, since the circuit board retaining member710and the circuit board700are positioned highly accurately on the developer container lid321, the light emitted from the light emitting portion510ais guided reliably by the first light guide portion610a. Then, the light guided by the first light guide portion610ato the conveyance chamber36in the interior of the developer container320is emitted from the first light guide portion610ain the longitudinal direction LD. The light traveling through an optical path Q2in the interior of the conveyance chamber36is guided to an exterior of the developer container320by the second light guide portion610b. Since the second light guide portion610bis arranged close to the light receiving portion510b, the light exiting the second light guide portion610bis received reliably by the light receiving portion510b. Thereby, the detection accuracy of the residual amount of toner by the light emitting portion510aand the light receiving portion510bcan be improved. The light shielding plates710eand710fare arranged at the position between the light emitting portion510aand the light receiving portion510band near the circuit board700. As shown inFIG.15C, the circuit board retaining member710has a second opposing surface710gopposing the circuit board700. The light shielding plates710eand710fare ribs erected from the second opposing surface710gso as to approach the circuit board700. Therefore, the light emitted from the light emitting portion510aand directed toward the light receiving portion510bwithout passing through the first light guide portion610aand the second light guide portion610bis shielded by the light shielding plates710eand710f.
Especially according to the present embodiment, an LED element is used for the light emitting portion510a, which has a weaker directivity compared to a shell-type LED, so that light directly traveling from the light emitting portion510ato the light receiving portion510bshould desirably be shielded. Therefore, false detection caused by light not passing through the optical path Q2being received by the light receiving portion510bcan be suppressed, and detection accuracy of the residual amount of toner by the light emitting portion510aand the light receiving portion510bcan be improved. Now, the arrangement of the light emitting portion510aand the light receiving portion510bwill be described in further detail. The light emitting portion510aand the light receiving portion510bare arranged on a side of the side surface36a, of the frame340, that is opposite from a side of the developing roller31in a direction perpendicular to a longitudinal direction of the developing roller31, as illustrated inFIGS.16A and16B. Further, the light emitting portion510aand the light receiving portion510bare arranged at the center part of the conveyance chamber36in the longitudinal direction LD. In further detail, the light emitting portion510aand the light receiving portion510bare arranged within the area AR1that corresponds to the laser passage space SP in the longitudinal direction LD (refer toFIG.9B). The light emitting portion510ais arranged between the replenishing port32aand the center31aof the developing roller31in the longitudinal direction LD. A broken line ofFIG.16Billustrates a position corresponding to the center31aof the developing roller31. The center31aof the developing roller31is arranged between the light emitting portion510aand the light receiving portion510bin the longitudinal direction LD. 
Since the light emitting portion510aand the light receiving portion510bare arranged at the center part of the conveyance chamber36, the residual amount of toner in the conveyance chamber36can be detected favorably. In other words, developer, i.e., toner, may be distributed unevenly at the end portion of the conveyance chamber36in the longitudinal direction LD, but since uneven distribution of developer is less likely to occur at the center part of the conveyance chamber36, the present arrangement enables detection of the actual residual amount of toner. The method for detecting the residual amount of toner can be similar to the method disclosed in the first embodiment with reference toFIGS.10through13. As illustrated in FIG.16A, the optical path Q2is set so as to overlap with the rotation trajectory T of the agitation member34when viewed in the axial direction of the rotation shaft of the agitation member34. In other words, the light emitted from the light emitting portion510apasses through the interior of the conveyance chamber36within the rotation trajectory of the agitation member34when viewed in the axial direction of the agitation member34. Then, the time during which the optical path Q2is blocked by toner conveyed by the agitation member34, that is, the time during which the light receiving portion510bdoes not detect light from the light emitting portion510a, while the agitation member34rotates once varies depending on the residual amount of toner. Further, the light intensity of light received by the light receiving portion510balso varies depending on the residual amount of toner. Thereby, the control unit90can determine the level of residual amount of toner.
For example, as illustrated inFIG.17A, in a state where the amount of toner within the conveyance chamber36of the developer container320is very small, the time during which the light receiving portion510breceives light becomes long and the light intensity of light received by the light receiving portion510bbecomes strong, so it is determined that the residual amount of toner is small. Meanwhile, as illustrated inFIG.17B, in a state where the amount of toner within the conveyance chamber36of the developer container320is large, the time during which the light receiving portion510breceives light becomes short, and the light intensity of light received by the light receiving portion510bbecomes weak, so it is determined that the residual amount of toner is large. The display of the level of residual amount of toner using the toner residual amount panel400is carried out in the manner described with reference toFIGS.13A through13D. As described, according to the present embodiment, the circuit board retaining member710and the circuit board700are attached to the process unit20including the developer container320, or the conveyance chamber36, storing toner, and the circuit board700is provided with the light emitting portion510aand the light receiving portion510b. Therefore, the relative position of the optical path Q2in the conveyance chamber36becomes fixed, so that the residual amount of toner can be detected stably regardless of the positional accuracy of the process unit20on the printer body100. Further, since the relative position of the optical path Q2in the conveyance chamber36becomes fixed, there is no need to take into consideration the misregistration of the conveyance chamber36and the optical path Q2in advance when designing the light emitting portion510a, the light receiving portion510band the developer container320. 
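The determination described above — a longer light-reception time (and stronger received intensity) over one rotation of the agitation member indicating a smaller residual amount of toner — can be sketched as a simple classifier. The function name, sampling scheme, and all threshold values below are illustrative assumptions, not values taken from the embodiment.

```python
def estimate_toner_level(samples, on_threshold=0.5):
    """Estimate a coarse residual toner level from photodetector
    readings taken at the light receiving portion over one full
    rotation of the agitation member.

    samples: sequence of received light intensities (one rotation).
    on_threshold: intensity above which the optical path is treated
    as unblocked (illustrative value, not from the embodiment).
    Returns 'low', 'medium', or 'high' residual toner.
    """
    if not samples:
        raise ValueError("need at least one sample per rotation")
    # Fraction of the rotation during which light reached the receiver.
    received = sum(1 for s in samples if s >= on_threshold) / len(samples)
    # More toner blocks the optical path for longer, so a small
    # received fraction means a large residual amount. The cut points
    # below are purely illustrative.
    if received > 0.7:
        return "low"
    if received > 0.3:
        return "medium"
    return "high"
```

For instance, a rotation in which the receiver sees light for 90% of the samples would be classified as a low residual amount, matching the case where the optical path Q2is rarely blocked.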
Therefore, there is no need to select the optical element by providing a margin in the light amount of the light emitting portion510a, so that the freedom of design of the light emitting portion510a, the light receiving portion510band the developer container320can be improved and costs can be cut down. Furthermore, since the circuit board positioning members321aand321bprovided on the developer container lid321can be used in common for determining the positions of the circuit board retaining member710and the circuit board700, the positioning of the developer container lid321, the circuit board retaining member710and the circuit board700can be performed with even higher accuracy. Since the first light guide portion610a, the second light guide portion610b, and the circuit board retaining member710including the light shielding plates710eand710fare positioned between the developer container lid321and the circuit board700, it becomes possible to reduce the possibility of light that has not passed through the optical path Q2being received by the light receiving portion510b. Therefore, false detection by the light receiving portion510bcan be suppressed, and the detection accuracy of the residual amount of toner by the light emitting portion510aand the light receiving portion510bcan be improved. Further, the light emitting portion510aand the light receiving portion510baccording to the present embodiment are arranged in an aligned manner along the longitudinal direction LD of the process unit20, and arranged on the same side, i.e., front side, of the conveyance chamber36when viewed in the longitudinal direction LD. Therefore, the light emitting portion510aand the light receiving portion510bcan be arranged in a compact manner.
Further, since the light emitting portion510aand the light receiving portion510bare disposed collectively on the circuit board700, power can be supplied easily to the light emitting portion510aand the light receiving portion510band communication through signals with the light emitting portion510aand the light receiving portion510bcan be performed easily. Therefore, the process unit20can be downsized. A connector is provided on the circuit board700, and the control unit90provided on the printer body100and the connector are connected via a cable. In a case where the process unit20is attached to or detached from the printer body100, the cable and the connector are attached or detached. Other Embodiments According to the first embodiment described above, the toner residual amount sensor51including the light emitting portion51aand the light receiving portion51bwas provided on the developing apparatus30, but the light emitting portion51aand the light receiving portion51bcan be provided on the circuit board instead. According to the second embodiment described above, the circuit board retaining member710was provided between the developer container lid321and the circuit board700, but the present invention is not limited thereto. That is, the circuit board700can be attached directly to the developer container lid321without providing the circuit board retaining member710. According further to the second embodiment, the circuit board positioning members321aand321bwere provided on the developer container lid321and the circuit board positioning members321aand321bwere engaged to positioning holes710aand710bof the circuit board retaining member710and positioning holes700aand700bof the circuit board700, but the present invention is not limited thereto.
For example, it is possible to have boss-shaped positioning portions protrude through both side faces of the circuit board retaining member710which are engaged with holes provided on each of the developer container lid321and the circuit board700. For example, it is possible to provide boss-shaped positioning portions on the circuit board700and have holes formed on the developer container lid321and the circuit board retaining member710respectively engage with the positioning portions. In any case, the method for positioning the circuit board700on the process unit20, the position of the circuit board700, and the method or position of fixture are not limited. Further, in all the aforementioned embodiments, the light emitting portion and the light receiving portion are arranged in an aligned manner along the longitudinal direction LD, but the arrangement is not limited thereto. The light emitting portion and the light receiving portion can be arranged at any position as long as they are positioned on the side face of the conveyance chamber36opposite from the developing roller31. In all the aforementioned embodiments, the optical paths Q1and Q2are arranged at a position overlapped with the rotation trajectory T of the agitation member34when viewed in the axial direction of the agitation member34, but the present invention is not limited thereto. That is, the optical paths Q1and Q2can be arranged so as not to overlap with the rotation trajectory T of the agitation member34. In all the aforementioned embodiments, the reading unit200was provided above the printer body, but the present invention is not limited thereto. That is, the image forming apparatus can be a printer without a reading unit. Further, the reading unit may be a reading unit equipped with an ADF (Auto Document Feeder) for feeding documents. 
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. 
This application claims the benefit of Japanese Patent Application No. 2020-218238, filed Dec. 28, 2020, which is hereby incorporated by reference herein in its entirety. | 74,339 |
11860557 | DETAILED DESCRIPTION FIG.1shows a print apparatus100which may comprise, for example, at least components of a Liquid Electro Photographic (LEP) print apparatus. In a Liquid Electro Photographic (LEP) print apparatus, a pattern to be printed may first be formed as an electrostatic pattern of charges on an image forming surface (which may be curved around a cylinder). Print agent is attracted to the image forming surface according to the charge pattern to form an image. In some examples, during a printing operation, an image may be transferred from an image forming surface to an intermediate transfer member104. In some examples, the intermediate transfer member104may comprise a ‘blanket’, for example formed of rubber. In some examples, the image is transferred under a voltage. In some examples, the image may be at least partially dried or cured while on the intermediate transfer member104. In some examples, the image may be heated while on the intermediate transfer member104. In some examples, a number of ‘separations’, i.e. images formed of different (e.g. different color) print agents, may be built up on the intermediate transfer member104before being further transferred to a substrate. In other examples, separations may be transferred from the intermediate transfer member104to a substrate individually. When printing, the image on the intermediate transfer member104may then be transferred to a substrate. This transfer may be effected by urging the substrate against the intermediate transfer member104. After a number of transfers have taken place from the intermediate transfer member104to a substrate, contaminants such as print agent residue, dust, machine oil and the like may build up on the surface of the intermediate transfer member104which can reduce the quality of subsequent prints. 
The apparatus100ofFIG.1includes an intermediate transfer member104which is engageable with a photoconductive surface102(which is shown in dotted outline for context, but which may be provided separately) to receive thermoplastic print agent106from the photoconductive surface102. The apparatus100also includes a rotatably mounted endless cleaning surface108, which can be, for example and as shown inFIG.1, a roller. The endless cleaning surface108is engageable with the intermediate transfer member104to receive the thermoplastic print agent106from the intermediate transfer member104. In this way, in use, a layer of thermoplastic print agent106, which may be, for example, thermoplastic ink, can be applied to the endless cleaning surface108. In some examples, the endless cleaning surface108may, in use of the apparatus100, receive a plurality of layers of print agent106. In some examples, the print apparatus100also includes a heater110to apply heat to the endless cleaning surface108. The heater110may heat the endless cleaning surface108to a temperature such that the thermoplastic print agent106acts as an adhesive. In other words, when heated, the layer of thermoplastic print agent106becomes ‘sticky’ and has a surface energy sufficient to remove residue from the intermediate transfer member104to the layer of print agent106on the endless cleaning surface108. In some examples, the heater110may heat the endless cleaning surface108to a temperature of between 70° C. and 150° C., or between 70° C. and 90° C. In some examples, the heater110may heat the endless cleaning surface108to a temperature of around 80° C. In some examples, the temperature of the endless cleaning surface108is intended to be substantially the same as the temperature of the intermediate transfer member104(which may be in the range of 70 to 90° C., and in some examples is around 80° C.).
This may reduce energy consumption and assist in maintaining a stable working temperature for the intermediate transfer member104. However, in some examples, the temperature of the endless cleaning surface108is intended to be higher than the temperature of the intermediate transfer member104. In some examples, the temperature of the intermediate transfer member104may be controlled, or allowed to reduce, for example to around 30-50° C., or in some examples to around 40° C. (whereas the temperature of the endless cleaning surface108may still be in the range of 70 and 90° C., and in some examples is around 80° C.). This may enhance a cleaning effect. In some examples, there may be two modes of operation: a first mode in which the temperature of the endless cleaning surface108is substantially the same as the temperature of the intermediate transfer member104and a second mode in which the temperature of the endless cleaning surface108is higher than the temperature of the intermediate transfer member104. In some such examples, the first mode may be utilised throughout or during a print job and the second mode may be utilised after a print job, for example when a residue persists despite operation of the endless cleaning surface108at the higher temperature. This may allow ‘recovery’ of an intermediate transfer member104which has collected hard-to-clean residue without unduly impacting the temperature of the intermediate transfer member104during print jobs, which could otherwise in turn cause print quality issues. In some such examples, on entering the second mode, the intermediate transfer member104may be allowed to cool until it reaches an intended operational temperature, at which point the cleaning surface108may be re-engaged with the intermediate transfer member104. In some examples, where the endless cleaning surface108is the surface of a roller, the heater110may be provided inside the roller. 
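The two modes described above can be summarized as a small set-point table: in the first (in-job) mode both surfaces share the working temperature, while in the second (recovery) mode the intermediate transfer member is allowed to cool while the cleaning surface stays hot. The function name and the exact set points below are illustrative assumptions; the passage gives ranges of 70-90° C. for the cleaning surface and 30-50° C. for the cooled transfer member.

```python
def target_temperatures(mode):
    """Return (cleaning_surface_C, transfer_member_C) targets for the
    two cleaning modes described above. Names and exact values are
    illustrative, chosen from the ranges given in the passage."""
    if mode == "in_job":
        # First mode: both surfaces at the same working temperature,
        # reducing energy use and keeping the transfer member stable.
        return (80, 80)
    if mode == "recovery":
        # Second mode: the transfer member cools while the cleaning
        # surface stays hot, enhancing the cleaning effect on
        # hard-to-clean residue.
        return (80, 40)
    raise ValueError(f"unknown mode: {mode!r}")
```

A controller might select `"recovery"` only after a print job, when residue persists despite in-job cleaning, matching the mode switching described above.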
In some other examples, the heater110may be provided externally to the endless cleaning surface108. In use of the apparatus100, when cleaning of the intermediate transfer member104is indicated (for example under the control of a controller of the print apparatus100or the like, wherein the controller may comprise one or more processors), the layer of thermoplastic print agent106on the cleaning surface108is heated and the cleaning surface108may then be brought into contact with the intermediate transfer member104such that contaminants left on the intermediate transfer member104by the printing process adhere to the thermoplastic print agent layer106. Once any residue, dust or the like has been removed from the intermediate transfer member104, the cleaning surface108can be disengaged from the intermediate transfer member104so as not to interfere with the normal printing process. In this way, the transfer of residue from the intermediate transfer member to the layer of thermoplastic print agent on the endless cleaning surface may be effected, caused or carried out. In other words, the intermediate transfer member may be cleaned of such residue. The arrangement of the cleaning surface108enables cleaning of the intermediate transfer member104with negligible interruption to the printing process, without use of consumable substrates. If the cleaning surface108is arranged to contact the intermediate transfer member104after an image is transferred to a substrate, the cleaning may be carried out during a print operation. In examples where multiple images are transferred to the intermediate transfer member before being transferred to a substrate, the cleaning surface108may be disengaged from the intermediate transfer member104. This may enable continuous or periodic cleaning of the intermediate transfer member104.
Furthermore, in some examples, the intermediate transfer member104may be cleaned outside of the standard print job for example to clean severe contamination using a relatively ‘cold’ intermediate transfer member104, as set out above (i.e. when the cleaning surface108is hotter than the intermediate transfer member104, which may in some examples be unheated). The cleaning system of the print apparatus100also has a low associated cleaning cost per page, as it uses a low amount of consumables. In particular, the system can use the print agent that is already used for the printing process to clean the intermediate transfer member, rather than any additional cleaning substances. In some examples, an image may be printed to a substrate and a layer may be transferred to the cleaning surface108in a single revolution of the intermediate transfer member104. For example, if an image occupies less than the full surface of the intermediate transfer member104, and the amount of surface of the intermediate transfer member104which is not used for the image is sufficient, a layer of print agent106may be provided for transfer to the cleaning surface108. In some examples, the layer106may span a seam portion of the intermediate transfer member104. Such seam portions may generally be avoided when printing images as they can cause image quality issues. However, as image quality is less of a concern when providing a layer106to the cleaning surface108, this portion of the intermediate transfer member104may be utilised (if present: some designs of intermediate transfer member104do not have a seam). FIG.2shows a print apparatus200similar to the apparatus100shown inFIG.1(like parts have been labeled with the same reference numerals) except that in the example shown inFIG.2, the endless cleaning surface208is an endless belt (i.e. a continuous loop) which is engageable with the intermediate transfer member104. 
In some examples, the intermediate transfer member104may comprise an endless belt rather than a roller as shown. Each of the print apparatuses100,200ofFIGS.1and2may be at least sub-components of Liquid Electro Photographic (LEP) printing apparatus which may be used to print a thermoplastic print agent such as an electronic ink composition. A photo charging unit may deposit a uniform static charge on the electrostatic imaging plate102, which in some examples may be a Photo Imaging Plate, or ‘PIP’ of the electrostatic imaging cylinder and then a laser imaging portion of the photo charging unit may dissipate the static charges in selected portions of the image area on the PIP to leave a latent electrostatic image. The latent electrostatic image is an electrostatic charge pattern that represents the image to be printed. The electronic ink composition may then be transferred to the PIP from a print agent source, which may comprise a Binary Ink Developer (BID) unit, and which may present a uniform film of the print agent to the PIP. The print agent may be electrically charged by virtue of an appropriate potential applied to the print agent. The charged ink composition, by virtue of an appropriate potential on the electrostatic image areas, is attracted to the latent electrostatic image on the electrostatic imaging plate102. The electrostatic imaging plate102then has a developed print agent/electrostatic ink composition image on its surface. The image may be transferred from the electrostatic imaging plate102to the intermediate transfer member104, in some examples by virtue of an appropriate potential and/or pressure applied between the electrostatic imaging plate102and the intermediate transfer member104, such that the charged print agent is attracted to intermediate transfer member104. 
The image may in some examples be dried and fused on the intermediate transfer member104before being transferred to the substrate/endless cleaning surface108(for example, adhering thereto under pressure) depending on the operational mode. While this process may be used both when transferring print agent to a substrate and to a cleaning surface108, as the layer106to be transferred to the cleaning surface108may be a substantially continuous area of print agent, the PIP may be left out of the transfer process. For example, the print agent may be transferred directly from a BID or other print agent source to the intermediate transfer member104. FIG.3shows an example of a method300, which may be a method for cleaning a print apparatus. In some examples, the method300may be carried out under the control of processing circuitry of the print apparatus (e.g. a controller comprising at least one processor). The method comprises, at block302, applying a thermoplastic print agent to a first surface of a print apparatus. The first surface may be, for example, a surface of an intermediate transfer member. In some examples, the thermoplastic print agent may be applied to the first surface by depositing the print agent on a photoconductive surface (which may be an electrostatic imaging plate as described above) and then transferring the print agent from the photoconductive surface to the first surface by engaging the photoconductive surface with the first surface. In some examples, applying a layer of thermoplastic print agent to the first surface comprises applying a single continuous area of thermoplastic print agent to the first surface. In some examples, the thermoplastic print agent is thermoplastic ink. Block304comprises transferring the layer of thermoplastic print agent to the endless cleaning surface.
In some examples, this may comprise engaging the endless cleaning surface with the first surface and heating the first surface such that the thermoplastic print agent adheres to the endless cleaning surface. For example, the first surface may be heated to a temperature of between 70 and 100° C., in some examples, to around 80° C. Block306comprises heating the layer of thermoplastic print agent on the endless cleaning surface to a temperature at which the thermoplastic print agent acts as an adhesive or becomes ‘sticky’. For example, the thermoplastic print agent may be heated to a temperature between 80 and 100° C. Block308comprises engaging the endless cleaning surface with the first surface of the print apparatus to transfer residue, for example print agent residue, from the first surface of the print apparatus to the heated layer of thermoplastic print agent on the endless cleaning surface, thereby cleaning residue from the first surface. FIG.4shows another example of a method400which may be a method for cleaning a print apparatus during a print operation. In some examples, the method400may be carried out under the control of processing circuitry of the print apparatus. Block402of method400comprises applying thermoplastic print agent to a first surface. Block404comprises transferring the thermoplastic print agent to an endless cleaning surface. Block406comprises applying an image to the first surface of the print apparatus and block408comprises transferring the image from the first surface to a print substrate. During transfer of the image from the first surface to the print substrate, the endless cleaning surface may be disengaged from the first surface. Block410comprises heating a layer of thermoplastic print agent on the endless cleaning surface. In some examples, the layer may have been applied to the endless cleaning surface as described in relation to blocks302and304.
Block412comprises engaging the endless cleaning surface with the first surface to clean the first surface. In some examples, the endless cleaning surface may be engaged with the first surface directly after the image has been transferred to the print substrate. In some examples, where a number of separations are built up on the intermediate transfer member, the endless cleaning surface is engaged with the intermediate transfer member after a full set of separations have been transferred to the substrate. Over time, as the thermoplastic layer captures residue and dust from the intermediate transfer member, the surface energy (and the stickiness) of the thermoplastic layer may be reduced. In some examples, thermoplastic print agent may be reapplied to the endless cleaning surface to enable the adhesive properties of the endless cleaning surface to be restored. Block414of method400comprises applying thermoplastic print agent to the first surface. The thermoplastic print agent may be applied to the first surface in a single continuous area. At block416, the thermoplastic print agent may be transferred to the endless cleaning surface such that a second layer of thermoplastic print agent is applied to the endless cleaning surface. This may enable the adhesive properties of the endless cleaning surface to be restored. FIG.5shows an example in which an initial layer of thermoplastic print agent106has been applied to the endless cleaning surface108. The endless cleaning surface108may then accumulate a layer of dirt502from the first surface. A further layer of thermoplastic print agent504has then been applied to the endless cleaning surface108to restore the adhesive properties of the endless cleaning surface108. A number of further layers of thermoplastic print agent may be built up on the endless cleaning surface108in this way over time. The endless cleaning surface108may be periodically cleaned in order to remove the built-up layers of thermoplastic print agent.
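The cleaning cycle described above (blocks 302-308) and the layer reapplication (blocks 414/416) can be sketched as a short control routine. This is an illustrative sketch only: the class, function names, and the exact temperature value are hypothetical assumptions, not part of the patent; only the 80-100° C. adhesive range and the step ordering come from the description.

```python
from dataclasses import dataclass, field

# From the description: the thermoplastic layer acts as an adhesive
# when heated to roughly 80-100 deg C (block 306).
ADHESIVE_RANGE_C = (80, 100)

@dataclass
class CleaningSurface:
    layer_count: int = 0            # layers of thermoplastic print agent applied
    layer_temp_c: float = 25.0
    captured_residue: list = field(default_factory=list)

def clean_cycle(surface: CleaningSurface, first_surface_residue: list) -> None:
    """One cycle: apply a layer if none exists, heat it, engage, capture residue."""
    if surface.layer_count == 0:
        surface.layer_count = 1                # blocks 302/304: apply and transfer layer
    surface.layer_temp_c = 90.0                # block 306: heat until 'sticky' (assumed value)
    assert ADHESIVE_RANGE_C[0] <= surface.layer_temp_c <= ADHESIVE_RANGE_C[1]
    # Block 308: engaging the surfaces transfers residue onto the sticky layer.
    surface.captured_residue.extend(first_surface_residue)
    first_surface_residue.clear()

def reapply_layer(surface: CleaningSurface) -> None:
    """Blocks 414/416: a further layer restores the adhesive properties."""
    surface.layer_count += 1
```

Here each reapplied layer simply increments a counter, mirroring FIG.5, in which further layers are built up over the accumulated dirt until the surface is periodically cleaned.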
FIG.6shows a method600, which may be a method for cleaning a print apparatus. In some examples, the method600may be carried out under the control of processing circuitry of print apparatus. At block602a layer of thermoplastic print agent, which may be thermoplastic liquid ink, may be applied to a rotatable cleaning surface, which may be a rotatably mounted endless cleaning surface. Block604comprises printing one or more impressions onto a substrate by transferring one or more images from a photoconductive surface to the substrate via an intermediate transfer member. Block606comprises determining if an indication that cleaning of the intermediate transfer member is to be carried out is present. In some examples, the indication may comprise a determination that a predetermined number of print impressions has been made. The particular predetermined number may depend on the particular printing application for which the print apparatus is being used. In some examples, the indication may comprise an indication that print quality is low (e.g. below a threshold). In some examples, the indication may comprise an indication that the intermediate transfer member is dirty (or that the residue exceeds a threshold). In response to determining that an indication that cleaning of the intermediate transfer member is to be carried out is present, the method600proceeds to block608in which the layer of thermoplastic print agent on the rotatable cleaning surface is heated. The layer of thermoplastic print agent may be heated to a temperature at which it acts as an adhesive. In some examples the layer of thermoplastic print agent may be heated to a temperature of between 80 and 100° C.
Further, in response to determining that an indication that cleaning of the intermediate transfer member is to be carried out is present, block610comprises engaging the rotatable cleaning surface with the intermediate transfer member, such that residue present on the intermediate transfer member may be transferred from the intermediate transfer member to the layer of thermoplastic print agent on the rotatable cleaning surface. If cleaning of the intermediate transfer member is not indicated at block606, the method600may return to block604which comprises printing one or more impressions onto a substrate. FIG.7shows a method700, which may be a method for cleaning a print apparatus. In some examples, the method may be carried out under the control of processing circuitry of print apparatus. In addition to blocks602to610which are the same as those described in relation to method600, method700also includes blocks702and704. Block702comprises, prior to engaging the endless cleaning surface with the first surface, allowing the intermediate transfer member to cool. This may be carried out such that the intermediate transfer member is at a lower temperature than the layer of thermoplastic print agent. For example, the intermediate transfer member may be cooled/allowed to cool to around 30-50° C., or to around 40° C. This may for example follow an indication that persistent residue (e.g. residue which has persisted despite other cleaning attempts) is present on the intermediate transfer member. Block704comprises, after engaging the rotatable cleaning surface with the intermediate transfer member, determining that a reapplication of print agent (which may be thermoplastic liquid ink) to the rotatable cleaning surface is indicated, for example by determining that there is an indication that restoration of the adhesive layer on the rotatable cleaning surface is indicated and/or that the layer has lost its stickiness.
In some examples, block704may comprise determining that the rotatable cleaning surface has engaged with the intermediate transfer member a predetermined number of times. The particular predetermined number may depend on the particular printing application for which the print apparatus is being used. In response to determining that reapplication of thermoplastic print agent is indicated, the method700returns to block602, which comprises applying a layer of thermoplastic print agent to the rotatable cleaning surface. I.e. a second layer of thermoplastic print agent is applied to the rotatable cleaning surface. If no reapplication of print agent is indicated, the method700may return to block604which comprises printing one or more impressions onto a substrate. The present disclosure is described with reference to flow charts. Although the flow charts described above show a specific order of execution, the order of execution may differ from that which is depicted. Blocks described in relation to one flow chart may be combined with those of another flow chart. It shall be understood that some blocks in the flow charts can be realized using machine readable instructions, such as any combination of software, hardware, firmware or the like. Such machine readable instructions may be included on a computer readable storage medium (including but not limited to disc storage, CD-ROM, optical storage, etc.) having computer readable program codes therein or thereon. The machine readable instructions may, for example, be executed by a general purpose computer, a special purpose computer, an embedded processor or processors of other programmable data processing devices to realize the functions described in the description and diagrams. In particular, a processor or processing apparatus may execute the machine readable instructions. 
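The decision structure of methods 600 and 700 — print, clean when an indication is present, reapply the layer when its adhesion is exhausted — can be sketched as a loop. The thresholds and counters below are illustrative assumptions; the description states only that the predetermined numbers depend on the printing application, and that the layer is heated into the 80-100° C. adhesive range at block 608.

```python
ADHESIVE_TEMP_C = 90          # within the 80-100 deg C range of block 608
IMPRESSIONS_PER_CLEAN = 100   # assumed 'predetermined number' for block 606
ENGAGEMENTS_PER_REAPPLY = 10  # assumed 'predetermined number' for block 704

def print_and_clean(total_impressions: int) -> tuple:
    """Run a print job, returning (cleaning_cycles, layer_reapplications)."""
    engagements = cleans = reapplies = 0
    for impression in range(1, total_impressions + 1):       # block 604: print
        if impression % IMPRESSIONS_PER_CLEAN == 0:          # block 606: cleaning indicated?
            layer_temp_c = ADHESIVE_TEMP_C                   # block 608: heat the layer
            engagements += 1                                 # block 610: engage surfaces
            cleans += 1
            if engagements % ENGAGEMENTS_PER_REAPPLY == 0:   # block 704: reapplication indicated?
                reapplies += 1                               # return to block 602
    return cleans, reapplies
```

With the assumed thresholds, a 1000-impression run would trigger ten cleaning engagements and one reapplication of the thermoplastic layer.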
Thus functional modules of the apparatus and devices may be implemented by a processor executing machine readable instructions stored in a memory, or a processor operating in accordance with instructions embedded in logic circuitry. The term ‘processor’ is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate array etc. The methods and functional modules may all be performed by a single processor or divided amongst several processors. Such machine readable instructions may also be stored in a computer readable storage that can guide the computer or other programmable data processing devices to operate in a specific mode. Further, some teachings herein may be implemented in the form of a computer software product, the computer software product being stored in a storage medium and comprising a plurality of instructions for making a computer device implement the methods recited in the examples of the present disclosure. Although not shown, the print apparatus100may comprise additional apparatus, such as any or any combination of the photoconductive surface102, print agent source(s) (e.g. Binary Ink Developer (BID) unit(s)), charging unit(s) to charge the photoconductive surface102, selective charge dissipation apparatus (for example a laser imaging apparatus to dissipate charge in selective regions of a PIP), electric field units, for example to transfer a pattern of print agent from the photoconductive surface102to the intermediate transfer member104, other cleaning apparatus, for example associated with the photoconductive surface102and/or intermediate transfer member104, further heating and/or curing apparatus, substrate transport apparatus, and the like. The print apparatus100may also comprise control circuitry, for example to control the print apparatus100to engage and disengage the cleaning surface108from the intermediate transfer member104. 
Such control circuitry may also control other aspects of the print apparatus, such as print operations. While the method, apparatus and related aspects have been described with reference to certain examples, various modifications, changes, omissions, and substitutions can be made without departing from the spirit of the present disclosure. It is intended, therefore, that the method, apparatus and related aspects be limited only by the scope of the following claims and their equivalents. It should be noted that the above-mentioned examples illustrate rather than limit what is described herein, and that those skilled in the art will be able to design many alternative implementations without departing from the scope of the appended claims. Features described in relation to one example may be combined with features of another example. The word “comprising” does not exclude the presence of elements other than those listed in a claim, “a” or “an” does not exclude a plurality, and a single processor or other unit may fulfill the functions of several units recited in the claims. The features of any dependent claim may be combined with the features of any of the independent claims or other dependent claims.
11860558
First Embodiment
Hereinafter, an image forming apparatus1according to a first embodiment of the present invention will be described with reference toFIGS.1to6.FIG.1is a schematic diagram illustrating an example of an internal configuration of the image forming apparatus1according to the first embodiment. As illustrated inFIG.1, the image forming apparatus1is, for example, a monochrome laser printer, and includes, in a housing10, a first feed tray11, a second feed tray12, a third feed tray13, a discharge tray14, an image forming unit5, and a fixing device8. The first feed tray11is detachably mounted in the housing10and supports a sheet P. The second feed tray12is disposed below the first feed tray11and supports the sheet P. The third feed tray13is disposed below the second feed tray12and supports the sheet P. The sheet P is, for example, plain paper. The number of feed trays is not limited to three, and may be changed as appropriate. The image forming apparatus1includes a first conveyance path R1, a second conveyance path R2, and a third conveyance path R3. The first conveyance path R1is a path from the first feed tray11to the discharge tray14via the image forming unit5and the fixing device8. The second conveyance path R2is a path from the second feed tray12to the first conveyance path R1. The third conveyance path R3is a path from the third feed tray13to the second conveyance path R2. The image forming apparatus1further includes pickup rollers21,22, and23, a registration roller24, and a discharge roller25as conveying units for conveying the sheet P along the conveyance paths R1, R2, and R3. The pickup roller21is provided at the first feed tray11, picks up the sheet P in the first feed tray11, and conveys the sheet P to the first conveyance path R1. The pickup roller22is provided at the second feed tray12, picks up the sheet P in the second feed tray12, and conveys the sheet P to the second conveyance path R2.
The pickup roller23is provided at the third feed tray13, picks up the sheet P in the third feed tray13, and conveys the sheet P to the third conveyance path R3. Each of the pickup rollers21,22, and23is rotationally driven by a main motor62. Specifically, as illustrated inFIG.3, a driving force of the main motor62is transmitted to the pickup roller21via a feeding clutch71. The driving force of the main motor62is transmitted to the pickup roller22via a feeding clutch72. The driving force of the main motor62is transmitted to the pickup roller23via a feeding clutch73. The registration roller24and the discharge roller25are rotationally driven by the main motor62. The registration roller24conveys the sheet P pulled out by the pickup rollers21,22, and23toward the image forming unit5in accordance with a forming operation and timing of a toner image in the image forming unit5. The discharge roller25is rotationally driven by a discharge motor (not shown), and discharges the sheet P, on which an image is formed by the image forming unit5, to the discharge tray14. As illustrated inFIG.1, the image forming unit5includes a photosensitive drum51, a charger52, a laser scanner53, a developing device50, a transfer roller55, and a cleaning roller56, and forms a toner image on the sheet P. The photosensitive drum51is rotationally driven by a driving force from the main motor62. The charger52is, for example, a scorotron charger, and is disposed facing the photosensitive drum51. When a given charging bias is applied to the charger52, a surface of the photosensitive drum51is uniformly charged. The laser scanner53is provided at an upper portion in the housing10, includes a polygon mirror530, a laser emitting unit (not shown), a lens, a reflecting mirror, and the like, and irradiates the photosensitive drum51with laser light to expose the photosensitive drum51to form an electrostatic latent image based on image data on the surface of the photosensitive drum51. 
The developing device50accommodates toner therein. The developing device50includes a developing roller54. The developing roller54is rotationally driven by the main motor62. When a given developing bias is applied to the developing roller54, toner is supplied to the electrostatic latent image formed on the surface of the photosensitive drum51. Accordingly, a toner image is formed on the surface of the photosensitive drum51. The transfer roller55is disposed facing the photosensitive drum51. When a forward transfer bias is applied to the transfer roller55, the toner image formed on the surface of the photosensitive drum51is electrically attracted, and the toner image is transferred to the sheet P. The cleaning roller56is, for example, a sponge roller. By applying a given cleaning bias to the cleaning roller56, the toner and the like remaining on the surface of the photosensitive drum51are removed from the photosensitive drum51. The fixing device8includes a heating unit81, a roller82, and a heater83. The heating unit81includes a fixing belt and a nip plate (not illustrated). The fixing belt is a tubular member having heat resistance and flexibility and extending in an axial direction of the roller82, and is provided so as to be rotatable about the axial direction. Both the heater83and the nip plate have substantially the same length as the fixing belt in the axial direction, and are disposed in a space on an inner circumferential side of the fixing belt. When the heating unit81and the roller82are pressed against each other, a fixing nip is formed between the heating unit81and the roller82. The heater83includes, for example, a halogen heater, and heats the heating unit81. The fixing device8fixes the toner image formed on the sheet P to the sheet P by conveying the sheet P on which the toner image is formed while heating the sheet P at the fixing nip. 
The fixing belt and the nip plate of the heating unit81are lubricated by grease so as to slide smoothly with respect to each other. The grease is adjusted to have an optimum viscosity when a temperature of the fixing device8is equal to or higher than a given temperature (for example, 150° C.). When the temperature of the fixing device8is lower than the given temperature, the viscosity of the grease is increased, and the fixing belt and the nip plate are difficult to slide with respect to each other. Therefore, until the temperature of the fixing device8reaches the given temperature, it is necessary to rotate the roller82at a slower speed than when fixing the toner image formed on the sheet P to the sheet P so as to soften the grease. [Electrical Configuration of Image Forming Apparatus] FIG.2is a block diagram illustrating an electrical configuration of the image forming apparatus1according to the first embodiment. As illustrated inFIG.2, a controller100includes a central processing unit (CPU)101, a read only memory (ROM)102, a random access memory (RAM)103, a non-volatile memory (NVM)104, and an ASIC105, which are connected by an internal bus. The controller100performs overall control of each unit of the image forming apparatus1. The ROM102stores various control programs for controlling the image forming apparatus1, various settings, and the like. The RAM103is used as a work area in which the various control programs are read and a storage area in which image data is temporarily stored. The NVM104stores in advance various types of data such as programs for controlling application of various biases shown inFIG.6, set values of the various biases, and a printing speed and an exposure speed to be described later. A polygon motor61, the main motor62, a BD sensor80, a temperature sensor90, a sheet sensor110, and a communication interface (I/F)120are electrically connected to the ASIC105. 
The controller100controls driving of the laser scanner53by driving the polygon motor61. Further, the controller100controls driving of the fixing device8, the photosensitive drum51, the developing roller54, the pickup rollers21,22and23, and the like by driving the main motor62. When the BD sensor80detects laser light emitted from the laser emitting unit, the BD sensor80outputs a BD signal to the controller100. The BD sensor80is disposed at a position where the laser light reflected by a mirror surface of the polygon mirror530is incident when an angle of the mirror surface with respect to an emission direction of the laser light is a specific angle. The temperature sensor90is disposed in the heating unit81and is used to estimate a temperature of the fixing nip. The temperature sensor90outputs a signal corresponding to the temperature of the fixing nip to the controller100. The sheet sensor110is a sensor that is disposed between the registration roller24and the photosensitive drum51in the first conveyance path R1and detects passage of the sheet P. As the sheet sensor110, a sensor having an actuator that swings when the sheet P comes into contact with the actuator, an optical sensor, or the like may be used. The sheet sensor110outputs an ON signal in a state where the sheet P is passing, and outputs an OFF signal in a state where the sheet P is not passing. A detection signal from the sheet sensor110is output to the controller100. The communication I/F120is connected to a network such as a LAN, and enables connection to an external device in which a driver for the image forming apparatus1is incorporated. The image forming apparatus1may receive a start command for image forming processing via the communication I/F120. [Driving Mechanism] FIG.3is a schematic diagram of the image forming apparatus1and a block diagram of main members thereof.
As illustrated inFIG.3, the driving force of the main motor62is transmitted to the photosensitive drum51via a drum clutch91, and is transmitted to the developing roller54via a developing clutch92. The drum clutch91is, for example, an electromagnetic clutch. The drum clutch91may be any configuration that may be controlled by the controller100, and may be a planetary clutch, a friction clutch, a dog clutch, or the like. As illustrated inFIG.2, the image forming apparatus1includes a fixing gear train63and a drum gear train64. The fixing gear train63transmits the driving force of the main motor62to the roller82of the fixing device8. The drum clutch91is disposed between the drum gear train64and the photosensitive drum51. The drum clutch91switches between a connected state in which the driving force of the main motor62is transmittable to the photosensitive drum51and a disconnected state in which the driving force of the main motor62is not transmitted to the photosensitive drum51. The driving mechanism including the fixing gear train63and the drum gear train64has a connected state and a disconnected state. In the connected state, the driving force of the main motor62is transmitted to both the roller82and the photosensitive drum51. In the disconnected state, the driving force of the main motor62is transmitted to the roller82, but is not transmitted to the photosensitive drum51. The main motor62is connected to the pickup rollers21,22, and23via the feeding clutches71,72, and73respectively. The feeding clutch71also has a connected state in which the driving force of the main motor62is transmittable to the pickup roller21and a disconnected state in which the driving force of the main motor62is not transmitted to the pickup roller21. The feeding clutch72has a connected state in which the driving force of the main motor62is transmittable to the pickup roller22and a disconnected state in which the driving force of the main motor62is not transmitted to the pickup roller22. 
The feeding clutch73has a connected state in which the driving force of the main motor62is transmittable to the pickup roller23and a disconnected state in which the driving force of the main motor62is not transmitted to the pickup roller23. [Flow of Print Preparation Operation Performed by Controller] Next, a flow of a print preparation operation performed by the controller100will be described with reference toFIGS.4and5.FIG.4is a flowchart illustrating a flow of control at the time of printing of the image forming apparatus1.FIG.5is a timing chart illustrating an operation of each unit of the image forming apparatus1. First, in the flowchart shown inFIG.4, the controller100determines whether a start command for image forming processing, that is, a print job, is received through the communication I/F120(S1). When a start command for image forming processing is not received (S1: NO), the controller100returns to S1, and when a start command for image forming processing is received (S1: YES), the controller100determines whether a temperature of the fixing device8is equal to or lower than a given temperature (S2). Here, the given temperature is set to, for example, about 150° C. In a case where the temperature of the fixing device8is equal to or lower than the given temperature (S2: YES), the controller100starts preheating processing of preheating the fixing device8by driving the main motor62at t1inFIG.5. Specifically, the controller100sets the drum clutch91to the disconnected state and turns on the heater83(S3). Then, the controller100starts driving the main motor62, accelerates the main motor62to a preheating speed (S4), and rotates the roller82in a state where rotation of the photosensitive drum51is stopped. The preheating speed is a rotation speed of the main motor62suitable for preheating the fixing device8. After S4, the controller100starts driving the polygon motor61at t2inFIG.5, and increases a rotation speed of the polygon motor61to an exposure speed (S5). 
The exposure speed is a rotation speed of the polygon motor61suitable for exposing the photosensitive drum51. Thereafter, while the temperature of the fixing device8does not reach the given temperature (S6: NO), the controller100repeats S6and stands by. After S6, the controller100executes acceleration processing in S10. In the acceleration processing, the rotation speed of the main motor62is accelerated to a printing speed that is a rotation speed higher than the preheating speed while the drum clutch91of the driving mechanism is maintained in the disconnected state. Here, the printing speed is a rotation speed of the main motor62suitable for performing the image forming processing on the sheet P. On the other hand, in a case where the temperature of the fixing device8is not equal to or lower than the given temperature (S2: NO), that is, in a case where the temperature of the fixing device8is higher than the given temperature, the controller100starts driving the polygon motor61and increases the rotation speed of the polygon motor61to the exposure speed (S7). Then, the controller100turns on the heater83to start heating the fixing device8(S8), and starts driving the main motor62to accelerate the main motor62to the preheating speed (S9). After S9, the controller100executes processing of S10to S16as in the case where the temperature of the fixing device8is equal to or lower than the given temperature. Here, application of a charging bias to the charger52in S12will be described in detail with reference toFIG.6.FIG.6is a timing chart illustrating control of each bias of the image forming apparatus1. As illustrated inFIG.6, at t21inFIG.6, the controller100causes a charging bias application unit (not illustrated) to apply a first charging bias to the charger52. Thereafter, at t22inFIG.6, the controller100applies a second charging bias having an absolute value larger than that of the first charging bias. 
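The start-up ordering of S1-S10 — main motor before polygon motor when preheating is needed, polygon motor first when the fixing device is already hot, with the drum clutch disconnected throughout — can be sketched as follows. The event names and the returned list are illustrative devices assumed for the sketch; only the 150° C. threshold and the ordering come from the description.

```python
GIVEN_TEMP_C = 150  # example 'given temperature' of S2, per the description

def start_print_preparation(fixing_temp_c: float) -> list:
    """Return the ordered start-up events for S1-S10 (illustrative sketch)."""
    events = []
    drum_clutch_connected = False  # S3: clutch set to the disconnected state
    if fixing_temp_c <= GIVEN_TEMP_C:
        # S3-S6: preheating needed; main motor is started before the polygon
        # motor, then the controller waits for the given temperature.
        events += ["heater_on", "main_motor_preheat",
                   "polygon_motor_exposure", "wait_for_given_temp"]
    else:
        # S7-S9: fixing device already hot; polygon motor is started first.
        events += ["polygon_motor_exposure", "heater_on", "main_motor_preheat"]
    # S10: accelerate the main motor to printing speed, clutch still disconnected.
    assert not drum_clutch_connected
    events.append("main_motor_printing_speed")
    return events
```

The branch ordering mirrors the rationale given later in the description: whichever of the preheating and the polygon-motor spin-up takes longer is started first, avoiding unnecessary rotation of the other motor.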
At t21inFIG.6, the controller100causes a cleaning bias application unit (not illustrated) to apply a given cleaning bias to the cleaning roller56. Referring back toFIG.4, after S12, at t6inFIG.5, the controller100sets the feeding clutch71to the connected state and rotates the pickup roller21to start feeding the sheet P from the first feed tray11to the first conveyance path R1(S13). In S13, the controller100may supply the sheet P from the second feed tray12or the third feed tray13to the conveyance paths R1, R2, and R3. After S13, the controller100starts the rotation of the photosensitive drum51(S14). Specifically, in S14, the controller100switches the drum clutch91to the connected state, and executes photosensitive drum driving start processing of starting driving both the roller82of the fixing device8and the photosensitive drum51. In addition, the controller100sets the developing clutch92to a connected state to drive the developing roller54. The drum clutch91and the developing clutch92are switched to a disconnected state at t12inFIG.5. After S14, the controller100applies a high voltage to the developing roller54and the transfer roller55. Specifically, at t22inFIG.6, the controller100causes a developing bias application unit (not illustrated) to apply a first developing bias, which is a high voltage, to the developing roller54. Thereafter, at t23inFIG.6, the controller100applies a second developing bias having an absolute value larger than that of the first developing bias. At t23inFIG.6, the controller100causes a transfer bias application unit (not illustrated) to apply a forward transfer bias of, for example, −0.6 kV to −2.8 kV to the transfer roller55. Accordingly, a state is established in which a toner image may be transferred from the photosensitive drum51to the sheet P. Further, the controller100applies a given reverse transfer bias to the transfer roller55. 
Accordingly, when transferring the toner image to the sheet P, influence of a current flowing into the transfer bias application unit side through the transfer roller55in contact with the photosensitive drum51may be eliminated, and the transfer bias application unit may be normally operated. Next, in a case where the sheet sensor110is turned on by a leading edge of the sheet P passing through the sheet sensor110at t7inFIG.5, the controller100causes the laser scanner53to start exposure and continues exposure until t10inFIG.5. Subsequently, the controller100determines whether the leading edge of the sheet P reaches the photosensitive drum51(S15). In a case where the leading edge of the sheet P does not reach the photosensitive drum51(S15: NO), the controller100repeats S15, and in a case where the leading edge of the sheet P reaches the photosensitive drum51at t8inFIG.5(S15: YES), the controller100executes the image forming processing of forming an image on the sheet P by the image forming unit5(S16). Specifically, during a period from t8to t11inFIG.5, the controller100rotates the photosensitive drum51and the transfer roller55so that the sheet P passes through a drum nip formed between the photosensitive drum51and the transfer roller55, thereby transferring a toner image on the surface of the photosensitive drum51to the sheet P. Then, the controller100drives the fixing device8to convey the sheet P, on which the toner image is formed, while heating the sheet P at the fixing nip during a period from t9to t13inFIG.5, thereby fixing the toner image formed on the sheet P to the sheet P. Thereafter, in a case where the sheet P on which the toner image is thermally fixed is discharged onto the discharge tray14by the discharge roller25, the controller100stops the rotation of the main motor62and the polygon motor61at t14inFIG.5, and then stops the rotation of the fan motor (not illustrated) and the discharge motor at t15inFIG.5.
In this way, the print processing shown inFIG.4ends. Effects of First Embodiment In the image forming apparatus1according to the present embodiment described above, the controller100sets the drum clutch91to the disconnected state and drives the main motor62during the execution of the preheating processing S3and the acceleration processing S10, so that unnecessary rotation of the photosensitive drum51may be reduced. Accordingly, the fixing device8and the photosensitive drum51may be rotated at respective optimum rotation speeds, and deterioration of the photosensitive drum51may be reduced. In a case where the start command for image forming processing is received (S1: YES) and the temperature of the fixing device8detected by the temperature sensor90is equal to or lower than the given temperature (for example, 150° C.) (S2: YES), the controller100starts the rotation of the main motor62in S4and then starts the rotation of the polygon motor61to increase the rotation speed of the polygon motor61to the exposure speed (S5). That is, in a case where a time required for the preheating processing of the fixing device8, that is, a term from t1to t5inFIG.5, is longer than a time required for increasing the rotation speed of the polygon motor61to the exposure speed, that is, a term from t2to t3inFIG.5, the driving of the main motor62is started at t1inFIG.5before the timing (t2inFIG.5) at which the driving of the polygon motor61is started. Accordingly, it is possible to reduce unnecessary rotation of the polygon motor during the preheating processing of the fixing device8, and it is possible to reduce power consumption. In a case where the temperature of the fixing device8is higher than the given temperature (for example, 150° C.) 
(S2: NO), the time required for the preheating processing of the fixing device8, that is, a term from t4to t5inFIG.5, is shorter than the time required for increasing the rotation speed of the polygon motor61to the exposure speed, that is, a term from t2to t3inFIG.5, and thus the controller100starts the rotation of the polygon motor61before starting the rotation of the main motor62. Accordingly, this procedure reduces unnecessary rotation of the main motor62during the period until the rotation speed of the polygon motor61is increased to the exposure speed, and thus reduces power consumption.
Second Embodiment
Next, the image forming apparatus1according to a second embodiment of the present disclosure will be described with reference toFIGS.1,3,4, and7.FIG.7is a timing chart illustrating operation states of the feeding clutches71,72, and73and the drum clutch91of the image forming apparatus1according to the second embodiment. For convenience of description, members having the same functions as those described in the first embodiment are denoted by the same reference numerals, and a description thereof will not be repeated. The second embodiment is different from the first embodiment in that, when performing printing on the sheets P of the first feed tray11, the second feed tray12, and the third feed tray13, the controller100sets the drum clutch91to be in a connected state at different timings as illustrated inFIG.7. A schematic configuration of the image forming apparatus1according to the second embodiment is the same as that of the first embodiment. That is, as illustrated inFIG.1, the image forming apparatus1includes, in the housing10, the first feed tray11, the second feed tray12as an example of a first tray, the third feed tray13as an example of a second tray, the discharge tray14, the image forming unit5, and the fixing device8. The first feed tray11is provided with the pickup roller21.
The second feed tray 12 is provided with the pickup roller 22, which is an example of a first pickup roller. The third feed tray 13 is provided with the pickup roller 23, which is an example of a second pickup roller. The second feed tray 12 and the third feed tray 13 are arranged such that the conveyance distance of the sheet P from the pickup roller 23 to the sheet sensor 110 is longer than the conveyance distance of the sheet P from the pickup roller 22 to the sheet sensor 110. As illustrated in FIG. 3, a driving force of the main motor 62 is transmitted to the pickup roller 21 via the feeding clutch 71. The driving force of the main motor 62 is transmitted to the pickup roller 22 via the feeding clutch 72, which is an example of a first feeding clutch. The driving force of the main motor 62 is transmitted to the pickup roller 23 via the feeding clutch 73, which is an example of a second feeding clutch. In the flowchart illustrated in FIG. 4, when the sheet P is conveyed from the second feed tray 12 in S13, the controller 100 switches the drum clutch 91 to the connected state at t61 in FIG. 7, that is, at the timing when the first standby time T1 elapses after the feeding clutch 72 is set to the connected state at t6 in FIG. 7. On the other hand, when the sheet P is conveyed from the third feed tray 13 in S13 of FIG. 4, the controller 100 switches the drum clutch 91 to the connected state at t62 of FIG. 7, that is, at the timing when the second standby time T2 elapses after the feeding clutch 73 is set to the connected state at t6 of FIG. 7. The second standby time T2 is set to be longer than the first standby time T1. That is, in the conveyance paths R1, R2, and R3, as the conveyance distance of the sheet P from the pickup rollers 22 and 23 to the photosensitive drum 51 increases, the standby time from when the feeding clutch is set to the connected state to when the drum clutch 91 is switched to the connected state is set to be longer.
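The standby-time selection described above — a longer conveyance distance from the pickup roller to the photosensitive drum 51 giving a longer wait before the drum clutch 91 is connected — can be sketched as follows. The helper name and the concrete values of T1 and T2 are assumptions for illustration; the description only fixes the ordering T2 > T1.

```python
# Sketch of the second embodiment's clutch timing. The time values are
# hypothetical; only the relation TRAY3_STANDBY_T2 > TRAY2_STANDBY_T1
# (longer conveyance distance -> longer standby) comes from the text.

TRAY2_STANDBY_T1 = 0.20  # s, after feeding clutch 72 engages (assumed value)
TRAY3_STANDBY_T2 = 0.35  # s, after feeding clutch 73 engages (assumed value)

def drum_clutch_connect_time(feed_clutch_on_time: float, tray: int) -> float:
    """Return the time at which the drum clutch 91 is switched to the
    connected state, on the same clock as the feeding-clutch engagement."""
    standby = {2: TRAY2_STANDBY_T1, 3: TRAY3_STANDBY_T2}[tray]
    return feed_clutch_on_time + standby

# Feeding clutch engaged at t6; the drum clutch follows at t61 or t62.
t6 = 1.0
t61 = drum_clutch_connect_time(t6, tray=2)
t62 = drum_clutch_connect_time(t6, tray=3)
assert t62 > t61  # longer conveyance path -> later drum engagement
```

Delaying the drum engagement this way is what trims the unnecessary rotation of the photosensitive drum 51 while the sheet is still in transit.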
In this way, as the conveyance time of the sheet P increases, the timing at which the drum clutch 91 is set to the connected state to drive the photosensitive drum 51 is made later, so that it is possible to effectively avoid unnecessary rotation of the photosensitive drum 51 and to favorably reduce deterioration of the photosensitive drum 51.

OTHER EMBODIMENTS

Although the image forming apparatus 1 according to the first and second embodiments is a monochrome laser printer, the image forming apparatus 1 is not limited thereto, and may be, for example, a multi-function peripheral (MFP) having a printer function, a scanner function, and the like. Although the sheet P is assumed to be plain paper, the type of the sheet P is not limited thereto, and may be thick paper or thin paper, for example. Further, the value of each bias shown in FIG. 6 may vary depending on the type of the sheet P. In addition, each step of the processing of FIG. 4 executed by the controller 100 is an example, and the contents of a part of the processing may be changed, or the order of a part of the processing may be changed.

[Example of Implementation by Software]

The controller 100 of the image forming apparatus 1 may be implemented with a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be implemented by software. In the latter case, the image forming apparatus 1 includes a computer that executes commands of a program that is software for implementing the functions. The computer includes, for example, one or more processors and a computer-readable recording medium storing the program. In the computer, the processor reads the program from the recording medium and executes the program, thereby achieving the object of the present invention. As the processor, for example, a central processing unit (CPU) can be used.
Examples of the recording medium include “a non-transitory tangible medium” such as a read only memory (ROM), a tape, a disk, a card, a semiconductor memory, and a programmable logic circuit. In addition, a random access memory (RAM) or the like in which the program is loaded may be further provided. The program may be supplied to the computer via any transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program. An aspect of the present invention can also be implemented in a form of a data signal in which the program is embodied by electronic transmission and which is embedded in a carrier wave.

The present invention is not limited to the above-described embodiments, and various modifications can be made within the scope of the claims. Embodiments obtained by appropriately combining the technical means disclosed in the different embodiments also fall within the technical scope of the present invention. While the invention has been described in conjunction with various example structures outlined above and illustrated in the figures, various alternatives, modifications, variations, improvements, and/or substantial equivalents, whether known or that may be presently unforeseen, may become apparent to those having at least ordinary skill in the art. Accordingly, the example embodiments of the disclosure, as set forth above, are intended to be illustrative of the invention, and not limiting the invention. Various changes may be made without departing from the spirit and scope of the disclosure. Therefore, the disclosure is intended to embrace all known or later developed alternatives, modifications, variations, improvements, and/or substantial equivalents. Some specific examples of potential alternatives, modifications, or variations in the described invention are provided below:
11860559

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

An embodiment (FIG. 1 to FIG. 12) of the present invention is described below with reference to the accompanying drawings.

EMBODIMENT

[Schematic Configurations of Temperature Detection Device and Temperature Sensor]

Schematic configurations of a temperature detection device 1 and a temperature sensor 10 according to the present invention are described with reference to FIG. 1 and FIGS. 2A and 2B. As illustrated in FIG. 1, the temperature detection device 1 includes the temperature sensor 10, a circuit unit 8, and electric wires 81 and 82 electrically connecting the temperature sensor 10 and the circuit unit 8. The temperature sensor 10 is used to detect a temperature of a temperature measurement object, for example, a roller including a heater, provided in an image forming apparatus such as a laser printer. As illustrated in FIGS. 2A and 2B, the temperature sensor 10 is disposed at a position facing a temperature measurement object 7 in a state of being pressed against the temperature measurement object 7 with necessary pressure. The circuit unit 8 calculates a temperature of the temperature measurement object 7 based on an electric signal output from the temperature sensor 10. The circuit unit 8 is electrically connected to the temperature sensor 10 through the electric wires 81 and 82 drawn out from a holding member 20. In the following, the direction in which the electric wires 81 and 82 are drawn out from the holding member 20 is defined as the x-direction. The x-direction corresponds to a longitudinal direction of the temperature sensor 10. In a planar view of the temperature sensor 10, the direction orthogonal to the x-direction is defined as the y-direction. Further, the direction orthogonal to both the x-direction and the y-direction is defined as the z-direction. In the z-direction, the side provided with the temperature measurement object 7 is referred to as the “upper side”, and the side opposite thereto is referred to as the “lower side”.
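As a concrete illustration of how the circuit unit 8 might calculate the temperature from the electric signal of the temperature sensor 10, the sketch below assumes an NTC thermistor (the thermosensitive element described later is a thermistor element) evaluated with the common beta-parameter model. The reference resistance R0, reference temperature T0, and beta value are illustrative assumptions, not values from this description.

```python
import math

# Hypothetical NTC thermistor parameters (not specified in the patent text).
R0 = 100_000.0   # ohm, resistance at the reference temperature T0
T0 = 298.15      # K (25 degrees C), reference temperature
BETA = 3950.0    # K, beta parameter of the assumed NTC model

def thermistor_temperature_c(resistance_ohm: float) -> float:
    """Beta-model conversion: 1/T = 1/T0 + (1/BETA) * ln(R/R0), T in kelvin."""
    inv_t = 1.0 / T0 + math.log(resistance_ohm / R0) / BETA
    return 1.0 / inv_t - 273.15

# At R = R0 the model returns the reference temperature.
print(round(thermistor_temperature_c(R0), 2))  # 25.0
```

In practice the circuit unit would first derive the resistance from the measured signal (for example, via a voltage divider) and then apply a conversion of this kind.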
As illustrated in FIG. 2A, the temperature sensor 10 is disposed in, for example, a supporting member 60 provided in the image forming apparatus through a coil spring 61. More specifically, the holding member 20 of the temperature sensor 10 is disposed on the supporting member 60 through the coil spring 61 in a state where a pin 63 provided on the supporting member 60 is inserted into a guide hole 26 (see FIG. 1). The coil spring 61 is held in a state of being elastically deformed in the compression direction of the z-direction, between an unillustrated spring receiving portion provided on the holding member 20 and the supporting member 60. The temperature sensor 10 can maintain a state where a heat collection member 30 is in contact with and pressed against the temperature measurement object 7, by the elastic force F0 of the coil spring 61, irrespective of the presence/absence of creep deformation of the heat collection member 30. Note that the temperature sensor 10 can be installed by any appropriate method as long as the temperature sensor 10 can be installed in the state of being in contact with the temperature measurement object 7. For example, in place of the coil spring 61, a plate spring 62 as illustrated in FIG. 2B can be used. Alternatively, although illustration is omitted, a male screw may be inserted into a hole provided in the supporting member 60 and engaged with a female screw provided in the holding member 20, which makes it possible to install the temperature sensor 10 on the supporting member 60 in the state where the temperature sensor 10 is in contact with the temperature measurement object 7.

[Configuration of Temperature Sensor]

The configuration of the temperature sensor 10 is described with reference to FIG. 3 to FIG. 10.
As illustrated in FIG. 3, the temperature sensor 10 includes, as main components, a thermosensitive element 11 detecting the temperature of the temperature measurement object 7, the holding member 20, the heat collection member 30, an inner film 41 covering the heat collection member 30, and an outer film 42 covering the thermosensitive element 11 disposed on the inner film 41. The temperature sensor 10 further includes a heat collection material 43 filled around the thermosensitive element 11 between the inner film 41 and the outer film 42. The components of the temperature sensor 10 are described below.

[Thermosensitive Element]

The thermosensitive element 11 is described with reference to FIG. 4. The thermosensitive element 11 is a thermistor element that includes a thermosensitive body 111, electrodes 111A and 111B provided on the thermosensitive body 111, paired lead wires 112 and 113 electrically connected to the electrodes 111A and 111B, and a sealing material 114 sealing the thermosensitive body 111. As the thermosensitive element 11, a resistor having a temperature coefficient, for example, a thin-film thermistor or a platinum temperature sensor, is widely usable. The sealing material 114 may not necessarily be provided in the thermosensitive element 11. In the following, out of the thermosensitive body 111 and the sealing material 114, at least the thermosensitive body 111 is referred to as a thermosensitive portion 110. For example, each of the lead wires 112 and 113 at least partially includes a clad wire drawn out from the sealing material 114. As the clad wire, for example, a Dumet wire is used. The lead wires 112 and 113 are conducted to the electric wires 81 and 82 through paired conductive members 121 and 122, described below, provided in the holding member 20, respectively.

[Holding Member]

The holding member 20 is described with reference to FIGS. 5A and 5B.
The holding member 20 according to the present embodiment is formed in a substantially rectangular shape in a planar view, and includes a main body portion 22 and an electric wire connection portion 25 to which the electric wires 81 and 82 are connected. The main body portion 22 includes a base portion 202 and a housing portion 201. The base portion 202 and the electric wire connection portion 25 are integrally formed so as to be arranged in this order from a front side (F) toward a rear side (R) in the x-direction. The housing portion 201 includes a wall body 21 protruding in the z-direction from the base portion 202, and a substantially rectangular parallelepiped space 20S is provided inside the housing portion 201. The heat collection member 30 described below is disposed in the space 20S. The holding member 20 is integrally formed by injection molding using an insulation resin material. An upper surface 22a of the base portion 202, a bottom part 215 of the housing portion 201, and an upper surface 25a of the electric wire connection portion 25 are provided at the same height in the z-direction. Further, a first boss 221 protruding in the z-direction is provided on each of the upper surface 22a of the base portion 202 and the upper surface 25a of the electric wire connection portion 25. Two second bosses 231 protruding in the y-direction are provided on each of the side surfaces 23 of the main body portion 22. The first bosses 221 and the second bosses 231 are used to fix the inner film 41 and the outer film 42 to the holding member 20. Note that, in the present embodiment, the case where the holding member 20 is formed in a rectangular shape in a planar view is described as an example; however, the present invention is not limited thereto. The holding member 20 may be formed in a square shape or a circular shape in a planar view, depending on the shape of the space 20S, the arrangement of the first bosses 221 and the second bosses 231, and the like.
The electric wire connection portion 25 is a portion to which the electric wires 81 and 82 electrically connecting the thermosensitive element 11 and the circuit unit 8 are attached. The electric wire connection portion 25 includes connection holes 251 and 252 for connecting the electric wires 81 and 82 to the conductive members 121 and 122 described below, respectively. The housing portion 201 is provided to house the heat collection member 30, and is formed in a rectangular, concave shape in a planar view. The housing portion 201 includes the wall body 21 and the bottom part 215. The wall body 21 includes a right wall 211 and a left wall 212 that extend in the x-direction and face each other in the y-direction, and a front wall 213 and a rear wall 214 that connect both ends of the right wall 211 and the left wall 212 in the x-direction. These walls 211 to 214 all rise in the z-direction from the bottom part 215, which extends in the x-direction and the y-direction. Upper ends of the walls 211 to 214 form a rectangular opening. Inner surfaces 21A of the walls 211 to 214 and the side surfaces 23 all rise perpendicularly upward from the bottom part 215. Further, a corner 216 is formed by the front wall 213 and the bottom part 215, and a corner 217 is formed by the rear wall 214 and the bottom part 215. Outer surfaces 213A and 214A of the front wall 213 and the rear wall 214 are inclined so as to approach each other as they extend upward. Accordingly, the holding member 20 is formed in a frustum shape in a side view. An electric wire installation portion 211A that is lowered by one step in the z-direction is provided at the center in the x-direction of the right wall 211. Paired grooves 211B extending in the z-direction are provided on an outer surface 211C of the electric wire installation portion 211A. The paired grooves 211B appear as projections inside the housing portion 201. The lead wires 112 and 113 of the thermosensitive element 11 are disposed in the paired grooves 211B.
Substantially arc-shaped notches 241 and 242 that respectively expose the conductive members 121 and 122 are provided in the left wall 212 and the bottom part 215. The conductive members 121 and 122 can be insert-molded by being disposed in a mold for injection molding of the holding member 20. The lead wires 112 and 113 (FIG. 4) of the thermosensitive element 11 extend toward the right wall 211 side of the heat collection member 30, are bent downward in the z-direction, are bent toward a rear surface side of the holding member 20 through the electric wire installation portion 211A on one of the side surfaces 23 of the holding member 20, and are joined with the conductive members 121 and 122. The space 20S prevents heat radiated from the temperature measurement object 7 from escaping to the outside through the heat collection member 30, and maintains the heat in the thermosensitive element 11 by the heat insulation action of air. This rapidly varies the resistance value of the thermosensitive element 11 relative to temperature variation of the temperature measurement object 7, and improves the responsiveness of the temperature sensor 10. The space 20S has a cross-sectional area (area in the x- and y-directions) and a thickness (dimension in the z-direction) realizing the necessary thermal resistance. To make the thermal conductivity of the space 20S as low as possible, it is desirable to dispose as little substance other than air in the space 20S as possible. Note that the presence of a substance such as a gas or liquid other than air in the space 20S is not completely eliminated, and enclosure of a substance other than air is not precluded as long as the low thermal conductivity can be maintained. Further, to prevent occurrence of convection in the space 20S, installation of a plate member or the like in the space 20S is allowed. Note that the space 20S may be formed in a shape other than the rectangular parallelepiped shape, for example, a cylindrical shape.
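The sizing rule mentioned above — the cross-sectional area and thickness of the space 20S realizing the necessary thermal resistance — can be illustrated with the one-dimensional conduction formula R_th = t/(k·A). The dimensions and conductivity figures below are assumed for illustration only; none of them come from this description.

```python
K_AIR = 0.026  # W/(m K), still air near room temperature (approximate)

def slab_thermal_resistance(thickness_m: float, area_m2: float,
                            conductivity: float = K_AIR) -> float:
    """One-dimensional conduction resistance R_th = t / (k * A), in K/W."""
    return thickness_m / (conductivity * area_m2)

# Hypothetical space 20S: 8 mm x 8 mm cross-section, 3 mm thick.
area = 8e-3 * 8e-3
r_air = slab_thermal_resistance(3e-3, area)
# The same slab filled with a typical molding resin (k ~ 0.25 W/(m K)).
r_resin = slab_thermal_resistance(3e-3, area, conductivity=0.25)
assert r_air > r_resin  # air insulates roughly an order of magnitude better
```

This is why keeping the space filled with (mostly) air, and suppressing convection with a plate member if needed, preserves the heat insulation action that the responsiveness argument relies on.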
Contact protrusions 218 coming into contact with the heat collection member 30 are provided at four corners inside the wall body 21. The heat collection member 30 is positioned in the x-direction and the y-direction on the bottom part 215 by the two contact protrusions 218 provided on each of the right wall 211 and the left wall 212, the front wall 213, and the rear wall 214. The contact protrusions 218 provided on the right wall 211 protrude toward the left wall 212, and the contact protrusions 218 provided on the left wall 212 protrude toward the right wall 211. Each of the contact protrusions 218 has an inclined shape gradually protruding in the y-direction from an upper end toward a lower end. The heat collection member 30 can be disposed on the bottom part 215 by being guided by the contact protrusions 218. The first bosses 221 each have a truncated conical shape and are formed on the upper surface 22a of the main body portion 22 and the upper surface 25a of the electric wire connection portion 25 of the holding member 20. The first bosses 221 are used to attach the inner film 41 to the holding member 20, and are disposed one by one with the wall body 21 in between in the x-direction. The second bosses 231 are used to attach the outer film 42 to the holding member 20. The two second bosses 231 are disposed on each of the side surfaces 23 of the holding member 20. This is illustrative, and the first bosses 221 and the second bosses 231 can be provided at appropriate positions of the holding member 20.

[Heat Collection Member]

The heat collection member 30 is described with reference to FIGS. 6A to 6C. The heat collection member 30 collects the heat radiated from the temperature measurement object 7 to the thermosensitive element 11, and is thermally joined with the thermosensitive element 11.
To rapidly transfer the heat from the temperature measurement object 7 to the thermosensitive element 11, a metal material higher in thermal conductivity than a resin material and the like, or another material having a thermal conductivity equivalent to that of a metal material — for example, a metal material such as a copper alloy or stainless steel, or a material containing carbon — is used for the heat collection member 30. For example, in a case where a plate member made of a metal material is used, the heat collection member 30 can be integrally formed by stamping and bending press working. The material used for the heat collection member 30 can be appropriately selected in consideration of the thermal conductivity, the elastic modulus, and the heat resistance. The thickness of the heat collection member 30 is, for example, about 0.03 mm to about 0.2 mm. The heat collection member 30 has a constant thickness; however, the thickness is not limited thereto. When the heat collection member 30 is made of a metal material or the like having a high thermal conductivity, the resistance value of the thermosensitive element 11 can be varied immediately relative to the temperature variation of the temperature measurement object 7, and the responsiveness can be further improved by the heat collection action of the heat collection member 30 in addition to the heat insulation action of the space 20S. The term “heat collection” used herein means that the heat is received from the temperature measurement object 7 and is rapidly transferred to the thermosensitive element 11. The heat is maintained in the thermosensitive element 11 by the heat collection action of the heat collection member 30. As illustrated in FIG. 6A, the heat collection member 30 includes an abutting portion 31 abutting along the temperature measurement object 7, and paired leg portions 32.
The abutting portion 31 and the paired leg portions 32 are formed in a substantially U-shape as a whole by bending both ends in the longitudinal direction of a substantially rectangular metal plate. Note that the thickness of the abutting portion 31 and the thickness of each of the leg portions 32 may be equal to or different from each other. As illustrated in FIG. 6B, the abutting portion 31 is formed in a square shape in a planar view of the temperature sensor 10. The abutting portion 31 includes a through hole 310 in which the thermosensitive portion 110 is disposed, paired grooves 311 where the lead wires 112 and 113 are disposed, and a plate-like flat portion 312 that comes into surface contact with the temperature measurement object 7 through the inner film 41 and the outer film 42. The through hole 310 and the grooves 311 configure an element arrangement portion where the thermosensitive element 11 is disposed. The through hole 310 is a region where the thermosensitive portion 110 and a part of the lead wires 112 and 113 of the thermosensitive element 11 are disposed, and penetrates through the abutting portion 31 in the z-direction. The through hole 310 is formed in a circular shape at the center of the abutting portion 31, and its diameter can be set to a diameter sufficient to house the thermosensitive portion 110. The shape of the abutting portion 31 is not particularly limited, and the abutting portion 31 may be formed in a rectangular shape or a circular shape. The abutting portion 31 has an area necessary for heat collection. If the heat input area of the abutting portion 31 is reduced with downsizing of the temperature sensor 10 in the planar direction, the ratio of heat escaping to the leg portions 32 in the whole of the heat collection member 30 is increased. However, since the through hole 310 provided in the abutting portion 31 can reduce the heat capacity in the vicinity of the thermosensitive portion 110, the amount of heat escaping in the vicinity of the thermosensitive portion 110 is small.
Accordingly, in the present embodiment, the responsiveness of temperature detection can be enhanced as compared with a case where the through hole 310 is not provided. The thermosensitive portion 110 is disposed at the center or a substantial center of the through hole 310. Since the through hole 310 is formed in a circular shape, the heat distribution around the thermosensitive portion 110 becomes substantially uniform, and stable temperature detection by the thermosensitive portion 110 can be performed. Note that the shape of the through hole 310 is not necessarily circular, and the through hole 310 may be formed in, for example, a square shape, a rectangular shape, or a polygonal shape. The abutting portion 31 includes the paired grooves 311 where the lead wires 112 and 113 are disposed. As illustrated in FIG. 6B, the paired grooves 311 are provided symmetrically about the center of the through hole 310, on both sides in the y-direction with the through hole 310 in between. As illustrated in FIG. 6C, each of the grooves 311 is formed so as to be recessed downward in the z-direction from the flat portion 312 of the abutting portion 31. The width and the depth of each of the grooves 311 are set sufficient to house the lead wires 112 and 113 inside the groove 311. In the present embodiment, the temperature sensor 10 includes the thermosensitive element 11 in which the lead wires 112 and 113 are drawn out to the same side from the sealing material 114. Therefore, the lead wires 112 and 113 are disposed in one of the paired grooves 311. In this case, formation of the other groove 311 can be omitted. The flat portion 312 is the portion of the abutting portion 31 excluding the through hole 310 and the grooves 311, and is formed flat. The flat portion 312 comes into surface contact with the temperature measurement object 7 through the inner film 41 and the outer film 42. The paired leg portions 32 are provided on both sides in the x-direction of the abutting portion 31.
As illustrated in FIGS. 6A and 6B, the width in the y-direction of each of the paired leg portions 32 is set slightly shorter than the width in the y-direction of the abutting portion 31, and notches 31A are symmetrically provided on both sides in the y-direction at both ends 31x in the x-direction of the abutting portion 31, at portions where the flat portion 312 of the abutting portion 31 and each of the leg portions 32 are connected to each other. In the following, the x-direction in which the paired leg portions 32 are separated from each other is referred to as a “width direction D1” set to the heat collection member 30. In the width direction D1, the direction in which the paired leg portions 32 approach each other is referred to as “inside in the width direction D1”, and the direction in which the paired leg portions 32 are separated from each other is referred to as “outside in the width direction D1”. As illustrated in FIG. 6C, each of the leg portions 32 includes a leg portion main body 321 extending perpendicularly to the abutting portion 31, and a tip end part 322 disposed on the bottom part 215. The paired leg portions 32 are formed in the same shape and to the same dimensions, and are disposed symmetrically about the center in the width direction D1. The angle formed by each of the leg portions 32 to the abutting portion 31 is not necessarily strictly a right angle, and has a tolerance. The leg portion main bodies 321 are formed by being bent perpendicularly to the abutting portion 31, and the tip end parts 322 are bent outward in the width direction D1 from the respective leg portion main bodies 321. The tip end parts 322 are formed by being bent from the respective leg portion main bodies 321 in a direction opposite to the direction in which the corresponding leg portion main bodies 321 are bent, and each form an obtuse angle with the corresponding leg portion main body 321 in a side view. Further, peripheral edges 322A of the tip end parts 322 each have an arc-shaped outer shape convex downward.
The diameter of each of the peripheral edges 322A of the tip end parts 322 is set equal to the dimension in the y-direction of each of the leg portion main bodies 321. Since the heat collection member 30 is simply formed in a substantially U-shape as described above, the heat collection member 30 can be easily press-molded at a low cost by moving a mold in a single direction (the z-direction). Each of the leg portions 32 has a length necessary to secure insulation between the temperature measurement object 7 and the conductive members 121 and 122. Therefore, even after creep deformation due to long use of the temperature sensor 10, a necessary space distance is secured between the temperature measurement object 7 and the conductive members 121 and 122. The leg portion main bodies 321 are bent along the y-direction at the positions of the notches 31A. Since the dimension in the y-direction of each of the leg portion main bodies 321 is set to a dimension substantially equivalent to the dimension in the y-direction of the abutting portion 31, the rigidity in the z-direction of each of the leg portions 32, which are pressed between the temperature measurement object 7 and the holding member 20 by the coil spring 61 or the like, is sufficiently secured. As illustrated in FIG. 11, the leg portions 32 support the abutting portion 31 at a predetermined position against the reaction force F2 to the pressing force F1, and transmit the elastic force F0 of the coil spring 61 or the like to the abutting portion 31. Even when creep deformation occurs on the heat collection member 30, the abutting portion 31 is maintained in a state of abutting on the temperature measurement object 7 with the necessary pressing force F1, by the elastic force F0 of the coil spring 61 or the like. Since the leg portions 32 have sufficient rigidity in the z-direction as described above, deformation in a direction in which the abutting portion 31 is twisted relative to the x-y plane by the reaction force F2 is prevented.
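The force balance of FIG. 11 — the elastic force F0 of the coil spring 61 keeping the abutting portion 31 pressed against the temperature measurement object 7 with pressing force F1 despite creep — can be sketched with Hooke's law. The spring constant and deflection values are assumptions chosen only to show that a small amount of creep leaves most of the pressing force intact.

```python
# Hooke's-law sketch of why the abutting portion stays pressed against the
# temperature measurement object after creep (all values are assumed).

K_SPRING = 400.0             # N/m, assumed spring constant of coil spring 61
INITIAL_COMPRESSION = 4e-3   # m, assumed assembly compression of the spring

def pressing_force(creep_m: float) -> float:
    """Remaining pressing force after the heat collection member has crept
    by creep_m (the spring relaxes by the same amount, but stays compressed
    as long as creep_m < INITIAL_COMPRESSION)."""
    compression = INITIAL_COMPRESSION - creep_m
    return K_SPRING * max(compression, 0.0)

f_new = pressing_force(0.0)     # force before any creep
f_old = pressing_force(0.5e-3)  # force after 0.5 mm of creep
assert f_old > 0.0
assert (f_new - f_old) / f_new < 0.15  # only a modest loss of pressing force
```

Because the spring's working stroke is much larger than any realistic creep of the thin metal member, contact pressure is maintained, which is the point the description makes about F0.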
The coil spring 61 or the like is disposed separately from the temperature measurement object 7 on a lower side 27 of the holding member 20, and the space 20S having the heat insulation action is present between the temperature measurement object 7 and the coil spring 61. Therefore, the influence of the heat from the temperature measurement object 7 on the coil spring 61 is small. Accordingly, creep deformation does not occur on the coil spring 61, or even if creep deformation occurs on the coil spring 61, the deformation amount due to the creep deformation is negligibly small. In the heat collection member 30 having the above-described configuration, the length in the x-direction, namely, the length between the peripheral edges 322A of the paired leg portions 32, is set to a length equal to or slightly greater than the length in the x-direction of the space 20S of the holding member 20. As illustrated in FIG. 7A, the heat collection member 30 is inserted into the space 20S of the holding member 20 from the leg portions 32 while the paired leg portions 32 are deflected inward in the width direction D1 as necessary. As a result, the tip end part 322 of the leg portion 32 positioned on the front side is positioned at the corner 216 formed by the front wall 213 and the bottom part 215, and the tip end part 322 of the leg portion 32 positioned on the rear side is positioned at the corner 217 formed by the rear wall 214 and the bottom part 215. When external force is applied to the abutting portion 31 in a direction other than the direction perpendicular to the abutting portion 31 in this state, the heat collection member 30 swings in the y-direction with the tops of the peripheral edges 322A of the leg portions 32 as fulcrums (see the arrows in FIG. 6A). As described above, the heat collection member 30 is swingably held by the holding member 20, which enables the heat collection member 30 and the temperature measurement object 7 to come into surface contact with each other.
When the heat collection member 30 is housed in the space 20S, the abutting portion 31 and the upper sides of the leg portion main bodies 321 protrude upward from the upper end of the wall body 21. The heat collection member 30 protrudes from the upper end of the wall body 21 over a region of substantially ½ of its entire height. The height of the region protruding from the upper end of the wall body 21 is varied based on the space distance necessary between the temperature measurement object 7 and the conductive members 121 and 122. Within the height of each of the leg portions 32 in the z-direction, the tip end part 322 accounts for a smaller proportion than the leg portion main body 321. Likewise, within the width of each of the leg portions 32 in the x-direction, the tip end part 322 accounts for a smaller proportion than the leg portion main body 321.

[Inner Film]

The inner film 41 is described with reference to FIG. 8. The inner film 41 is a film-like insulation member that holds the thermosensitive element 11 while electrically insulating the heat collection member 30 and the thermosensitive element 11, and is disposed so as to overlap with the surface of the abutting portion 31. A resin material such as polyimide or a fluorine resin is used for the inner film 41. The thickness of the inner film 41 is, for example, about 10 μm to about 20 μm. The inner film 41 is formed in a rectangular shape having a size sufficient to cover the entire surface of the abutting portion 31 in order to sufficiently secure a creepage distance between the temperature measurement object 7 and the thermosensitive element 11. More specifically, the length of the short side of the inner film 41 is set equivalent to or slightly wider than the dimension in the y-direction of the heat collection member 30, and the dimension in the x-direction is set to a length that does not cause excessive tensile force on the inner film 41 when the inner film 41 is attached to the holding member 20 and the heat collection member 30.
Although not illustrated, holes into which the paired first bosses221of the holding member20are inserted are provided at both ends in the x-direction of the inner film41. The inner film41is fixed to the holding member20in such a manner that the tip ends of the paired first bosses221are protruded from the surface of the inner film41while positions of the unillustrated paired holes are aligned with the positions of the paired first bosses221and the inner film41is mountain-folded, and then the tops of the paired first bosses221are thermally caulked. The thermosensitive portion110is disposed at a position corresponding to the through hole310of the heat collection member30, on the upper surface side of the inner film41. More specifically, as illustrated by a dashed line inFIG.8, the thermosensitive portion110is disposed at a position corresponding to the center of the through hole310of the heat collection member30. At this time, the thermosensitive element11is disposed such that the lead wires112and113are disposed in the paired grooves211B provided on the outer surface211C of the electric wire installation portion211A of the holding member20illustrated inFIGS.7A,7Band the lead wires112and113extend in the y-direction on the surface of the inner film41. Further, the lead wires112and113are bent at an end part of the inner film41so as to extend toward the holding member20(downward), and are disposed in the respective grooves211B. As described above, when the thermosensitive element11is disposed on the inner film41, the heat collection member30and the thermosensitive element11are thermally joined with each other through the inner film41. At this time, a slight deflection may be formed in a region where the inner film41and the heat collection member30overlap with each other, and the thermosensitive portion110of the thermosensitive element11may be housed in the deflection. 
[Heat Collection Material] When the thermosensitive element11is disposed on the inner film41, the insulation heat collection material43that is thermally joined with the thermosensitive element11is preferably filled around the thermosensitive portion110disposed in the through hole310in order to collect heat to the thermosensitive element11. For example, a material containing a dispersion medium such as a silicone resin having a high thermal conductivity among resin materials and an insulation dispersion medium such as ceramic powder is used for the heat collection material43. Further, so-called heat conductive grease or silicone oil compound can be used for the heat collection material43. The heat collection material43is filled over the inner film41. [Outer Film] The outer film42is described with reference toFIG.3andFIG.10. The outer film42is provided to insulate the thermosensitive element11and the temperature measurement object7while protecting the thermosensitive element11to prevent direct contact and damage of the thermosensitive element11and the temperature measurement object7when the temperature sensor10according to the present invention abuts on the temperature measurement object7. The outer film42covers the heat collection member30and the holding member20from the temperature measurement object7side, and holds the thermosensitive element11to the heat collection member30. Further, the outer film42covers most of the holding member20including the both side surfaces23in addition to the whole of the heat collection member30, in order to sufficiently secure the creepage distance between the temperature measurement object7and each of the thermosensitive element11and the conductive members121and122. A resin material such as polyimide and a fluorine resin is used for the outer film42. 
The outer film 42 is formed in a rectangular shape, and its length in the y-direction is set to a length that does not cause excessive tensile force on the outer film 42 when the outer film 42 is fixed to the holding member 20. This prevents the outer film 42 from inhibiting the swing when the holding member 20 swings. Further, unillustrated holes into which the respective second bosses 231 provided on the holding member 20 are inserted are provided at positions corresponding to the respective second bosses 231, on both ends in the y-direction of the outer film 42. Note that the length in the x-direction of the outer film 42 may be set to a length preventing direct contact of at least the thermosensitive element 11 and the temperature measurement object 7, but is preferably set to a length securing a sufficient creepage distance between the thermosensitive element 11 and the temperature measurement object 7. As illustrated in FIG. 10B, the outer film 42 may be configured by stacking two or more film materials, or may be configured by one film material. The entire thickness of the outer film 42 is, for example, about 10 μm to about 20 μm. The outer film 42 is disposed to overlap with the region of the inner film 41 where at least the thermosensitive element 11 is disposed, and is disposed on the holding member 20 by causing the second bosses 231 to be inserted into the holes. Further, the outer film 42 is fixed to the holding member 20 by thermally caulking the tops of the second bosses 231 on one of the side surfaces 23 of the holding member 20 and the tops of the second bosses 231 on the other side surface 23. At this time, a slight tensile force occurs on the outer film 42 between the second bosses 231 on one of the side surfaces 23 and the second bosses 231 on the other side surface 23, and in some cases a force in the z-direction toward the holding member 20 is applied from the outer film 42 to the thermosensitive element 11 and the inner film 41.
In such a case, the inner film41may be recessed downward relative to the through hole310. As illustrated inFIGS.10A and10B, when the abutting portion31is pressed against the temperature measurement object7through the outer film42by the coil spring61, the thermosensitive portion110sinks to the inside of the through hole310while deforming the inner film41, and abuts on the temperature measurement object7through the outer film42. More specifically, when the abutting portion31is pressed against the temperature measurement object7, the thermosensitive portion110sinks to be enclosed by the inner film41, and the heat collection material43is accumulated inside a recessed dent of the inner film41. At this time, an upper end of the thermosensitive portion110is located at a height position equivalent to the height position of the surface of the abutting portion31, or is located lower than the position of the surface of the abutting portion31. In other words, the thermosensitive portion110does not protrude upward from the surface of the abutting portion31. Likewise, the lead wires112and113do not protrude upward from the surface of the abutting portion31because being housed in the grooves311. This contributes to surface contact of the temperature measurement object7and the abutting portion31. The heat collection material43spreads around the thermosensitive portion110, and is thermally joined with the thermosensitive portion110and the lead wires112and113. The heat collection member30and the thermosensitive element11are sufficiently thermally joined with each other through the heat collection material43and the inner film41. 
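The thermal path just described (temperature measurement object 7 → outer film 42 → thermosensitive portion 110, backed by the heat collection material 43 and the inner film 41) can be put in rough numbers with a plane-wall conduction model. The sketch below is not part of the patent: the contact area, layer thicknesses, and conductivities are illustrative assumptions chosen to be plausible for a polyimide film of about 15 μm and a grease-like heat collection material.

```python
# Rough series thermal-resistance estimate for the heat path
# object 7 -> resin film -> thermosensitive portion.
# All numbers are illustrative assumptions, not values from the patent.

def conduction_resistance(thickness_m, conductivity_w_mk, area_m2):
    """Plane-wall conduction resistance R = t / (k * A), in K/W."""
    return thickness_m / (conductivity_w_mk * area_m2)

contact_area = 4e-3 * 4e-3                                  # assume a 4 mm x 4 mm contact patch
r_film = conduction_resistance(15e-6, 0.15, contact_area)   # polyimide, ~0.15 W/(m*K) assumed
r_grease = conduction_resistance(50e-6, 1.0, contact_area)  # heat collection material, ~1 W/(m*K) assumed

r_total = r_film + r_grease
print(f"film: {r_film:.2f} K/W, grease: {r_grease:.2f} K/W, total: {r_total:.2f} K/W")
```

Under these assumptions both layers contribute only a few kelvin per watt, which is consistent with the patent's point that a thin insulation film plus a filled heat collection material keeps the thermosensitive element closely coupled to the measurement object.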
[Behavior at Installation of Heat Collection Member30, and State Change Depending on Heat/External Force Application Condition] As described with reference toFIG.2A, in the temperature sensor10, when the heat collection member30is pressed against the temperature measurement object7by external force such as the elastic force F0of the coil spring61, the reaction force F2to the pressing force F1pressing the abutting portion31against the temperature measurement object7is applied to the abutting portion31as illustrated inFIG.11. The leg portions32of the heat collection member30are hardly elastically deformed by the reaction force F2, and the heat collection member30swings to bring the abutting portion31and the temperature measurement object7into surface contact with each other. At this time, in a case where the flat portion312of the abutting portion31and the temperature measurement object7do not squarely face each other, namely, in a case where the flat portion312is inclined to the temperature measurement object7, an application point of the pressing force F1and the reaction force F2shifts rightward or leftward from the center in the y-direction of the abutting portion31. Therefore, the heat collection member30swings with the peripheral edges322A of the leg portions32as the fulcrums, and the flat portion312of the abutting portion31and the temperature measurement object7accordingly squarely face each other. In other words, the heat collection member30swings rightward or leftward in the y-direction based on a direction of the inclination of the abutting portion31to the temperature measurement object7(see arrows inFIG.6A). As a result, the abutting portion31can follow and be brought into surface contact with the surface of the temperature measurement object7irrespective of dimensional tolerance and assembly tolerance of each of the heat collection member30, the temperature measurement object7, and the like. 
Further, no gap is generated between the temperature measurement object7and the abutting portion31because the heat collection member30swings. This makes it possible to sufficiently transfer the heat from the temperature measurement object7to the heat collection member30. The temperature sensor10according to the present invention is used in a state of being pressed against the temperature measurement object7, and the heat is continuously transferred from the temperature measurement object7to the heat collection member30. Therefore, creep deformation may occur on the heat collection member30with time. When the dimension in the z-direction of the heat collection member30is reduced due to occurrence of the creep deformation, the pressing force F1is reduced as compared with an initial state, but the pressing force F1greater than or equal to prescribed force sufficient to stably press the abutting portion31against the temperature measurement object7remains by the external force such as the elastic force of the coil spring61. The creep deformation is described with reference toFIG.12AandFIG.11. The bottom part215illustrated by a dashed line rises upward by the elastic force F0of the coil spring61or the like by the dimension in the z-direction of the heat collection member30reduced by the creep deformation (see alternate long and two short dashes straight line L inFIG.12A). Therefore, even when the leg portions32are contracted in the z-direction by the creep deformation, the abutting portion31is maintained, by the leg portions32, in a state of being held at a predetermined position illustrated by an alternate long and short dash line (position of temperature measurement object7). FIGS.12A to12Dschematically illustrate shape change of the heat collection member30by elastic deformation, based on an analysis result. 
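The claim that a sufficient pressing force F1 remains after creep shortens the heat collection member 30 can be illustrated with Hooke's law for the coil spring 61. The spring rate, initial compression, required force, and creep amounts below are illustrative assumptions, not figures from the patent.

```python
# Remaining pressing force after creep shortens the heat collection member.
# Hooke's-law sketch; all numeric values are assumed example values.

def pressing_force(spring_rate_n_mm, initial_deflection_mm, creep_loss_mm):
    """F1 = k * (x0 - delta_creep): the coil spring 61 relaxes by the
    amount the leg portions have shortened."""
    return spring_rate_n_mm * (initial_deflection_mm - creep_loss_mm)

k = 0.8          # N/mm, assumed spring rate of coil spring 61
x0 = 3.0         # mm, assumed initial compression
required = 1.5   # N, assumed minimum force for stable surface contact

for creep in (0.0, 0.2, 0.5):
    f1 = pressing_force(k, x0, creep)
    print(f"creep {creep} mm -> F1 = {f1:.2f} N, sufficient = {f1 >= required}")
```

As long as the creep loss stays well below the initial compression, the force reduction is proportionally small, which is the mechanism by which "pressing force F1 greater than or equal to prescribed force" remains available.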
As illustrated inFIG.11that is an extracted diagram ofFIG.12A, the temperature sensor10according to the present embodiment is pressed against the temperature measurement object7with the predetermined pressing force F1by the elastic force F0of the coil spring61or the like in the initial state. The pressing force F1is appropriately set in consideration of rigidity of each of the heat collection member30and the temperature measurement object7. When the reaction force F2acts on the abutting portion31, the abutting portion31is hardly displaced in the z-direction because of rigidity of the leg portions32. In addition, even when force F3that separates the leg portions32from each other in the width direction D1(x-direction) acts by the reaction force F2of the abutting portion31, the leg portions32do not move outward in the width direction D1because outward movement in the width direction D1of the leg portions32is restricted by the front wall213and the rear wall214of the holding member20. Accordingly, in the heat collection member30, the tip end parts322mainly deform in a compression direction of the z-direction. As a result, also in a case where the creep deformation occurs, as illustrated by arrows A1inFIG.11, boundary portions323between the leg portion main bodies321and the respective tip end parts322displace in a direction approaching each other, and stress deforming the abutting portion31in a convex direction relative to the temperature measurement object7acts on the abutting portion31as illustrated in an arrow A2inFIG.11. This is true of a case where the leg portion main bodies321are deformed by the excessive pressing force F1as illustrated inFIG.12D. As described above, the stress in the convex direction relative to the temperature measurement object7acts on the abutting portion31. 
Therefore, the abutting portion31can tightly contact with the surface of the temperature measurement object7over a wide range including the position of the thermosensitive portion110and surroundings thereof. Unlike the present embodiment, it is assumed that, as with a heat collection member50according to a comparative example illustrated inFIG.13, leg portions52are each formed in a shape opening outward relative to the abutting portion31in a no-load state, and outward movement of the leg portions52by force F3separating the leg portions52from each other by the reaction force F2is not restricted. In this case, the leg portions52further open outward by the force F3, and the heat collection member50deforms in an M-shape in a side view as a whole. As a result, the abutting portion31deforms in a concave direction relative to the temperature measurement object7. Therefore, the abutting portion31is separated from the temperature measurement object7at and around the position of the thermosensitive portion110, and a space distance between conductors in the temperature measurement object7and the temperature sensor10is reduced. To sufficiently generate, in the abutting portion31, stress in the convex direction relative to the temperature measurement object7, the leg portions32are preferably bent inward in the width direction D1from respective ends31xof the abutting portion31. In the present embodiment, the angle formed by each of the leg portions32to the abutting portion31is set to a right angle; however, the tolerance of each of the leg portions32is preferably set inward in the width direction D1such that the leg portions32are processed in a state of being bent inward in the width direction D1even by a minute amount. [Procedure of Assembling Temperature Sensor] For example, the temperature sensor10can be assembled by the following procedure. The procedure is described. 
(1) Terminal ends of the lead wires 112 and 113 are electrically and mechanically connected to the conductive members 121 and 122 (FIG. 10A) insert-molded in the holding member 20, by welding or the like. (2) The leg portions 32 of the heat collection member 30 are inserted into the housing portion 201 of the holding member 20 illustrated in FIGS. 5A and 5B. The tip end parts 322 of the paired leg portions 32 are disposed at the corners 216 and 217 at the bottom part 215 of the housing portion 201. At this time, the abutting portion 31 is positioned above the upper end of the wall body 21. (3) As illustrated in FIG. 8, after the heat collection member 30 is covered with the inner film 41 and the first bosses 221 are inserted into the holes provided at both ends of the inner film 41, the inner film 41 is fixed to the holding member 20 by thermal caulking, which applies heat and pressure to the first bosses 221. (4) The lead wires 112 and 113 are laid while being shaped, and the thermosensitive element 11 is assembled to the holding member 20 and the heat collection member 30. More specifically, after the lead wires 112 and 113 are bent upward from the conductive members 121 and 122 along the electric wire installation portion 211A of the holding member 20, the lead wires 112 and 113 are also bent at the position of the abutting portion 31, thereby disposing the thermosensitive portion 110 in the through hole 310 of the abutting portion 31 on which the inner film 41 is placed. (5) The heat collection material 43 is supplied to the thermosensitive portion 110 and the vicinity thereof. (6) As illustrated in FIG. 1, most of the holding member 20, including the thermosensitive element 11 and the heat collection material 43, is covered with the outer film 42, and the outer film 42 is fixed to the holding member 20 by thermal caulking of the second bosses 231 in a manner similar to the first bosses 221. The assembly of the temperature sensor 10 is completed through the steps (1) to (6).
With the temperature sensor10according to the present embodiment described above, it is possible to realize the small temperature sensor10that includes heat insulation property, durability under a use condition where the creep deformation occurs, following capability to the temperature measurement object7, and responsiveness greater than or equivalent to a case where so-called ceramic paper is used, by the heat collection member30that is pressed against the temperature measurement object7, the holding member20that receives the heat collection member30against the force F2and F3, and the space20S present on the rear surface side of the heat collection member30. [Other Embodiment of Heat Collection Member] FIGS.14A and14Beach illustrate a heat collection member30-1that can be used as substitute for the heat collection member30according to the above-described embodiment. The abutting portion31of the heat collection member30-1includes, in place of the through hole310, a groove310-1in which the thermosensitive portion110and a part of the lead wires112and113are disposed. The groove310-1extends in the y-direction in the abutting portion31, and is formed in a concave shape that is bent downward from the surface of the abutting portion31. The groove310-1is formed from one end to the other end in the y-direction of the abutting portion31. The groove310-1has a width (dimension in x-direction) and a depth sufficient to house the thermosensitive portion110. As in the above-described embodiment, insulation between the thermosensitive element11disposed in the groove310-1and the heat collection member30-1is realized and the thermosensitive element11is held by the inner film41disposed on the surface of the abutting portion31. The inside of the groove310-1is filled with the heat collection material43through the inner film41. When the heat collection member30-1including the groove310-1is adopted, the heat collection member30-1is disposed below the thermosensitive element11. 
Therefore, the heat can be sufficiently kept in the thermosensitive element11positioned between the temperature measurement object7and the heat collection member30-1, and the temperature of the thermosensitive element11can excellently follow the temperature variation of the temperature measurement object7. In a case where the lead wires112and113each include an insulation covering, the inner film41is not always necessary even when the heat collection member30-1has conductivity. Likewise, in a case where the heat collection member30-1is a member not having conductivity, for example, a resin molded product, the inner film41is unnecessary, and installation thereof can be omitted. The width and the depth of the groove310-1may not be constant as long as heat collection property and heat insulation property can be secured. For example, the dimension of the groove310-1in the z-direction at the position where the thermosensitive element11is disposed may be greater than the dimension of the region where the lead wires112and113are disposed, or a through hole may be provided. [Modification of Heat Collection Member] A modification of the heat collection member is described below. FIGS.15A to15Ceach illustrate a heat collection member70provided in a temperature sensor according to a modification. The heat collection member70can be adopted in place of the heat collection member30in the temperature sensor10according to the above-described embodiment. Differences from the above-described embodiment are mainly described below. The heat collection member70includes an abutting portion71similar to the abutting portion31of the heat collection member30, and paired spring leg portions72that support the abutting portion71at positions separated in the width direction D1and abut on the bottom part215of the holding member20. 
The temperature sensor including the thermosensitive element11, the holding member20, and the heat collection member70is placed on the temperature measurement object7in a state where the spring leg portions72are elastically deformed by a predetermined amount in a compression direction of the z-direction, when the holding member20is supported by an unillustrated supporting member. In this state, the spring leg portions72press the abutting portion71against the temperature measurement object7by its elastic force. The heat collection member70is disposed in the housing portion201of the holding member20, and protrudes upward from the upper end of the wall body21over a region substantially ½ of the entire height. The height of the protruding region of the heat collection member70is varied based on the space distance necessary between the temperature measurement object7and the conductive members121and122and the like. As with the heat collection member30, the heat collection member70can be integrally molded by using a material excellent in thermal conductivity, such as a metal material. The heat collection member70according to the present modification is integrally formed by, for example, performing stamping and bending press working on a metal plate having a thickness of about 0.03 mm to about 0.2 mm. The spring leg portions72are bent inward in the width direction D1from the abutting portion71on both sides in the width direction D1of the abutting portion71provided with notches71A. Each of the spring leg portions72forms a downward acute angle to the abutting portion71at each end71xin the width direction D1of the abutting portion71, and straightly extends to the bottom part215in a no-load state. Only tip ends72A of the spring leg portions72are bent upward from portions disposed on the bottom part215, and are separated from the bottom part215. 
In the no-load state, the tip ends72A of the spring leg portions72are separated from each other by a predetermined dimension in the x-direction. Each of the spring leg portions72is branched in a fork shape by a slit72S extending upward from the tip end72A except for a base end part. When force pressing the abutting portion71against the temperature measurement object7by the elastic force of the spring leg portions72is referred to as pressing force F1, the tip ends72A of the paired spring leg portions72displace in a direction approaching each other while sliding on the bottom part215by reaction force F2applied to the abutting portion71as illustrated by arrows A1inFIG.15B, and stress deforming the abutting portion71in a convex direction relative to the temperature measurement object7acts on the abutting portion71as illustrated by an arrow A2. This is true of a case where an elastic deformation amount is increased as illustrated inFIG.15C. Since the spring leg portions72are disposed in a state of being inclined to the z-direction from positions above the upper end of the holding member20to the bottom part215, the spring leg portions72each have a long spring length. Therefore, even in a case where the heat collection member70is downsized in a planar direction, each of the spring leg portions72realizes the elastic deformation amount necessary to enable the heat collection member70to have strength against a use condition of the temperature sensor. The elastic deformation amount in the z-direction of each of the spring leg portions72is sufficiently greater than the dimensional tolerance and the assembly tolerance of each of the temperature sensor10and the temperature measurement object7, and a deformation amount caused by the creep deformation. The elastic deformation of the heat collection member70enables a posture of the abutting portion71to follow the surface of the temperature measurement object7irrespective of the dimensional tolerance and the assembly tolerance. 
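The benefit of the long spring length of the spring leg portions 72 can be illustrated with the standard cantilever-beam formula δ = F·L³/(3·E·I). The material and dimensions below are illustrative assumptions (a thin stainless strip within the 0.03 mm to 0.2 mm thickness range mentioned above): doubling the free length multiplies the deflection per unit force by eight, which is why an inclined leg reaching down to the bottom part 215 can absorb tolerances and creep at the same load.

```python
# Why a longer spring leg gives a larger elastic working range:
# cantilever sketch, delta = F * L^3 / (3 * E * I), with I = b * t^3 / 12.
# Material and dimensions are illustrative assumptions, not patent values.

def tip_deflection(force_n, length_m, e_pa, width_m, thickness_m):
    i = width_m * thickness_m**3 / 12.0   # second moment of area of the strip
    return force_n * length_m**3 / (3.0 * e_pa * i)

E = 193e9            # Pa, assumed Young's modulus (stainless steel)
b, t = 3e-3, 0.1e-3  # 3 mm wide, 0.1 mm thick strip (assumed)
F = 1.0              # N, assumed load per leg

short = tip_deflection(F, 4e-3, E, b, t)   # short leg, 4 mm free length
long_ = tip_deflection(F, 8e-3, E, b, t)   # long leg, 8 mm free length
print(f"L=4 mm: {short*1e3:.3f} mm, L=8 mm: {long_*1e3:.3f} mm (x{long_/short:.0f})")
```

The cubic dependence on length is the design lever: the folded-back legs of the later variants (heat collection members 70-1 and 70-2) increase L without increasing the footprint.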
This makes it possible to bring the abutting portion71and the temperature measurement object7into close surface contact with each other. Since the tip end of each of the spring leg portions72is branched in a fork shape, the posture of the abutting portion71can follow the temperature measurement object7more. Further, even when the creep deformation in the compression direction of the z-direction occurs on the heat collection member70due to long use, and the dimension in the z-direction between the temperature measurement object7and the bottom part215is increased with reduction in the dimension in the z-direction of the heat collection member70, the pressing force F1necessary to stably press the abutting portion71against the temperature measurement object7remains by the elastic force of the spring leg portions72. Since it is possible to press the abutting portion71against the temperature measurement object7with necessary pressure by the elastic force of the spring leg portions72, it is unnecessary to press the abutting portion71by the elastic force of the coil spring61, the plate spring62, or the like, unlike the above-described embodiment (FIGS.2A,2B). Note that, in the modification, pressing of the heat collection member70against the temperature measurement object7by the elastic force of the coil spring61or the plate spring62through the holding member20is not interfered. The temperature sensor according to the modification including the heat collection member70can also realize the small temperature sensor that includes heat insulation property, durability against the creep deformation, following capability to the temperature measurement object7, and responsiveness greater than or equivalent to a so-called ceramic paper, as with the temperature sensor10according to the above-described embodiment. Heat collection members each substitutable for the heat collection member70are described with reference toFIG.16. 
Each of paired spring leg portions72-1of a heat collection member70-1illustrated inFIGS.16A to16Eis folded back in order to increase a spring length. Upper parts721of the spring leg portions72-1are bent inward in the width direction D1at both ends of the abutting portion71as with the spring leg portions72inFIG.15. Lower parts722of the spring leg portions72-1are bent outward in the width direction D1from lower ends of the upper parts721. Each of the lower parts722forms an acute angle to the corresponding upper part721. Lower ends of the lower parts722are disposed on the bottom part215. Boundaries723between the upper parts721and the lower parts722protrude inward in the width direction D1.FIGS.16B to16Dschematically illustrate shape change of the heat collection member70-1by elastic deformation, based on an analysis result. A heat collection member70-2illustrated inFIGS.16F and16Gis increased in height and spring length as compared with the heat collection member70-1. The heat collection member70-2exhibits behavior similar to the behavior of the heat collection member70-1, as a spring. Other than the above, the configurations described in the above-described embodiment can be selected or appropriately changed to other configurations without departing from the spirit of the present invention. In the embodiment and the modification described above, the case of using the thermosensitive element that has the configuration in which the lead wires112and113extend in one direction from one side of the thermosensitive body111as illustrated inFIG.4is described as an example; however, a thermosensitive element in which the lead wires112and113extend in both directions of the thermosensitive body111may be used. In this case, the lead wires112and113extend up to the conductive members121and122through both side surfaces23of the holding member20. 
Further, in the embodiment and the modification described above, the case where the lead wires112and113are drawn out from the thermosensitive body111in the y-direction is described as an example; however, the lead wires112and113may be drawn out from the thermosensitive body111in the x-direction. The range where each of the holding member20, the heat collection member30, and the like is covered with each of the inner film41and the outer film42, and the positions where the films41and42are fixed can be appropriately determined in consideration of the shapes of the holding member20and the heat collection member30, the arrangement of the thermosensitive element11, the required creepage distance, and the like. In a case where film materials having the same configuration are used for the inner film41and the outer film42, one continuous film can be folded and piled, an inner region of the film can be used as an inner insulation portion, and an outer region can be used as an outer insulation portion. [Example of Application to Image Forming Apparatus] An example in which the temperature detection device1including the temperature sensor10is applied to a laser printer9as an example of an image forming apparatus is briefly described with reference toFIG.17. Note that, in place of the temperature sensor10, the temperature sensor including any of the heat collection members illustrated in and afterFIG.14is adoptable. As illustrated inFIG.17, the laser printer9includes a photosensitive belt91, a charger92, an exposure device93, developing devices901to904, a guide roller94, an intermediate transfer unit95, a sheet feeding cassette96, a sheet feeding roller97, a transfer roller98, a fuser99, a registration roller910, a sheet discharge roller911, a sheet discharge tray912, and a control device900controlling the units of the laser printer9. The fuser99includes a pressure roller991and a heating roller992. 
The heating roller 992 internally includes an unillustrated heater as a heat source. To measure the temperature of the heater incorporated in the heating roller 992 or the temperature of a member provided in the heater, the temperature sensor 10 is installed so as to be pressed against the heater or the member. In the fusing process, which follows the charging, exposure, development, and transfer processes of image formation by the laser printer 9, a recording sheet 913 onto which a color toner image has been transferred is sent to the gap between the pressure roller 991 and the heating roller 992 of the fuser 99. When the recording sheet 913 is pressurized and heated while passing through the gap between the pressure roller 991 and the heating roller 992, the color toner image is fixed onto the recording sheet 913. Thereafter, the recording sheet 913 is discharged to the sheet discharge tray 912 through the sheet discharge roller 911. The control device 900 controls the energization state of the heater of the heating roller 992 by using a temperature measurement value obtained by the temperature sensor 10 and the circuit unit 8 connected to the temperature sensor 10. For example, when the temperature measurement value exceeds a threshold, the control device 900 stops energization to the heater of the heating roller 992. A surface temperature of the heating roller 992 is measured by the temperature sensor 10 with high following capability. Therefore, it is possible to appropriately control the energization state of the heater without extra heating of the heating roller 992 by the heater in anticipation of measurement response delay. Other than the above, the configurations described in the above-described embodiment can be selected or appropriately changed to other configurations without departing from the spirit of the present invention.
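The threshold-based control performed by the control device 900 can be sketched in a few lines. The patent only states that energization stops when the measured value exceeds a threshold; the conversion law, the NTC-thermistor assumption, and all constants (R0, B, the 200 °C threshold) below are illustrative assumptions, not part of the patent.

```python
# Over-temperature cutoff sketch for the fuser heater, assuming the
# thermosensitive element is an NTC thermistor read through the circuit
# unit 8 and converted with the common B-parameter equation.
import math

R0, T0, B = 10_000.0, 298.15, 3950.0   # assumed: 10 kOhm at 25 degC, B = 3950 K

def ntc_temperature_c(resistance_ohm):
    """B-parameter equation: 1/T = 1/T0 + (1/B) * ln(R/R0), T in kelvin."""
    inv_t = 1.0 / T0 + math.log(resistance_ohm / R0) / B
    return 1.0 / inv_t - 273.15

def heater_enabled(resistance_ohm, threshold_c=200.0):
    """Stop energization once the measured temperature exceeds the threshold."""
    return ntc_temperature_c(resistance_ohm) <= threshold_c

print(ntc_temperature_c(10_000.0))  # 25 degC by definition of R0 and T0
print(heater_enabled(10_000.0))     # heater may remain energized when cool
```

A real fuser controller would add hysteresis and closed-loop regulation around a target temperature; the point here is only the cutoff decision the patent describes.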
11860560 | The accompanying drawings are intended to depict embodiments of the present disclosure and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted. Also, identical or similar reference numerals designate identical or similar components throughout the several views. DETAILED DESCRIPTION In describing embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that have a similar function, operate in a similar manner, and achieve a similar result. Referring now to the drawings, embodiments of the present disclosure are described below. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. For the sake of simplicity, like reference numerals are given to identical or corresponding constituent elements such as parts and materials having the same functions, and redundant descriptions thereof are omitted unless otherwise required. As used herein, the term “connected/coupled” includes both direct connections and connections in which there are one or more intermediate connecting elements. Initially, with reference toFIG.1, a description is given of the overall configuration of an image forming apparatus according to an embodiment of the present disclosure. FIG.1is a schematic diagram illustrating a configuration of an image forming apparatus according to an embodiment of the present disclosure. 
The image forming apparatus that is illustrated inFIG.1is a printer that forms a toner image by electrophotography, transfers and fixes the toner image onto a recording medium such as a sheet of paper, and finally outputs the recording medium as a printed matter. Specifically, the printer500serving as an image forming apparatus includes a toner-image forming device1, a primary transfer device2, a sheet supply device3, a secondary transfer device4, a conveyor belt device5, a fixing device6, a duplex conveyance device7, a sheet ejection device8, an exposure device9, and a toner-bottle accommodation device10. The toner-image forming device1serving as an image forming device includes a plurality of photoconductive developing stations10a,10b,10c, and10d. The photoconductive developing stations10a,10b,10c, and10dare disposed along a moving direction of a primary transfer belt20described later. For example, the photoconductive developing station10aforms a yellow (Y) toner image. The photoconductive developing station10bforms a magenta (M) toner image. The photoconductive developing station10cforms a cyan (C) toner image. The photoconductive developing station10dforms a black (K) toner image. The photoconductive developing stations10a,10b,10c, and10drespectively include drum-shaped photoconductors11a,11b,11c, and11dserving as image bearers, chargers12a,12b,12c, and12dthat respectively charge the surfaces of the photoconductors11a,11b,11c, and11d, developing devices13a,13b,13c, and13dthat respectively develop electrostatic latent images formed on the photoconductors11a,11b,11c, and11d, and cleaners14a,14b,14c, and14dthat respectively clean the surfaces of the photoconductors11a,11b,11c, and11d. The primary transfer device2serving as a first transfer device is disposed below the toner-image forming device1. 
The primary transfer device2includes the primary transfer belt20serving as a first mover, primary transfer rollers21a,21b,21c, and21d, a secondary-transfer counter roller22, and a primary-transfer belt cleaner23. The primary transfer belt20is an endless belt formed of a single layer or a plurality of layers of, for example, polyvinylidene fluoride (PVDF), ethylene-tetrafluoroethylene copolymer (ETFE), polyimide (PI), or polycarbonate (PC). The primary transfer belt20is entrained around the primary transfer rollers21a,21b,21c, and21d, the secondary-transfer counter roller22, and a plurality of support rollers to move clockwise in a direction A inFIG.1. The primary transfer rollers21a,21b,21c, and21dface the photoconductors11a,11b,11c, and11d, respectively, via the primary transfer belt20. In other words, the primary transfer rollers21a,21b,21c, and21dsandwich the primary transfer belt20together with the photoconductors11a,11b,11c, and11d, respectively. Thus, the primary transfer belt20contacts each of the photoconductors11a,11b,11c, and11dto form a primary transfer area, which may be referred to as a primary transfer nip, between the primary transfer belt20and each of the photoconductors11a,11b,11c, and11d. The primary transfer area forms a primary transfer electric field between each of the primary transfer rollers21a,21b,21c, and21dand the corresponding one of the photoconductors11a,11b,11c, and11dto electrostatically move the toner image from the surface of each of the photoconductors11a,11b,11c, and11dto the primary transfer belt20. As the primary transfer belt20receives the toner image from each of the photoconductors11a,11b,11c, and11dat the position where the primary transfer belt20contacts each of the photoconductors11a,11b,11c, and11d, the primary transfer belt20moves in the direction A to convey the toner image toward the secondary-transfer counter roller22. 
The secondary-transfer counter roller22forms a secondary transfer area, which may be referred to as a secondary transfer nip, together with a secondary transfer roller41described later. The primary-transfer belt cleaner23is disposed downstream from the secondary-transfer counter roller22in the moving direction of the primary transfer belt20to clean the surface of the primary transfer belt20that has passed by the secondary-transfer counter roller22. The sheet supply device3is disposed below the primary transfer device2. The sheet supply device3includes a conveyance roller pair30and a conveyance roller unit31to convey recording media separated and fed one by one from a sheet storage to the secondary transfer device4. The sheet storage is coupled to the body of the printer500so as to communicate with sheet conveyance passages3a,3b, and3cof the sheet supply device3. The secondary transfer device4serving as a second transfer device is disposed below the primary transfer device2. The secondary transfer device4includes, for example, a secondary transfer belt40serving as a second mover and the secondary transfer roller41. The secondary transfer belt40is an endless belt formed of a single layer or a plurality of layers of, for example, PVDF, ETFE, PI, or PC. The secondary transfer belt40is entrained around the secondary transfer roller41and a plurality of support rollers to move counterclockwise inFIG.1. The secondary transfer belt40conveys the recording medium fed from the conveyance roller pair30. The secondary transfer belt40also transfers the toner image from the primary transfer belt20onto the recording medium at the secondary transfer area where the secondary transfer roller41and the secondary-transfer counter roller22face each other. A detailed description of the secondary transfer device4is deferred. The conveyor belt device5is disposed below the primary transfer device2. 
The conveyor belt device5guides the recording medium that has passed by the secondary transfer device4to the fixing device6. The fixing device6is disposed below the primary transfer device2. The fixing device6applies, for example, heat and pressure to the toner image transferred onto the recording medium by the secondary transfer device4, to fix the toner image onto the recording medium. The duplex conveyance device7is disposed below, for example, the secondary transfer device4, the conveyor belt device5, and the fixing device6. When the printer500performs duplex printing, the recording medium bearing the fixed toner image passes through the duplex conveyance device7and is returned toward the conveyance roller unit31. The sheet ejection device8is disposed behind the fixing device6, in other words, downstream from the fixing device6in a recording-medium conveyance direction in which the recording medium is conveyed. The sheet ejection device8conveys the recording medium sent out from the fixing device6toward the outside of the printer500or toward the duplex conveyance device7. The exposure device9is disposed above the toner-image forming device1. Laser light that is emitted by a light source of the exposure device9is guided to the photoconductors11a,11b,11c, and11dvia optical components such as lenses and mirrors, to form an electrostatic latent image on the surface of each of the photoconductors11a,11b,11c, and11d. The toner-bottle accommodation device10is disposed above the exposure device9. Toner bottles100a,100b,100c, and100dcontaining toner to be supplied to the developing devices13a,13b,13c, and13d, respectively, are detachably attached to the toner-bottle accommodation device10. In the configuration described above, when receiving image data from, for example, an external computer, the printer500starts a print job and starts driving, for example, the toner-image forming device1, the primary transfer device2, and the exposure device9.
In the toner-image forming device1, the chargers12a,12b,12c, and12duniformly charge the surfaces of the rotationally driven photoconductors11a,11b,11c, and11d, respectively, to a given charging potential. The exposure device9forms an electrostatic latent image on the charged surface of each of the photoconductors11a,11b,11c, and11d. The developing devices13a,13b,13c, and13drespectively develop the electrostatic latent images formed on the photoconductors11a,11b,11c, and11das toner images. The toner images are then sequentially transferred onto the primary transfer belt20. After the toner images are transferred, the cleaners14a,14b,14c, and14dclean the surfaces of the photoconductors11a,11b,11c, and11d, respectively. In parallel with the toner image formation described above, the sheet supply device3conveys the recording medium toward the conveyance roller pair30. The conveyance roller pair30is a registration roller pair. When the recording medium abuts against the conveyance roller pair30, the conveyance roller pair30temporarily stops the conveyance of the recording medium. The conveyance roller pair30resumes the conveyance of the recording medium so that the recording medium meets the toner image that has been transferred onto the primary transfer belt20and arrives at the secondary transfer nip. In other words, the recording medium whose conveyance is resumed meets the toner image on the primary transfer belt20at the secondary transfer nip, where the toner image is transferred onto the surface of the recording medium. The conveyor belt device5conveys the recording medium bearing the transferred toner image to the fixing device6. The fixing device6applies heat and pressure to the recording medium bearing the toner image to fix the toner image onto the recording medium. After the toner image is fixed to the recording medium, the recording medium is conveyed to the sheet ejection device8. 
In the sheet ejection device8, for example, a direction switching claw switches the course of the recording medium to the outside of the printer500or to the duplex conveyance device7. When the recording medium is sent from the sheet ejection device8to the duplex conveyance device7, the recording medium is sent again to the secondary transfer nip, where another toner image is formed on the back side of the recording medium. Thereafter, the recording medium is finally ejected from the sheet ejection device8. After the primary transfer belt20passes through the secondary transfer nip, the primary-transfer belt cleaner23cleans the surface of the primary transfer belt20to remove the residue such as the toner from the surface of the primary transfer belt20. The printer500includes a temperature-humidity sensor200that detects the temperature and humidity inside the body of the printer500. The temperature and humidity information that is detected by the temperature-humidity sensor200is used to adjust image-forming conditions described later. The numbers of, for example, the photoconductive developing stations10a,10b,10c, and10dand the toner bottles100a,100b,100c, and100dincluded in the printer500may increase or decrease as appropriate for the type and number of colors of the toner used in the printer500. The recording medium that is used for printing is not limited to a sheet of paper. Alternatively, for example, the recording medium may be made of fiber, fabric, leather, metal, plastic, glass, wood, or ceramics. Referring now toFIG.2, a description is given of a configuration around the secondary transfer device4. FIG.2is a diagram illustrating a configuration around the secondary transfer device4, according to the present embodiment. 
In addition to the secondary transfer belt40and the secondary transfer roller41illustrated inFIG.1, the secondary transfer device4includes a plurality of support rollers42a,42b,42c, and42d, a density sensor43serving as an adhesion-amount detector, a secondary-transfer belt cleaner44, and a frame45that holds the secondary transfer belt40, the secondary transfer roller41, the support rollers42a,42b,42c, and42d, the density sensor43, and the secondary-transfer belt cleaner44. The secondary transfer device4further includes a pressure applier46serving as a transfer-pressure changer. The secondary transfer belt40is entrained around the secondary transfer roller41and the plurality of support rollers42a,42b,42c, and42dto move counterclockwise in a direction B inFIG.2. The secondary transfer roller41sandwiches the secondary transfer belt40together with the primary transfer belt20facing the secondary transfer roller41to form the secondary transfer area, which may be referred to as a secondary transfer nip P in the following description, between the primary transfer belt20and the secondary transfer belt40facing each other. The secondary transfer nip P forms a secondary transfer electric field to electrostatically move a toner image T, which has been transferred onto the surface of the primary transfer belt20, to a recording medium S conveyed to the secondary transfer nip P. On the other hand, the secondary transfer area P forms the secondary transfer electric field to electrostatically move a toner image T′, which has been transferred onto the surface of the primary transfer belt20, to the secondary transfer belt40. The density sensor43is disposed to face the surface of the secondary transfer belt40. The density sensor43detects the amount of toner adhering to the secondary transfer belt when the toner image T′ is transferred from the primary transfer belt20onto the secondary transfer belt40. 
In the following description, the amount of toner adhering to the secondary transfer belt40may be referred to simply as the amount of adhered toner. The density sensor43includes a light-emitting device such as an infrared light emitting diode (LED) and a light-receiving device such as a phototransistor that receives reflected light and outputs an electric signal corresponding to the intensity of the light received. The configuration of the density sensor43is not limited to the aforementioned configuration provided that the density sensor43can detect the amount of adhered toner. The secondary-transfer belt cleaner44is disposed downstream from the density sensor43in a moving direction of the secondary transfer belt40to clean the surface of the secondary transfer belt40that has passed by the density sensor43. The pressure applier46includes a cam47and an arm48that is supported so as to be swingable in a direction C with rotation of the cam47. The pressure applier46is positioned to allow the arm48to contact part of the frame45. The frame45is displaceable in the direction C depending on the position of the arm48. In other words, the pressure applier46can change a transfer pressure, which is a pressure generated at the secondary transfer nip P. At the secondary transfer nip P, the toner image T is transferred from the primary transfer belt20onto the recording medium S. The recording medium S is then conveyed in a direction D toward the fixing device6, which fixes the toner image T onto the recording medium S. On the other hand, the toner image T′ is not transferred onto the recording medium S at the secondary transfer nip P. Instead, the toner image T′ is transferred onto the secondary transfer belt40in a sheet interval between the preceding recording medium and the succeeding recording medium. In short, no recording medium is present in the sheet interval. The toner image T′ is, for example, a given test pattern image. 
The toner image T′ is formed on the secondary transfer belt40and detected by the density sensor43for every given number of recording media S onto each of which the toner image T is transferred. Referring now toFIGS.3A and3B, a description is given of a reason why the density sensor43is disposed in the secondary transfer device4. FIGS.3A and3Bare diagrams each illustrating the density sensor43according to the present embodiment. The density sensor43includes a light-emitting device43a, a first light-receiving device43b, and a second light-receiving device43c. The light-emitting device43ais, for example, an infrared LED. The first light-receiving device43breceives specularly reflected light, which is light reflected at a reflection angle equal to an incident angle of light striking a reflection surface Rs. The second light-receiving device43creceives diffusely reflected light, which is light diffusely reflected by the reflection surface Rs. In the present embodiment, the density sensor43detects the specularly reflected light with the first light-receiving device43b. The reflection surface Rs is an elastic body. FIG.3Aillustrates a case where no toner image is present on the reflection surface Rs whereasFIG.3Billustrates a case where a toner image t adheres to the reflection surface Rs. In the case that is illustrated inFIG.3A, the light from the light-emitting device43ais reflected in proportion to the specular glossiness of the surface of the elastic body. The first light-receiving device43bdetects the reflected light. By contrast, in the case that is illustrated inFIG.3B, the light from the light-emitting device43ais scattered by the toner image t. In other words, the specularly reflected light decreases as the amount of adhered toner increases. In particular, in a case where the toner image t is a black toner image, the light from the light-emitting device43ais scattered or absorbed by the toner surface.
In short, the specularly reflected light remarkably decreases. When the amount of adhered toner is obtained based on the specularly reflected light detected by the first light-receiving device43b, the amount of adhered toner can be obtained by the ratio between the smoothness of the reflection surface Rs and the roughness of the toner image t, in other words, the ratio in the specular glossiness between the reflection surface Rs and the toner image t. However, in a case where the reflection surface Rs is an elastic body, the reflection surface Rs is relatively rough, and thus the density sensor43hardly obtains the light specularly reflected from the reflection surface Rs. In short, the density sensor43may fail to correctly detect the amount of adhered toner. In the printer500of the present embodiment, the primary transfer belt20is an elastic belt having an elastic layer at least on the surface of the primary transfer belt20. If the amount of adhered toner is to be detected on the primary transfer belt20, a density sensor may fail to correctly detect the amount of adhered toner. On the other hand, the secondary transfer belt40is made of a resin film having relatively high glossiness such as PI. For this reason, the density sensor43easily obtains the light specularly reflected from the reflection surface Rs. Thus, the ratio in the specular glossiness between the reflection surface Rs and the toner image t is easily obtained. Accordingly, in the present embodiment, the density sensor43is disposed in the secondary transfer device4. The secondary transfer device4includes the pressure applier46as illustrated inFIG.2to allow printing on various types of recording media (for example, sheets in different thicknesses or surface roughness). In the secondary transfer device4, the pressure applier46sets the transfer pressure at the secondary transfer nip P as appropriate for the type of the recording medium S and transfers the toner image T onto the recording medium S. 
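The specular-reflection reasoning above, in which the specular signal falls as toner covers the glossy belt surface, can be illustrated with a toy model. The linear relation and the normalized clean-belt signal below are assumptions for illustration only, not the sensor model disclosed in the embodiment.

```python
# Illustrative sketch: infer toner coverage from the drop in the specular
# signal relative to the clean (bare) secondary transfer belt surface.
# A linear coverage model and a normalized clean-belt signal are assumed.

CLEAN_BELT_SIGNAL = 1.0  # normalized specular signal of the bare belt (assumed)

def estimate_coverage(specular_signal: float) -> float:
    """Fraction of the belt surface covered by toner, clamped to [0, 1]."""
    ratio = specular_signal / CLEAN_BELT_SIGNAL
    return max(0.0, min(1.0, 1.0 - ratio))
```

Under this model, a reading equal to the clean-belt signal implies no adhered toner, and a weaker reading implies proportionally more coverage, which mirrors why a glossy resin belt such as PI gives the sensor a usable baseline while a rough elastic belt does not.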
On the other hand, since the toner image T′ that is formed as a test pattern image is transferred onto the secondary transfer belt40in the sheet interval, the toner image T′ is to be transferred onto the secondary transfer belt40at a constant transfer pressure regardless of the type of the recording medium S that is used for printing. However, in typical printers, the toner image T′, which may be referred to as a test pattern image T′ in the following description, is transferred onto a secondary transfer belt at the same transfer pressure as the transfer pressure at which the toner image T is transferred onto the recording medium S. This is because the switching operation of the transfer pressure in the sheet interval does not catch up with the printing speed and lowers the productivity of the printed matter. For this reason, the test pattern image T′ is transferred onto the secondary transfer belt at the same transfer pressure as the transfer pressure at which the toner image T is transferred onto the recording medium S. As a result, the transfer rate of the test pattern image T′ onto the secondary transfer belt decreases. The amount of adhered toner that is calculated based on the detection value output from the density sensor43also indicates a value lower than the actual amount of adhered toner. In short, typical printers have some difficulties in optimizing the image-forming conditions. In other words, typical printers have some difficulties in keeping the density of the toner images stable during continuous printing. For this reason, the image quality decreases. By contrast, the printer500according to the present embodiment estimates a variable value for the amount of adhered toner of the test pattern image T′ that varies with a change in the transfer pressure at the secondary transfer nip P. Based on the estimated variable value, the printer500corrects the detection value output from the density sensor43. 
Based on the corrected detection value, the printer500adjusts the image-forming conditions. Accordingly, the detection of the amount of adhered toner on the secondary transfer belt40is not affected by the change in the transfer pressure at the secondary transfer nip P, allowing the printer500to keep the density of the toner images stable during continuous printing. Referring now toFIGS.4A and4B, a description is given of the relation between the transfer pressure and the detected amount of adhered toner. The detected amount of adhered toner is a detected amount of toner adhering to the secondary transfer belt40and may be referred to as a detection value that indicates the amount of adhered toner in the following description. FIGS.4A and4Bare graphs each illustrating the relation between the transfer pressure and the detected amount of adhered toner. Specifically,FIG.4Ais a graph according to a comparative example whereasFIG.4Bis a graph according to the present embodiment. As illustrated inFIGS.4A and4B, the transfer pressure at the secondary transfer nip P is set in four levels of 1 to 4. As the number increases, the transfer pressure increases. The optimum transfer pressure against the secondary transfer belt40is at level 1. InFIG.4A, as the transfer pressure increases from level 1, the transfer rate of the toner image T′ (i.e., test pattern image) from the primary transfer belt20to the secondary transfer belt40decreases. For this reason, the detection value that indicates the amount of adhered toner calculated based on the detection value output from the density sensor43also decreases as indicated by the broken line. In other words,FIG.4Aillustrates a relation satisfying “the target value of the amount of adhered toner>the detection value that indicates the amount of adhered toner.” In the comparative example, the toner images T and T′ are formed on the primary transfer belt20with the values indicated by the broken line being regarded as correct values. 
As a result, the image density varies among the toner images formed at, for example, the transfer pressure level 4. On the other hand, inFIG.4Baccording to the present embodiment, the variable values for the amount of adhered toner corresponding to the transfer pressure levels 2 to 4 are indicated by Δ2, Δ3, and Δ4. Each variable value at the corresponding one of the transfer pressure levels 1 to 4 may be expressed as “the variable value for the amount of adhered toner (Δn)=the target value of the amount of adhered toner−the detection value that indicates the amount of adhered toner,” where n=1 to 4. Since the variable value for the amount of adhered toner at each of the transfer pressure levels 1 to 4 may be considered as a correction value or a correction amount, the detection value that indicates the amount of adhered toner after correction may be expressed as “the detection value that indicates the amount of adhered toner after correction=the detection value that indicates the amount of adhered toner+the variable value for the amount of adhered toner,” which equals the target value of the amount of adhered toner. In the following description, the detection value that indicates the amount of adhered toner after correction may be referred to simply as a corrected detection value, the target value of the amount of adhered toner of the test pattern image T′ may be referred to as a target value M, and the detection value that indicates the amount of adhered toner may be referred to as a detection value Mn. The detection value Mn based on the test pattern image T′ differs between the transfer pressure levels.
When the pressure values at the transfer pressure levels 1, 2, 3, and 4 are represented by T1, T2, T3, and T4, respectively, the actual detection values that indicate the amounts of toner of the test pattern image T′ adhering to the secondary transfer belt40at the transfer pressure levels 1, 2, 3, and 4 are respectively represented by M1, M2, M3, and M4 (≠M). The difference between the target value M and the actual detection values M1, M2, M3, and M4, in other words, the variable values Δ1, Δ2, Δ3, and Δ4 at the transfer pressure levels 1, 2, 3, and 4 may be expressed as Δ1=M−M1, Δ2=M−M2, Δ3=M−M3, and Δ4=M−M4, respectively. In the comparative example, the density sensor43detects the detection values M1 to M4 (≠M) that vary depending on the pressure values T1 to T4 at the transfer pressure levels 1 to 4, respectively. The image-forming conditions are adjusted so that the detection values M1 to M4 get close to the target value M. By contrast, in the present embodiment, the variable values Δ1 to Δ4 are estimated for the detection values M1 to M4 to correct the detection values M1 to M4 based on the variable values Δ1 to Δ4. The corrected detection values M1+Δ1, M2+Δ2, M3+Δ3, and M4+Δ4, each corresponding to the target value, are set as the detection values that indicate the amounts of adhered toner at the transfer pressure levels 1, 2, 3, and 4, respectively. The image-forming conditions are adjusted so as to get close to the corrected detection values reflecting the variable values for the amount of adhered toner. Accordingly, in the printer500according to the present embodiment, a stable amount of toner of the test pattern image T′ adheres to the secondary transfer belt40during continuous printing regardless of the transfer pressure. In the configuration described above according to the present embodiment, the detection value that indicates the amount of adhered toner decreases as the transfer pressure increases. 
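Under the notation above (target value M, detection values Mn, variable values Δn), the correction reduces to a few lines of arithmetic. The following sketch uses illustrative numbers that are assumptions, not values taken from the embodiment.

```python
# Sketch of the correction: the variable value Delta_n at each transfer-pressure
# level is the gap between the target adhesion amount M and the detection value
# M_n; adding it back yields a corrected value equal to the target.
# All numeric values below are illustrative assumptions.

TARGET_M = 0.45  # target amount of adhered toner, illustrative units

# Detection values M1..M4 at transfer-pressure levels 1..4 (level 1 is optimum).
detected = {1: 0.45, 2: 0.42, 3: 0.38, 4: 0.33}

def variable_value(level: int) -> float:
    """Delta_n = M - M_n for the given transfer-pressure level."""
    return TARGET_M - detected[level]

def corrected_detection(level: int) -> float:
    """Corrected detection value = M_n + Delta_n, which equals the target M."""
    return detected[level] + variable_value(level)
```

With these assumed numbers, the corrected detection value at every level coincides with the target M, which is exactly why the image-forming conditions adjusted against the corrected values stay stable regardless of the transfer pressure.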
In an alternative configuration, the detection value that indicates the amount of adhered toner may increase as the transfer pressure increases. The relation between the transfer pressure and the detection value that indicates the amount of adhered toner may vary depending on the combination of the materials of the primary transfer belt20and the secondary transfer belt40. In the present embodiment, the primary transfer belt20is an elastic belt whereas the secondary transfer belt40is a PI film. In this case, the influences of minute gap discharge that is generated between the elastic belt and the toner image and between the PI film and the toner image tend to increase as the transfer pressure increases. Thus, the detection value that indicates the amount of adhered toner may decrease as the transfer pressure increases. Referring now toFIGS.5to7, a description is given of operation and processing according to an embodiment of the present disclosure. FIG.5is a block diagram of elements related to the adjustment of the image-forming conditions, according to the present embodiment. FIG.6is a block diagram illustrating a hardware configuration of a controller400according to the present embodiment. FIG.7is a flowchart of a process to correct the detection value that indicates the amount of adhered toner, according to the present embodiment. InFIG.5, the controller400includes a calculation unit401, a correction unit402, and an image-forming-condition adjustment unit403. The density sensor43and a memory404are connected to the calculation unit401. Based on the target value of the amount of adhered toner stored in the memory404and the detection value that indicates the amount of adhered toner provided by the density sensor43, the calculation unit401calculates the variable value for the amount of toner of the toner image T′ adhering to the secondary transfer belt40that varies with the change in the transfer pressure. 
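The role of the calculation unit401described above, reading the target value from the memory404and the detection value from the density sensor43, can be modeled minimally as follows. The class names and the dictionary-backed memory are stand-ins of my own, not the disclosed implementation.

```python
# Minimal model of the calculation unit 401: it obtains the target adhesion
# amount from a memory store (modeled as a dict, an assumption) and computes
# the variable value from a sensor detection value.

class Memory:
    """Stand-in for the memory 404 holding the target adhesion amount."""
    def __init__(self, target: float):
        self._data = {"target_adhesion": target}

    def target_adhesion(self) -> float:
        return self._data["target_adhesion"]

class CalculationUnit:
    """Stand-in for the calculation unit 401."""
    def __init__(self, memory: Memory):
        self.memory = memory

    def variable_value(self, detected: float) -> float:
        # Variable value = target - detection, per the embodiment's formula.
        return self.memory.target_adhesion() - detected

unit = CalculationUnit(Memory(target=0.45))
delta = unit.variable_value(detected=0.40)  # ≈ 0.05 under these assumed numbers
```

A real implementation would also consult the media type and, where the temperature-humidity sensor200is connected, the environment, but the subtraction above is the core of the computation.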
In addition to the density sensor43, the temperature-humidity sensor200that detects the temperature and humidity inside the body of the printer500may be connected to the calculation unit401. By acquiring the temperature and humidity information inside the body of the printer500, the calculation unit401more appropriately calculates the variable value for the amount of adhered toner. The correction unit402reflects (for example, adds) the variable value for the amount of adhered toner calculated by the calculation unit401in the detection value that indicates the amount of adhered toner, thus correcting the detection value that indicates the amount of adhered toner. The image-forming-condition adjustment unit403adjusts the image-forming conditions for the toner-image forming device1, based on the detection value that indicates the amount of adhered toner after correction, in other words, the detection value that indicates the amount of adhered toner corrected by the correction unit402. The memory404stores information that is used by the calculation unit401to calculate the variable value for the amount of adhered toner. For example, the memory404stores a correction value table storing information on the type such as the thickness and surface roughness of the recording medium S in addition to the target value of the amount of adhered toner. In a case where the temperature-humidity sensor200is connected to the calculation unit401, the correction value table may be a table taking the temperature and humidity information into consideration. For example, the table may indicate that the correction value of the amount of adhered toner is greater in the correction amount for a high-temperature and high-humidity environment than for a normal-temperature and normal-humidity environment. The memory404may be disposed inside the controller400. Referring now toFIG.6, a description is given of the hardware configuration of the controller400. 
Components may be optionally added to or removed from the hardware configuration illustrated inFIG.6. The controller400includes a central processing unit (CPU)4001, a read only memory (ROM)4002, a random access memory (RAM)4003, a hard disk drive (HDD)/solid state drive (SSD)4004, an input/output (I/O) interface4005, a communication interface4006, and a bus line4007. The CPU4001controls the entire printer500serving as an image forming apparatus. The CPU4001is an arithmetic device that reads programs or data stored in the ROM4002onto the RAM4003and executes processing to implement the functions of the printer500. The ROM4002is a nonvolatile memory that retains the programs or data even when the power is turned off. The RAM4003is a volatile memory that is used as, for example, a work area for the CPU4001. The HDD/SSD4004controls reading or writing of various kinds of data under the control of the CPU4001. The above-described functions of the memory404are implemented by the HDD/SSD4004. The I/O interface4005is an interface through which data are sent to and received from external devices. Examples of the external devices include, but are not limited to, motors and sensors such as the density sensor43and the temperature-humidity sensor200included in the printer500and the heater of the fixing device6. The communication interface4006is an interface that performs communication (connection) with a device that performs data processing to input data to the printer500, such as a digital front end (DFE), via a communication network. The bus line4007is, for example, an address bus or a data bus to electrically connect the components described above and transmit, for example, address signals, data signals, and various control signals. The CPU4001, the ROM4002, the RAM4003, the HDD/SSD4004, the I/O interface4005, and the communication interface4006are connected with each other via the bus line4007.
In the configuration described above, the printer500performs a process illustrated inFIG.7to adjust the image-forming conditions of the printer500based on the toner image T′ (i.e., test pattern image). Specifically, in step S1, the printer500forms the toner image T′ with, for example, the toner-image forming device1. The toner image T′ that is thus formed in step S1is transferred onto the secondary transfer belt40through the primary transfer and the secondary transfer and passes immediately below the density sensor43. At this time, in step S2, the density sensor43detects the toner image T′ passing immediately below the density sensor43and outputs the detection value that indicates the amount of adhered toner to the calculation unit401. Subsequently, in step S3, the calculation unit401calculates the variable value for the amount of adhered toner based on the detection value that indicates the amount of adhered toner provided by the density sensor43and the transfer pressure at that time. For example, when the toner image T′ is secondarily transferred onto the secondary transfer belt40at an inappropriate transfer pressure, a difference between the target value of the amount of adhered toner and the detection value that indicates the amount of adhered toner is calculated as the variable value for the amount of adhered toner at the transfer pressure. Subsequently, in step S4, the controller reflects (adds, in the present embodiment) the variable value for the amount of adhered toner thus calculated in step S3in the detection value that indicates the amount of adhered toner, thus correcting the detection value that indicates the amount of adhered toner. The corrected detection value that is thus obtained in step S4is notified to, for example, the toner-image forming device1via the image-forming-condition adjustment unit403. The toner-image forming device1regards the notified detection value that indicates the amount of adhered toner as a correct value. 
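The correction flow of steps S1 through S4 described above can be sketched as follows. This is a minimal illustration under assumptions: the function names and the example amounts of adhered toner are hypothetical and not taken from the disclosure. The variable value is simply the difference between the target value and the detection value obtained at the current transfer pressure, and the correction adds that difference back to the detection value.

```python
def calculate_variable_value(detected_amount, target_amount):
    # Step S3: the deviation between the target value of the amount of
    # adhered toner and the detection value, attributed to the transfer
    # pressure in effect when the test pattern was transferred.
    return target_amount - detected_amount

def correct_detection_value(detected_amount, variable_value):
    # Step S4: reflect (add) the variable value in the detection value,
    # yielding the value that would be detected at an appropriate
    # transfer pressure.
    return detected_amount + variable_value

# Hypothetical example: target 0.45 mg/cm^2, but 0.40 mg/cm^2 detected
# at the current transfer pressure.
variable = calculate_variable_value(0.40, 0.45)
corrected = correct_detection_value(0.40, variable)
```

With these definitions, a detection value obtained at an inappropriate transfer pressure is corrected toward the target value, so the image-forming-condition adjustment works from a value that stays constant regardless of the transfer pressure.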
The image-forming-condition adjustment unit403adjusts the image-forming conditions to keep the image density stable. As described above, according to the present embodiment, the printer500includes the toner-image forming device1, the primary transfer device2, the secondary transfer device4, the pressure applier46, the density sensor43, and the controller400. The toner-image forming device1includes the photoconductors11a,11b,11c, and11dto form the toner image T′ on each of the photoconductors11a,11b,11c, and11d. The primary transfer device2includes the primary transfer belt20to transfer the toner image T′ from each of the photoconductors11a,11b,11c, and11donto the primary transfer belt20. The secondary transfer device4includes the secondary transfer belt40to transfer the toner image T′ from the primary transfer belt20onto the secondary transfer belt40. The pressure applier46changes a transfer pressure of the secondary transfer belt40against the primary transfer belt20. The density sensor43detects the toner image T′ transferred onto the secondary transfer belt40and outputs the detection value that indicates the amount of adhered toner, which is a detection value that indicates the amount of toner of the toner image T′ adhering to the secondary transfer belt40. The controller400sets an image-forming condition of the toner image T′ in the toner-image forming device1based on the detection value. The controller400includes the calculation unit401, the correction unit402, and the image-forming-condition adjustment unit403. The calculation unit401calculates the variable value for the amount of toner of the toner image T′ adhering to the secondary transfer belt40, based on the detection value and the target value of the amount of adhered toner, which is a target value of the amount of toner of the toner image T′ to be transferred and adhere to the secondary transfer belt40. 
The variable value for the amount of toner of the toner image T′ adhering to the secondary transfer belt40represents how much the amount of toner of the toner image T′ adhering to the secondary transfer belt40varies with a change in the transfer pressure. The correction unit402reflects the variable value calculated by the calculation unit401in the detection value to correct the detection value. The image-forming-condition adjustment unit403adjusts the image-forming condition based on the detection value corrected by the correction unit402. Thus, the printer500adjusts the image-forming condition based on the detection value corrected to be a constant value regardless of the transfer pressure, to keep the image density stable. According to one aspect of the present disclosure, the image forming apparatus acquires a stable image density regardless of the transfer pressure. The above-described embodiments are illustrative and do not limit the present invention. Thus, numerous additional modifications and variations are possible in light of the above teachings. For example, elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other within the scope of the present invention. Any one of the above-described operations may be performed in various other ways, for example, in an order different from the one described above. The functionality of the elements disclosed herein may be implemented using circuitry or processing circuitry which includes general purpose processors, special purpose processors, integrated circuits, application specific integrated circuits (ASICs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), conventional circuitry and/or combinations thereof which are configured or programmed to perform the disclosed functionality. Processors are considered processing circuitry or circuitry as they include transistors and other circuitry therein. 
In the disclosure, the circuitry, units, or means are hardware that carry out or are programmed to perform the recited functionality. The hardware may be any hardware disclosed herein or otherwise known which is programmed or configured to carry out the recited functionality. When the hardware is a processor which may be considered a type of circuitry, the circuitry, means, or units are a combination of hardware and software, the software being used to configure the hardware and/or processor.
The accompanying drawings are intended to depict embodiments of the present disclosure and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted. DETAILED DESCRIPTION It will be understood that if an element or layer is referred to as being “on,” “against,” “connected to” or “coupled to” another element or layer, then it can be directly on, against, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, if an element is referred to as being “directly on,” “directly connected to” or “directly coupled to” another element or layer, then there are no intervening elements or layers present. Like numbers refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, a term such as “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors herein interpreted accordingly. The terminology used herein is for describing particular embodiments and examples and is not intended to be limiting of exemplary embodiments of this disclosure. 
As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “includes” and/or “including,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Referring now to the drawings, embodiments of the present disclosure are described below. In the drawings for explaining the following embodiments, the same reference codes are allocated to elements (members or components) having the same function or shape and redundant descriptions thereof are omitted below. Descriptions are given of an image reading device and an image forming apparatus according to an embodiment of the present disclosure, with reference to the following figures. In the present embodiment, the image reading device and the image forming apparatus achieve an abnormality detection function, for example, using a reader such as an inline sensor. In short, the image reading device according to the present embodiment and the image forming apparatus incorporating the image reading device scan a print sample having an actual defective image and set the threshold value for abnormality determination using the defective image, so as to achieve an abnormality image detection function at the level expected by a user in actual detection. Descriptions are given of the features of the image reading device and the image forming apparatus including the image reading device according to the present disclosure, with reference to the drawings. FIG.1is a diagram illustrating a schematic view of a configuration of an image forming apparatus100according to the present embodiment. 
An image forming apparatus100includes an image forming device110, a medium conveyor120, an image reader130, and a controller150. The controller150controls the overall operation of the image forming apparatus100. FIG.2is a control block diagram of the image forming apparatus100according to the present disclosure. As illustrated inFIG.2, the controller150is divided into blocks by function and is connected to a sheet feeding unit121, the image forming device110, an abnormality detection unit40, an operation unit24, a display23, a communication unit1400, a storage unit1500, and a purge processing unit50. The controller150includes devices such as a central processing unit (CPU) and a random access memory (RAM), reads various programs from the storage unit1500, and controls each unit. Each of the operation unit24and the display23is a user interface mounted on the top of the image forming apparatus100illustrated inFIG.1. The operation unit24generates an operation signal in accordance with an operation by a user (manual instruction) and outputs the operation signal to the controller150. The operation unit24may include, e.g., a keypad and a touch panel integrally formed with the display23. The display23displays an operation screen in accordance with an instruction from the controller150. The display23may include, e.g., a liquid crystal display (LCD) or an organic electro luminescence display (OELD). The communication unit1400transmits and receives data to and from an external device connected to a communication network. The storage unit1500stores, e.g., a program readable by the controller150and data used at the time of executing the program. The storage unit1500may include, e.g., a hard disk and a nonvolatile semiconductor memory. The sheet feeding unit121includes multiple sheet feed trays121A and121B, each containing sheets specified in a job. 
Each sheet is fed from a corresponding one of the multiple sheet feed trays121A and121B to supply the sheet to the image forming device110. The abnormality detection unit40includes the image reader130, an analyzing unit42, and a defective image determination unit43. The abnormality detection unit40reads an image by the image reader130, analyzes the image by the analyzing unit42, and determines whether the image has a defect, in other words, whether the image has image abnormality, by the defective image determination unit43. An image reading device500includes, e.g., the image reader130and the defective image determination unit43included in the abnormality detection unit40, the operation unit24, and the controller150. The image forming device110inFIG.1includes photoconductor drums112for forming latent images corresponding to images of respective colors. To be more specific, the photoconductor drums112are the photoconductor drums112Y,112M,112C, and112K disposed so as to correspond to an image forming process using toners of yellow (Y), magenta (M), cyan (C), and black (K), which are image forming materials (for example, toners) of the respective colors. The photoconductor drums112Y,112M,112C, and112K are disposed along an intermediate transfer belt111that is an endless belt included in a movement assembly. The intermediate transfer belt111is wound around at least one drive roller and a plurality of driven rollers, and moves between a primary transfer position where an image (toner image) developed on the photoconductor drum112(i.e., photoconductor drums112Y,112M,112C, and112K) is transferred and a secondary transfer position where the image (toner image) is transferred to the sheet S. A transfer device113is disposed at the secondary transfer position. The transfer device113includes a transfer roller113aand a counter roller113bthat is disposed facing the transfer roller113a. 
In the transfer device113, the toner image is transferred from the intermediate transfer belt111to the sheet S to form an image at a predetermined position (i.e., image forming position) on the sheet S. A gap is provided between the transfer roller113aand the counter roller113b, so that the intermediate transfer belt111and the sheet S pass through the gap while being nipped between the transfer roller113aand the counter roller113b. An image is transferred onto the sheet S while the sheet S is nipped in the gap between the transfer roller113aand the counter roller113band conveyed in the conveyance direction of the sheet S (sub-scanning direction). The medium conveyor120includes the sheet feeding unit121(sheet feed trays121A and121B), a conveyance passage122, a fixing roller pair123, a conveyance passage switcher124, and a reversal passage125. Each of the sheet feed trays121A and121B contains the sheet S (sheets S). The conveyance passage122is defined by multiple roller pairs to convey the sheet S. The fixing roller pair123is disposed downstream from the transfer device113in the conveyance direction of the sheet S. When the image forming process is performed, under the predetermined control process by the controller150, the sheet S loaded in the sheet feed tray121A is separated by, e.g., a pickup roller and conveyed along the conveyance passage122. Then, the sheet S reaches the transfer device113. As the sheet S reaches the transfer device113, the transfer process is performed. That is, the sheet S is conveyed in the predetermined conveyance direction of the sheet S while being nipped between the surface of the intermediate transfer belt111and the counter roller113b. The transfer roller113abiases (presses) the intermediate transfer belt111toward the counter roller113b. 
When the sheet S passes between the intermediate transfer belt111and the counter roller113b, an image forming material on the surface of the intermediate transfer belt111is transferred onto the sheet S. In this transfer process, an image is formed on one side (first face) of the sheet S. The sheet S having the image on the first face is further conveyed, so that the image is fixed to the sheet S by the fixing roller pair123. Then, the sheet S is conveyed to the conveyance passage switcher124disposed downstream from fixing roller pair123in the conveyance direction of the sheet S. Then, the travel direction of the sheet S is reversed in the conveyance passage switcher124. The sheet S is then conveyed to the reversal passage125. Thereafter, the sheet S is conveyed again to the transfer position of the transfer roller113aso that the image formed on the intermediate transfer belt111is transferred onto the second face of the sheet S. The sheet S having the image on the second face is further conveyed, so that the image on the second face of the sheet S is fixed to the sheet S by the fixing roller pair123. Then, the sheet S is conveyed to the image reader130disposed downstream from the fixing roller pair123in the conveyance direction of the sheet S. The image reader130includes readers130aand130b. The reader130areads the first face of the sheet S. The reader130breads the second face of the sheet S. The sheet S that has passed through the image reader130is ejected to a sheet ejection unit126including multiple sheet ejection trays126A and126B. To be more specific, the sheet S is ejected to a corresponding one of the sheet ejection trays126A and126B. FIG.3is a diagram illustrating an example of a configuration of a reader of an image reader130included in the image forming apparatus100. As illustrated inFIG.3, the reader130bincludes a reading unit710and a line image sensor. The reading unit710irradiates a sheet S with light when the sheet S passes through a reading position. 
The line image sensor includes multiple imaging elements725that perform photoelectric conversion for each pixel. The imaging elements725are arranged in a one-dimensional array along the width direction of the sheet S. The reader130brepeatedly performs a reading operation of an image for one line extending in the width direction in accordance with a passing operation of the sheet S that passes the reading position, so as to read the image printed on the sheet S as a two-dimensional image. After this operation, the analyzing unit42of the abnormality detection unit40analyzes the image, and then the defective image determination unit43of the abnormality detection unit40determines whether the image is a defective image. Each of the multiple imaging elements725is an optical sensor that performs a reading operation on an image formed on the sheet S at the reading position. The background switching revolver705is disposed at a position facing the reader130bacross the conveyance passage to reflect irradiation light with which the sheet S is irradiated when the image on the sheet S is read. The reading unit710includes an exposure glass723disposed facing the background switching revolver705. The exposure glass723transmits light emitted from the reading unit710and reflected light returning after the emitted light is reflected by the background switching revolver705or the sheet S. Note that the reader130ahas a structure substantially identical to that of the reader130band includes the reading unit710and the imaging elements725. Different from the reader130b, the reading unit710and the imaging elements725of the reader130aare disposed vertically opposite with respect to the background switching revolver705across the conveyance passage. To be more specific, the background switching revolver705is disposed above the conveyance passage and the reading unit710and the imaging elements725of the reader130aare disposed below the conveyance passage. 
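The line-by-line reading operation described above, in which one-line reads in the width (main-scanning) direction accumulate into a two-dimensional image as the sheet moves in the conveyance (sub-scanning) direction, can be sketched as follows; `read_sheet` and the dummy line source are hypothetical names for illustration, not part of the disclosure.

```python
def read_sheet(read_line, num_lines):
    # Each call to read_line returns one row of pixel values across the
    # sheet width, as produced by the one-dimensional array of imaging
    # elements; stacking successive lines as the sheet passes the
    # reading position yields a two-dimensional image.
    return [read_line(i) for i in range(num_lines)]

# Dummy 8-pixel-wide line source: line i reads as a row of value i.
image = read_sheet(lambda i: [i] * 8, num_lines=4)
# image is 4 lines (sub-scanning direction) by 8 pixels (main-scanning
# direction), ready for the analyzing unit.
```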
The image forming apparatus100illustrated inFIG.1may be applied to, for example, an image forming system10illustrated inFIG.4. FIG.4is a diagram illustrating a configuration of the image forming system10including the image forming apparatus100ofFIG.1. The image forming system10includes the image forming apparatus100, an inline sensor unit12, a sheet ejection unit13, and a sheet feeding unit14. The image forming apparatus100forms an image on a sheet fed from the sheet feeding unit14and ejects the sheet toward the inline sensor unit12. The inline sensor unit12is disposed downstream from the image forming apparatus100in the conveyance direction of the sheet S to inspect the sheet S ejected from the image forming apparatus100. The sheet ejection unit13is disposed downstream from the inline sensor unit12in the conveyance direction of the sheet S to receive the sheet that has passed through the inline sensor unit12and sequentially stack multiple sheets S ejected from the inline sensor unit12. In the image forming system10illustrated inFIG.4, the sheet feeding unit14is disposed upstream from the image forming apparatus100in the conveyance direction of the sheet S to contain a large number of sheets to be fed to the image forming apparatus100. In the image forming system10having such a configuration, abnormality detection is performed on an image read by the inline sensor unit12. An image read from an automatic document feeder (ADF) or a scanner11is used for setting a threshold value of the abnormality detection. Further, the image forming apparatus100includes an operation unit (control panel)24having a display23for setting and displaying a threshold setting mode for defective image detection that is described below. 
Further, the configuration of the image forming apparatus100may be applied to an image forming system1that is not provided with a scanner mounted on the housing of the image forming apparatus100, as illustrated inFIG.5. FIG.5is a block diagram illustrating an example of a hardware configuration of the image forming system1including the image forming apparatus100on which a scanner is not mounted. As illustrated inFIG.5, the image forming system1includes the image forming apparatus100, a medium position detection device200, and a stacker300. The image forming apparatus100includes an operation unit101that is similar to the operation unit24illustrated inFIG.4, an image forming device, a transfer belt, a secondary transfer roller, a sheet feeding device, a conveyance roller pair, a fixing roller, and a reversal passage provided in an image forming apparatus that is similar to the image forming apparatus100illustrated inFIG.1. Even in such an apparatus (image forming apparatus100), the threshold value for defective image detection may be set based on, for example, an image read by another apparatus as illustrated inFIG.5, so that the read image can be used to determine the threshold value. Next, a description is given of a detection threshold setting mode of defective image detection according to the present embodiment. FIG.6is a diagram illustrating an example of a screen set in a detection threshold setting mode of a defective image (detection threshold setting screen) according to an embodiment of the present disclosure. FIG.6illustrates an example of a detection threshold setting screen displayed in a detection threshold setting mode of defective image detection on the operation unit24(display23) illustrated inFIG.4, the operation unit101illustrated inFIG.5, or the display screen of a personal computer (PC)15. The PC15is a typical information processing device that is electrically connected to the image forming apparatus100. 
As illustrated inFIG.6, a detection threshold setting screen501includes a read image displaying area502and a defective image threshold setting area503. The read image displaying area502displays an image read by the image reader130. The defective image threshold setting area503is an area to set the defective image threshold value for determining that the read image has a defective image portion. In the detection threshold setting screen501, defective image information and defect type information are set based on the image on a sheet read by the image reader130. The defective image information is, for example, an image read by the image reader130. The defect type information is, for example, the type of the defective image portion included in the image and the threshold or rank indicating the degree of abnormality. Further, as illustrated inFIG.6, the defective image threshold setting area503includes a selection area5031, a determination result area5032, and a registration content display area5033. The selection area5031displays a defective image portion selected from the read image. The determination result area5032displays the determination result indicating the degree of abnormality of the selected defective image portion. The registration content display area5033associates the type of the selected defective image portion with the rank indicating the threshold value of the degree of abnormality. For example, in response to a touch operation by a user on the operation panel, the controller150displays an area of a predetermined range including the position on the screen at which the touch operation is received, in the selection area5031. InFIG.6, an area X is selected as a defective image portion by the user, in the image displayed in the read image displaying area502and is displayed in the selection area5031. 
Further, the controller150determines the type and the degree (level) of abnormality of the defective image portion displayed in the selection area5031illustrated inFIG.6. For example, the controller150determines whether the abnormality of the defective image portion is any of a white spot, a black spot, or a white vertical streak. The determination method may analyze a defective image portion using a known image analysis technique. Then, when the result of the analysis satisfies a condition such as a predetermined threshold value specified for each type, the determination method may determine that the image is a defective image portion of the type. In addition, the degree of abnormality may be determined in stages according to the rate of deviation from the predetermined threshold value. For example, as the rate of deviation from the predetermined threshold value increases, the degree of abnormality of the type becomes greater (higher). The controller150displays the result of the determination in the determination result area5032. Then, the controller150stores the type of the defective image portion displayed in the determination result area5032and the degree of abnormality in association with each other in the storage unit1500such as a memory. The registration content display area5033inFIG.6indicates that the controller150stores the type of the abnormal image portion “white spot” and the degree of abnormality “rank3” in association with each other and registers them in the image forming apparatus100. The registration content display area5033inFIG.6also displays the types of defective image portions registered in the past and the degrees of image abnormality in a list format. Since a defective image to be detected differs depending on a user, as described above, an image read by the image reader130is displayed in the read image displaying area502ofFIG.6, and a defective image portion is selected from the area to be registered. 
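The staged determination of the degree of abnormality by the rate of deviation from a per-type threshold might be sketched as below. The defect types follow the examples above, but the threshold values, the rank boundaries, and the function name are illustrative assumptions, not values from the disclosure.

```python
# Illustrative per-type analysis thresholds (assumed values).
TYPE_THRESHOLDS = {
    "white spot": 0.10,
    "black spot": 0.10,
    "white vertical streak": 0.05,
}

def determine_rank(defect_type, measured_value):
    # Return the degree (rank) of abnormality of a defective image
    # portion, determined in stages by the rate of deviation from the
    # predetermined threshold specified for that defect type.
    threshold = TYPE_THRESHOLDS[defect_type]
    if measured_value <= threshold:
        return 0  # condition not satisfied: not a defective image portion
    deviation_rate = (measured_value - threshold) / threshold
    # A larger deviation from the threshold means a higher degree of
    # abnormality (assumed rank boundaries).
    if deviation_rate < 0.5:
        return 1
    if deviation_rate < 1.0:
        return 2
    return 3
```

The controller could then store the returned rank in association with the defect type, as in the registration content display area.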
After the user has selected a defective image portion, the controller150determines the type and rank (level of image abnormality) of the defective image portion, and then registers the defective image portion in the registration content display area5033as an image abnormality list. After the controller150has registered the type and rank of the defective image portion in association with each other, the type and rank of the defective image portion are displayed as the registered combination in the image abnormality list in the registration content display area5033. As the user selects the registered combination, the defective image portion is set. Note that the one or more defective image portions displayed in the image abnormality list need not all be selected from the image of a single print sample. For example, the controller150may store, in a storage unit such as a memory, a defective image portion determined among images of a plurality of print samples read in the past, in association with a rank that corresponds to a threshold value, so that the controller150can read and set the defective image portion, or set a new rank obtained by changing the threshold value of the read rank. As described above, a past defective image portion and its rank can be read and then set or changed, so that a new rank is determined with reference to the previously determined defective image portion. The image abnormality list may be set for each sheet type. For example, an uneven sheet tends to cause unevenness, thin paper tends to cause skew, and thick paper tends to cause shock jitter. In order to address these inconveniences, a user may designate, on the screen, a sheet type having a particularly high frequency of occurrence of a defective image portion, so that the controller150may register the above-described combination with respect to the designated sheet type, in the image abnormality list. 
Next, a description is given of the defective image portion illustrated inFIG.6, with reference toFIGS.7A and7B. FIG.7Ais a diagram illustrating an image read by the image reader130(readers130aand130b). FIG.7Bis a diagram illustrating an image read by the image reader130(readers130aand130b) and defective image portions in the image. As described with reference toFIGS.7A and7B, it is likely that the image read by the image reader130includes various defective image portions such as a white spot601, a vertical white streak602, and a black spot603. Therefore, these defective image portions are to be detected and removed from the image to be output. The thickness, size, and range of such defective image portions, in other words, the thickness, size, and range of a portion to be abnormal (defective) vary depending on the request of a user who uses the image forming apparatus. Further, an image of the defective image portion included in a typical test image that is not generated based on an actual read image may differ from an image that is read actually by the image reader130. Therefore, as described above, the level of image abnormality detection is set based on the actual read image. As a result, a defective image (image abnormality) is detected with accuracy in accordance with a request from each user. FIGS.8A and8Bare flowcharts of respective process procedures of a defective image reading mode in which a defective image is read and a defective image determination mode in which the defective image is determined. FIG.8Ais a flowchart of a defective image reading process executed in a mode of reading a defective image. FIG.8Bis a flowchart of a defective image determining process executed in a mode of determining image abnormality. The flowcharts ofFIGS.8A and8Bare described with reference toFIG.6. 
As illustrated inFIG.8A, in the defective image reading process, the image reader130reads a sheet that a user has set as a reading object having a defective image portion, and outputs the image on the sheet read by the image reader130(step S701). In other words, the controller150causes the image reader130to read the printed portion of the defective image on the sheet and output the image of the sheet having the defective image portion (image abnormality). Subsequently, the controller150displays the read image on the sheet in the read image displaying area502of the detection threshold setting screen501, and then receives the selection of the defective image portion from the user (step S702). The controller150displays the defective image portion selected by the user in the selection area5031of the defective image threshold setting area503. In addition, the controller150associates the type of the selected defective image portion with the rank indicating the degree of image abnormality, and then causes the operation unit24(display23), the operation unit101, or the display screen of the PC15to display the result, so as to receive the user's input in the registration content display area5033(step S703). As a result, the controller150displays the determination result including the degree of image abnormality of the selected defective image portion, in the determination result area5032. The controller150further receives the selection of another defective image portion from the user (step S704). Thereafter, the processing of S702and S703is repeated until an end instruction is received from the user. InFIG.8B, the controller150activates a defective image determination mode for determining a defective image portion based on the type and the rank of the defective image portion set in the defective image threshold setting area503(step S711). 
After step S711, the image reader 130 reads an image on a sheet (step S712), and then the controller 150 determines a defective image portion based on the type and rank of the defective image portion set in the defective image threshold setting area 503 (step S713). FIG. 9 is a flowchart of a process of the image forming apparatus 100 according to a first embodiment of the present disclosure. Now, a detailed description is given of the process of the image forming apparatus 100 with reference to the flowchart of FIG. 9, in connection with the handling of a sheet on which an image abnormality has occurred during the overall operation of the image forming apparatus 100. In the defective image reading mode, when the sheet that is a reading object determined to have a defective image portion is set by a user on a scanner such as the scanner 11 or an ADF, the image reader 130 reads the sheet, and then outputs the image on the sheet read by the scanner 11 or the ADF (step S801). Subsequently, the controller 150 causes the operation unit 24 (display 23), the operation unit 101, or the display screen of the PC 15 to display the read image of the sheet in the read image displaying area 502 of the detection threshold setting screen 501 (step S802), and then receives the selection of the defective image portion from the user (step S803). The controller 150 displays the defective image portion selected by the user in the selection area 5031 of the defective image threshold setting area 503. In addition, the controller 150 associates the type of the selected defective image portion with the rank indicating the degree of image abnormality, and then causes the operation unit 24 (display 23), the operation unit 101, or the display screen of the PC 15 to display the result, so as to receive the user's input in the registration content display area 5033 (step S803).
As a result, the controller 150 displays the determination result, including the degree of image abnormality of the selected defective image portion, in the determination result area 5032. The controller 150 further determines whether there is another defective image portion specified by the user, in other words, whether there is another piece of defect type information (step S804). When there is another piece of defect type information (YES in step S804), the process procedure returns to step S803 to receive the selection of the determination result of each defective image portion. On the other hand, when the controller 150 has processed all pieces of defect type information and has completely registered the defective image portions, in other words, when there is no remaining piece of defect type information (NO in step S804), the process returns to an image forming mode. As illustrated in FIG. 9, the image reader 130 reads the image having a defective image portion in which an image abnormality actually occurs, and the controller 150 sets a determination threshold value corresponding to the type of the defective image portion as the degree of the defective image portion used when the controller 150 determines that the image has an abnormality, according to the input of the user's selection. When reading an image including a defective image portion, an image read using a flatbed scanner or an ADF is used to obtain a highly accurate image. Alternatively, an image read using an inline sensor may be used. The controller 150 extracts the defective image portion to be detected as an image having an abnormality from the image read by the image reader 130. Then, the controller 150 classifies the type of the defective image portion (e.g., vertical streak, white spot, black spot), quantifies the degree of abnormality for each defective image, and causes the storage unit to store the result as a threshold for determining the defective image as a defective image or an image having an abnormality.
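The extract-classify-quantify step at the end of this passage can be illustrated with a small sketch. The classification rule (bounding-box aspect ratio separates vertical streaks from spots) and the severity measure (defective pixel count) are assumptions chosen for illustration; the patent only states that the type is classified and the degree of abnormality quantified.

```python
# Illustrative classification of an extracted defective image portion.
# mask: 2-D list of 0/1 values marking the defective pixels of one portion.

def classify_defect(mask):
    """Return (defect_type, severity) for a binary defect mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        return None, 0  # no defective pixels found
    height = rows[-1] - rows[0] + 1
    width = cols[-1] - cols[0] + 1
    area = sum(sum(row) for row in mask)  # assumed severity measure
    # Assumed rule: a tall, narrow region is a vertical streak; otherwise a spot.
    defect_type = "vertical_streak" if height >= 3 * width else "spot"
    return defect_type, area
```

The returned pair corresponds to the (type, quantified degree) that the controller would store as a determination threshold.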
In the above-described example, a user determines the type of the defective image portion. However, the controller 150 may compare models of various types of defective image portions stored in advance in a storage unit such as a memory with the defective image portion read by the image reader 130, determine that the defective image portion is of a specific type when a predetermined condition is satisfied, and cause the storage unit to store the type of the defective image portion in association with the rank of the defective image portion, based on the determination result. Specifically, when the predetermined condition is satisfied, that is, when a defective image portion of a model and a defective image portion that is actually read at the time of setting have indexes close to each other by a predetermined threshold or more, the indexes including a value representing the image (e.g., a pixel value or a luminance value on the image) and the shape and size of the defective image portion, the controller 150 may determine that the defective image portion of the model is of the same type as the defective image portion actually read. Thereafter, the controller 150 receives, from the user, an input of the abnormality level of the image determined to be a defective image portion of the same type. The abnormality level need not be input by a user. For example, the controller 150 may automatically set the abnormality level in accordance with the indexes including a value representing the image (e.g., a pixel value or a luminance value on the image) and the shape and size of the defective image portion, as in the above description. As illustrated in the flowchart of FIG. 8B, when the type of the defective image portion and the threshold value indicating the level of the defective image portion are set, the defective image determination mode ends and the image forming operation is started again.
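The model-matching variant described above (comparing a read defective portion against stored models by indexes such as luminance, shape, and size) can be sketched as follows. The feature set, the L1 distance metric, the model values, and the `max_distance` threshold are all illustrative assumptions; the patent only requires that sufficiently close indexes map the portion to a known type.

```python
# Hedged sketch of matching a read defective portion against stored defect
# models. Each model holds assumed index values for one defect type.

MODELS = {
    "white_spot":      {"luminance": 240, "height": 4, "width": 4},
    "vertical_streak": {"luminance": 235, "height": 40, "width": 2},
}

def match_defect_type(features, max_distance=30.0):
    """Return the closest model's type, or None if no model is close enough."""
    best_type, best_dist = None, float("inf")
    for name, model in MODELS.items():
        # Assumed similarity: sum of absolute index differences (L1 distance).
        dist = sum(abs(features[key] - model[key]) for key in model)
        if dist < best_dist:
            best_type, best_dist = name, dist
    return best_type if best_dist <= max_distance else None
```

A portion matched this way could then be stored with a user-entered rank, or with an automatically derived one, as the passage describes.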
As described above, in the present embodiment, the user causes the apparatus to read a sheet on which an actual image abnormality has occurred, and the image abnormality for setting a defective image is then determined based on the defective image portion included in the image actually read from the sheet. For example, an image reading device (for example, the image reading device 500) includes an image reader (for example, the image reader 130) and a defective image determination unit (for example, the defective image determination unit 43). The image reader is configured to read an image on a recording medium (for example, the sheet S) to be conveyed. The defective image determination unit is configured to obtain defective image information (for example, an image read by the image reader 130) and defect type information (for example, the type of the defective image portion included in the image, and the threshold or rank indicating the degree of the abnormality) based on the image on the recording medium read by the image reader, and then determine an abnormality of the image on the recording medium. According to such a configuration, the defective image is accurately determined for each user's image data. Typical image reading devices detect abnormality at a level set in advance in each device, or output a test image for resetting the level of the defective image so as to set the abnormality level based on the test image. However, such typical image reading devices handle only constantly occurring defective images, and therefore a defective image at the level at which a user can recognize the abnormality cannot be set as a threshold value at the level of an actual image. Further, a threshold value for determining a defective image portion cannot be set for a defective image such as an image with suddenly generated spots (voids).
However, in the present embodiment, a threshold value for determining a defective image is set using an image in which an abnormality has occurred and that the user has actually read, instead of the test image described above. Therefore, a defective image is accurately determined for each user's image data. In addition, as illustrated in FIG. 6, the defective image determination unit (for example, the defective image determination unit 43) compares the image read by a scanner, that is, an internal or external reading unit electrically connected to the image abnormality determination unit, with the defect type information, and determines whether the image on the recording medium read by the reading unit is a defective image. The circuitry (for example, the controller 150) outputs the determination result on a display, and then sets the defect type information. As a result, an image used for determining a defective image portion is set from a medium such as a sheet that is actually read by the user, and therefore the original image is captured by the scanner (or the ADF). Further, as illustrated in FIGS. 4 and 5, the image reading device further includes an operation unit (for example, the operation unit 24 or the operation unit 101) and the controller 150. The operation unit includes a screen (for example, the detection threshold setting screen 501) to receive an input. A user inputs an instruction through the screen. The defective image determination unit (for example, the defective image determination unit 43) compares the value specified by the user via the screen with the defect type information, and determines whether the image on the recording medium read by the reading unit is a defective image. The controller 150 outputs (displays) the determination result of the abnormality of the image on the recording medium on the screen, and then sets the defect type information.
Due to such a configuration, the defect type information is set in accordance with the user's intention. Further, as illustrated in FIG. 6, the defective image determination unit (for example, the defective image determination unit 43) determines the defective image portion of the image having an abnormality out of the images of the media read by the image reader (for example, the image reader 130), and the controller 150 sets the defect type information about the defective image portion. Due to such a configuration, the defect type information is set for a defective image portion that the user determines to be an image having an abnormality. The image reading device (for example, the image reading device 500) further includes an operation unit (for example, the operation unit 24 or the operation unit 101) and the controller 150. The operation unit includes a screen through which an instruction is received from a user. The defective image determination unit (for example, the defective image determination unit 43) causes the operation unit to display the defective image portion on the screen, and then determines the defective image portion displayed on the screen with a threshold value specified by the user on the screen. The controller 150 sets the defect type information. Due to such a configuration, the user selects the defective image portion on the screen, and then sets the defect type information. Further, the defective image determination unit 43 may set the defect type information based on an image read by an external reader connected via a network. Accordingly, for example, multiple image reading units disposed at locations apart from each other set the defect type information using a user's image shared among the multiple image reading units. Due to such a configuration, the defective image portion is determined based on the same standard, thereby equalizing the determination, that is, making the image quality uniform.
Further, FIG. 10 is a flowchart of a process of the image forming operation of the image forming apparatus 100 according to a second embodiment of the present disclosure. As described in the flowchart of FIG. 10, as the image formation mode is initiated, the defective image determination mode is turned on (step S901). Then, the image forming operation starts (step S902), and the controller 150 causes the image reader to execute reading (step S903). The image reader counts the number of output pages of the recording media, and the defective image determination unit 43 determines whether there is a defective image (step S904). When there is no defective image (NO in step S904), the process returns to step S903 and repeats the processing of step S903 until a defective image is detected. When there is a defective image (YES in step S904), the controller 150 causes the display to display the number of output pages of the recording media including the image determined to be defective (step S905). As a result, in a case in which a sheet having an image abnormality and a normal sheet having no image abnormality are mixed at the destination of ejection (for example, in a case in which the destination of ejection is not switched in time after the image abnormality determination processing), the user can identify later which page has the image abnormality, and a print sample of a defective image is then extracted (step S906). Further, FIG. 11 is a flowchart of a process of the image forming operation of the image forming apparatus 100 according to a third embodiment of the present disclosure. As described in the flowchart of FIG. 11, as the image formation mode is initiated, the defective image determination mode is turned on (step S1001). Then, the image forming operation starts (step S1002), and the controller 150 causes the image reader to execute reading (step S1003). The defective image determination unit 43 determines whether there is a defective image (step S1004).
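The page-tracking behavior of the second embodiment (each output page is read, checked, and the page numbers of defective pages are reported so the user can locate them later) can be sketched as below. The function name and the `is_defective` callback, which stands in for the defective image determination unit, are illustrative assumptions.

```python
# Minimal sketch of counting output pages and collecting the page numbers
# of pages judged defective (cf. steps S903-S905).

def find_defective_pages(pages, is_defective):
    """pages: iterable of page images; returns 1-based defective page numbers."""
    defective_pages = []
    for page_number, page in enumerate(pages, start=1):
        if is_defective(page):  # stands in for the determination unit 43
            defective_pages.append(page_number)
    return defective_pages
```

Displaying the returned page numbers corresponds to the report that lets the user pull defective print samples from a mixed output stack.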
When there is no defective image (NO in step S1004), the process returns to step S1003 and repeats the processing of step S1003 until a defective image is detected. When there is a defective image (YES in step S1004), the controller 150 causes a purge processing unit 50 to eject the sheet having an image with an image abnormality and the sheet having an image without an image abnormality to respective destinations of ejection different from each other (step S1005). Specifically, when there is a defective image (YES in step S1004), a sheet having an image with no image abnormality is ejected to a destination of ejection that is different from the destination of ejection of the sheet having an image with an abnormality. For example, in the image forming apparatus 100 illustrated in FIG. 1, when an image is formed on a sheet, the sheet ejection tray 126B is specified as the destination of ejection of the printed sheet, and the defective image determination unit 43 detects a defective image, the detected print sample having a defective image is ejected to the sheet ejection tray 126A. The user removes the print sample having a defective image from the sheet ejection tray 126A (step S1006). As a result, the output product that is detected to be defective is distinguished from the normal product without an abnormality and is ejected to a destination of ejection different from that of the normal product. Further, in FIG. 6, the image reader may read multiple images, the defective image determination unit 43 may display the defective image portion included in each of the images read by the image reader on the screen, and the controller 150 may set a value designated on the screen by the user as the defect type information for the displayed defective image portion. Due to such a configuration, the defect type information is set using the multiple images specified by the user.
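The purge routing of the third embodiment reduces to a simple per-sheet decision: defective sheets go to one tray, normal sheets to another. The tray identifiers 126A and 126B come from the text; the routing function itself is an illustrative sketch, not the patent's implementation.

```python
# Hedged sketch of the purge routing (cf. step S1005): a sheet judged
# defective is diverted to the purge tray while normal sheets go to the
# user-specified tray.

def route_sheet(has_defect, normal_tray="126B", purge_tray="126A"):
    """Return the destination ejection tray for one printed sheet."""
    return purge_tray if has_defect else normal_tray
```

Keeping the decision per sheet is what lets defective output be separated even when normal and defective sheets are produced in the same job.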
Further, as described with reference to FIG. 6, the defective image determination unit 43 determines the abnormality of the image read by the image reader 130 for each type of recording medium. Thereafter, the controller 150 sets the defect type information for each sheet type based on the determination, since the level or object of the abnormality detection may differ for each sheet type. The present disclosure is not limited to the specific embodiments described above, and numerous additional modifications and variations are possible in light of the teachings within the technical scope of the appended claims. It is therefore to be understood that the disclosure of this patent specification may be practiced otherwise by those skilled in the art than as specifically described herein, and such modifications and alternatives are within the technical scope of the appended claims. Such embodiments and variations thereof are included in the scope and gist of the embodiments of the present disclosure and are included in the embodiments described in the claims and their equivalent scope. The effects described in the embodiments of this disclosure are listed as examples of preferable effects derived from this disclosure, and are therefore not intended to limit the embodiments of this disclosure. The embodiments described above are presented as examples to implement this disclosure and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, replacements, or changes can be made without departing from the gist of the invention. These embodiments and their variations are included in the scope and gist of this disclosure and are included in the scope of the invention recited in the claims and its equivalents. Any one of the above-described operations may be performed in various other ways, for example, in an order different from the one described above.
Each of the functions of the described embodiments may be implemented by one or more processing circuits or circuitry. Processing circuitry includes a programmed processor, as a processor includes circuitry. A processing circuit also includes devices such as an application specific integrated circuit (ASIC), digital signal processor (DSP), field programmable gate array (FPGA), and conventional circuit components arranged to perform the recited functions. | 44,436 |
11860562

DETAILED DESCRIPTION

An exemplary embodiment of the present disclosure is described in detail below with reference to the drawings. An upstream side in a transport direction of recording paper P that is an example of a recording medium may hereinafter be referred to simply as "upstream side". A downstream side in the transport direction may hereinafter be referred to simply as "downstream side". An upstream side in a circulating direction (transport direction) of a transfer belt (belt) (image forming target) 52 may hereinafter be referred to simply as "upstream side". A downstream side in the circulating direction (transport direction) may hereinafter be referred to simply as "downstream side". As illustrated in FIG. 1, an image forming apparatus 10 uses, for example, an electrophotographic system that forms a toner image (an example of an image) on the recording paper P. The image forming apparatus 10 includes an image forming unit 12, a container 14, a transporter 16, and a fixing device 18 in an apparatus body (not illustrated). The components of the image forming apparatus 10 (the image forming unit 12, the container 14, the transporter 16, and the fixing device 18) are described below. In the following description, a width direction (horizontal direction) of the apparatus body is an X direction, an up-and-down direction (vertical direction) of the apparatus body is a Y direction, and a direction orthogonal to the X direction and the Y direction (the direction orthogonal to each drawing sheet) is a Z direction. <Image Forming Unit> The image forming unit 12 has a function of forming toner images on the recording paper P. Specifically, the image forming unit 12 includes first photoconductor units 20, second photoconductor units 30, and a transfer device 50. [Photoconductor Units] As illustrated in FIG. 1, two first photoconductor units 20 and two second photoconductor units 30 are provided. The first photoconductor units 20 and the second photoconductor units 30 are detachable from the apparatus body.
The image forming apparatus 10 of this exemplary embodiment includes first photoconductor units 20Y and 20M for two colors that are yellow (Y) and magenta (M), and second photoconductor units 30C and 30K for two colors that are cyan (C) and black (K). To distinguish yellow (Y), magenta (M), cyan (C), and black (K), the reference numerals of the members may be suffixed with the letters "Y", "M", "C", and "K". Without the color distinction, the letters "Y", "M", "C", and "K" may be omitted. In the transfer device 50 described later, the transfer belt 52 made of an elastic material has two straight portions shaped straight when viewed in the Z direction. The two straight portions are an upper portion 52A and a lower portion 52B. When viewed in the Z direction, the upper portion 52A extends along the X direction, and the lower portion 52B is inclined with respect to the X direction. That is, when viewed in the Z direction, an angle θB (see FIG. 1) between the lower portion 52B and the X direction is an acute angle and is larger than an angle θA (not illustrated) between the upper portion 52A and the X direction. The angle θA is 0° or an acute angle slightly larger than 0°. When viewed in the Z direction, the upper portion 52A and the lower portion 52B are arranged in the Y direction. The term "straight portion" in this specification and in the claims is not limited to a portion shaped completely straight. For example, the upper portion 52A positioned between a steering roller 45 and a loop roller 48 described later is slightly concave at a part pushed by two first photoconductor drums 22 and two first transfer rollers 41, but corresponds to the "straight portion". Similarly, the lower portion 52B positioned between the steering roller 45 and a loop roller 47 is slightly concave at a part pushed by two second photoconductor drums 32 and two first transfer rollers 41, but corresponds to the "straight portion".
The two first photoconductor units 20 face the outer peripheral surface (upper surface) of the upper portion 52A, and are arranged in the X direction along the upper portion 52A. In particular, the two first photoconductor units 20 are arranged so that the flat lower surfaces of support plates 28 of the first photoconductor units 20, described later, are parallel to the outer peripheral surface (upper surface) of the upper portion 52A. The lower surface of the support plate 28 and the outer peripheral surface of the upper portion 52A face each other in the Y direction at a short distance therebetween. Each first photoconductor unit 20 includes the first photoconductor drum 22 that rotates in one direction (e.g., a counterclockwise direction in FIG. 1). Each first photoconductor drum 22 is rotatable about a rotation axis 20X extending in the Z direction. When viewed in the Z direction, a distance (adjacency distance) between the rotation axes 20X of the two first photoconductor units 20 is a first distance 20B. Each first photoconductor unit 20 includes a first charger 24, a first exposer 25, a first developer 26, and a first remover 27 in order from an upstream side in the rotating direction of the first photoconductor drum 22. Each first photoconductor unit 20 includes a pair of support plates 28 spaced away from each other in the Z direction. In FIG. 1, illustration of one support plate 28 is omitted. The first charger 24, the first exposer 25, the first developer 26, and the first remover 27 extend in the Z direction. Both ends of each of the first charger 24, the first exposer 25, the first developer 26, and the first remover 27 in the Z direction are supported by the pair of support plates 28. Relative movement of the pair of support plates 28 is restricted. As illustrated in FIG. 1, the dimension of each first photoconductor unit 20 in the X direction is a horizontal dimension 20L.
The two second photoconductor units 30 face the outer peripheral surface (lower surface) of the lower portion 52B, and are arranged along the lower portion 52B. Each second photoconductor unit 30 includes the second photoconductor drum 32 that rotates in one direction (e.g., a counterclockwise direction in FIG. 1). Each second photoconductor drum 32 is rotatable about a rotation axis 30X extending in the Z direction. When viewed in the Z direction, a distance (adjacency distance) between the rotation axes 30X of the two second photoconductor units 30 is a second distance 30B. Each second photoconductor unit 30 includes a second charger 34, a second exposer 35, a second developer 36, and a second remover 37 in order from an upstream side in the rotating direction of the second photoconductor drum 32. Each second photoconductor unit 30 includes a pair of second support plates 38 spaced away from each other in the Z direction. In FIG. 1, illustration of one second support plate 38 is omitted. The second charger 34, the second exposer 35, the second developer 36, and the second remover 37 extend in the Z direction. Both ends of each of the second charger 34, the second exposer 35, the second developer 36, and the second remover 37 in the Z direction are supported by the pair of second support plates 38. Relative movement of the pair of second support plates 38 is restricted. As illustrated in FIG. 1, the dimension of each second photoconductor unit 30 in the X direction is a horizontal dimension 30L. The term "image former" in this specification and in the claims refers to a component that causes a toner or ink to adhere to the image forming target (e.g., the transfer belt 52). That is, the first photoconductor drum 22 of the first photoconductor unit 20 corresponds to the "image former", and the second photoconductor drum 32 of the second photoconductor unit 30 corresponds to the "image former". That is, the first charger 24, the first exposer 25, the first developer 26, and the first remover 27 do not correspond to the "image former".
Similarly, the second charger 34, the second exposer 35, the second developer 36, and the second remover 37 do not correspond to the "image former". When the image forming apparatus 10 uses an ink jet system as described later, an ink jet head corresponds to the "image former". As illustrated in FIG. 1, the first developer 26 includes a developing roller 26A, a collection auger 26B, a supply auger 26C, and a stirring auger 26D. Similarly, the second developer 36 includes a developing roller 36A, a collection auger 36B, a supply auger 36C, and a stirring auger 36D. The supply auger 26C and the stirring auger 26D are arranged in the X direction. The supply auger 36C and the stirring auger 36D are arranged in the Y direction. Therefore, the horizontal dimension of the second developer 36 is smaller than the horizontal dimension of the first developer 26. Thus, the horizontal dimension 30L is smaller than the horizontal dimension 20L. As illustrated in FIG. 1, the two first photoconductor units 20 are arranged in the X direction when viewed in the Z direction. That is, the two first photoconductor units 20 are not arranged in the Y direction. When viewed in the Z direction, the two second photoconductor units 30 are partly arranged in the Y direction. In FIG. 1, a horizontal dimension 30V is the dimension of those parts of the two second photoconductor units 30 in the X direction. In FIG. 1, a horizontal dimension 30E is the horizontal dimension of a portion including the two second photoconductor units 30. In FIG. 1, a horizontal dimension 30G is the horizontal dimension of a portion including the lower portion 52B and the two second photoconductor units 30. In each first photoconductor unit 20, the first charger 24 charges the outer peripheral surface of the first photoconductor drum 22. The first exposer 25 exposes the charged outer peripheral surface of the first photoconductor drum 22 to light to form an electrostatic latent image on the outer peripheral surface of the first photoconductor drum 22.
The first developer 26 develops the formed electrostatic latent image to form a toner image. After the toner image is transferred onto the transfer belt 52, the first remover 27 removes the residual toner on the outer peripheral surface of the first photoconductor drum 22. In each second photoconductor unit 30, the second charger 34 charges the outer peripheral surface of the second photoconductor drum 32. The second exposer 35 exposes the charged outer peripheral surface of the second photoconductor drum 32 to light to form an electrostatic latent image on the outer peripheral surface of the second photoconductor drum 32. The second developer 36 develops the formed electrostatic latent image to form a toner image. After the toner image is transferred onto the transfer belt 52, the second remover 37 removes the residual toner on the outer peripheral surface of the second photoconductor drum 32. [Transfer Device] As illustrated in FIG. 1, the transfer device 50 includes four first transfer rollers 41 that are examples of a first transferer, the transfer belt 52 that is an example of an intermediate transferer, and a transfer barrel 60 that is an example of a second transferer. In the transfer device 50, the toner images formed on the outer peripheral surfaces of the first photoconductor drums 22 are firstly transferred onto the transfer belt 52 while being laid over one another, and the laid toner images are secondly transferred onto the recording paper P. (First Transfer Rollers) As illustrated in FIG. 1, each first transfer roller 41 facing the upper portion 52A transfers the toner image formed on the outer peripheral surface of each first photoconductor drum 22 onto the outer peripheral surface of the transfer belt 52 at a first transfer position T1 between the first photoconductor drum 22 and the first transfer roller 41.
Each first transfer roller 41 facing the lower portion 52B transfers the toner image formed on the outer peripheral surface of each second photoconductor drum 32 onto the outer peripheral surface of the transfer belt 52 at a first transfer position T1 between the second photoconductor drum 32 and the first transfer roller 41. A distance between the first transfer positions T1 of the two first photoconductor drums 22 corresponds to the first distance 20B. Similarly, a distance between the first transfer positions T1 of the two second photoconductor drums 32 corresponds to the second distance 30B. In this exemplary embodiment, the toner image formed on the outer peripheral surface of the first photoconductor drum 22 is transferred onto the outer peripheral surface of the transfer belt 52 at the first transfer position T1 by applying a first transfer voltage between the first transfer roller 41 and the first photoconductor drum 22. Similarly, the toner image formed on the outer peripheral surface of the second photoconductor drum 32 is transferred onto the outer peripheral surface of the transfer belt 52 at the first transfer position T1 by applying the first transfer voltage between the first transfer roller 41 and the second photoconductor drum 32. (Transfer Belt) As illustrated in FIG. 1, the transfer belt 52 has an annular shape so that the toner images are transferred onto the outer peripheral surface, and is looped around a driving roller 44, the steering roller 45, a backup roller 46, the loop roller 47, the loop roller 48, and a push roller 49 to determine the posture. The steering roller 45 is an example of a belt winding motion correcting roller (image forming target winding motion correcting roller). The driving roller 44 having a circular cross section is driven by a driver (not illustrated) to rotate about an axis 44X extending in the Z direction, thereby circulating the transfer belt 52 in a circulating direction indicated by an arrow A at a predetermined speed.
The diameter of the steering roller 45 having a circular cross section is equal to the diameter of the driving roller 44 within a tolerance. In other words, an outer peripheral length 45C of the steering roller 45 is equal to an outer peripheral length 44C of the driving roller 44 within a tolerance. The steering roller 45 is rotatable about an axis 45X extending in the Z direction. The steering roller 45 is configured to swivel about a center in the direction of the axis 45X. Therefore, the steering roller 45 suppresses a winding motion of the transfer belt 52. Each of the first distance 20B between the two first photoconductor drums 22 and the second distance 30B between the two second photoconductor drums 32 is set to an integral multiple of each of the outer peripheral length 44C of the driving roller 44 and the outer peripheral length 45C of the steering roller 45. The second distance 30B is shorter than the first distance 20B. For example, in this exemplary embodiment, the first distance 20B is set to four times as large as each of the outer peripheral length 44C and the outer peripheral length 45C, and the second distance 30B is set to three times as large as each of the outer peripheral length 44C and the outer peripheral length 45C. A distance along the transfer belt 52 between the first transfer position T1 of the first photoconductor drum 22 on the downstream side and the first transfer position T1 of the second photoconductor drum 32 on the upstream side differs from the first distance 20B and the second distance 30B. That is, the distance along the transfer belt 52 between the first transfer position T1 of the first photoconductor drum 22 on the downstream side and the first transfer position T1 of the second photoconductor drum 32 on the upstream side does not correspond to the "adjacency distance (first distance, second distance)" in the claims.
The distance along the transfer belt52between the first transfer position T1of the first photoconductor drum22on the downstream side and the first transfer position T1of the second photoconductor drum32on the upstream side is also set to an integral multiple of each of the outer peripheral length44C of the driving roller44and the outer peripheral length45C of the steering roller45. The backup roller46faces the transfer barrel60across the transfer belt52. A contact area between the transfer barrel60and the transfer belt52is a nip area Np (seeFIG.1). The nip area Np is a second transfer position T2where the toner images are transferred from the transfer belt52onto the recording paper P. The loop roller47positioned on a downstream side of the second photoconductor unit30K and on an upstream side of the backup roller46is rotatably in contact with the inner peripheral surface of the transfer belt52. The loop roller48positioned on an upstream side of the first photoconductor unit20Y and on a downstream side of the driving roller44is rotatably in contact with the inner peripheral surface of the transfer belt52. The push roller49positioned on an upstream side of the loop roller48and on a downstream side of the driving roller44is rotatably in contact with the outer peripheral surface of the transfer belt52and pushes the transfer belt52toward the inner periphery. If the push roller49is not provided, a portion of the transfer belt52between the driving roller44and the loop roller48is shaped as indicated by an imaginary line inFIG.2. In this case, an overlap angle between the transfer belt52and the driving roller44is θI. In this exemplary embodiment, the overlap angle between the transfer belt52and the driving roller44is θ because the push roller49is provided.FIG.2demonstrates that the overlap angle θ is larger than the overlap angle θI. 
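The integral-multiple relationships described above can be checked numerically. The following is a minimal sketch with hypothetical circumference values (the disclosure gives no dimensions); it only illustrates the stated rule that each adjacency distance is an integral multiple of both roller circumferences, the first distance20B being four times and the second distance30B three times the circumference.

```python
def is_integral_multiple(distance: float, circumference: float, tol: float = 1e-6) -> bool:
    """True if distance is an integer multiple of circumference within tol."""
    ratio = distance / circumference
    return abs(ratio - round(ratio)) < tol

# Hypothetical values: driving roller 44 and steering roller 45 share the
# same circumference within a tolerance (e.g. 60 mm, an assumed figure).
circumference_44C = 60.0  # mm, outer peripheral length of driving roller 44
circumference_45C = 60.0  # mm, outer peripheral length of steering roller 45

first_distance_20B = 4 * circumference_44C   # 240 mm: four times, per the embodiment
second_distance_30B = 3 * circumference_44C  # 180 mm: three times, per the embodiment

# Each adjacency distance is an integral multiple of both circumferences,
# and the second distance is shorter than the first.
assert is_integral_multiple(first_distance_20B, circumference_44C)
assert is_integral_multiple(first_distance_20B, circumference_45C)
assert is_integral_multiple(second_distance_30B, circumference_44C)
assert is_integral_multiple(second_distance_30B, circumference_45C)
assert second_distance_30B < first_distance_20B
```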
<Transporter> As illustrated inFIG.1, the transporter16includes a transport device (not illustrated) that transports the recording paper P fed out from the container14in an arrow B direction. The transport device transports the recording paper P from the container14to the transfer barrel60. After the toner images are secondly transferred onto the recording paper P passing over the transfer barrel60(second transfer position T2), the transport device transports the recording paper P to the fixing device18. <Fixing Device> As illustrated inFIG.1, the fixing device18includes a heating roller42that is an example of a heating member, and a pressurizing roller43that is an example of a pressurizing member. In the fixing device18, the toner images transferred onto the recording paper P at the transfer barrel60are fixed onto the recording paper P by heating and pressurizing the recording paper P between the heating roller42and the pressurizing roller43. Next, the image forming apparatus10having the structure described above is described in detail. In the image forming apparatus10of this exemplary embodiment, the second distance (adjacency distance)30B between the rotation axes30X of the two second photoconductor drums32(image formers) positioned on the downstream side of the steering roller45and on the upstream side of the transfer position for the recording paper P is an integral multiple of the outer peripheral length45C of the steering roller45. In the image forming apparatus10, each of the first distance20B between the two first photoconductor drums22and the second distance30B between the two second photoconductor drums32is set to an integral multiple of the outer peripheral length44C of the driving roller44. The second distance30B between the two second photoconductor drums32positioned on the downstream side of the first photoconductor drums22is shorter than the first distance20B. 
In a comparative example (not illustrated) in which the first distance20B is equal to the second distance30B, the second distance30B is adjusted to the first distance20B. Therefore, a distance along the transfer belt52from the driving roller44to the second photoconductor unit30K is shorter in this exemplary embodiment than in the comparative example. As this distance increases, the cumulative amounts of variation in the speed of the transfer belt52and variation in the adjacency distance increase. In the comparative example, the misregistration amount of the toner images on the second photoconductor unit30C and the second photoconductor unit30K tends to increase compared with the misregistration amount of the toner images on the first photoconductor unit20Y and the first photoconductor unit20M. In the exemplary embodiment, the distance between the second photoconductor unit30C and the second photoconductor unit30K (second distance30B) is shorter than in the comparative example. Therefore, the cumulative amounts of the variation in the speed and the variation in the adjacency distance are smaller than in the comparative example. The push roller49that is positioned between the driving roller44and the loop roller48and is rotatably in contact with the outer peripheral surface of the transfer belt52pushes the transfer belt52toward the inner periphery. For example, a transfer belt52of an image forming apparatus10according to a first modified example illustrated inFIG.3includes one straight portion52E. InFIG.3, illustration of the developing roller26A, the collection auger26B, the supply auger26C, the stirring auger26D, the developing roller36A, the collection auger36B, the supply auger36C, and the stirring auger36D is omitted. The end of the straight portion52E on the upstream side is looped around the steering roller45, and the end of the straight portion52E on the downstream side is looped around the driving roller44.
That is, the steering roller45is positioned on the upstream side of the driving roller44. This image forming apparatus10includes two first photoconductor units20and two second photoconductor units30arranged along the straight portion52E. That is, all the photoconductor units (first photoconductor units20and second photoconductor units30) of the image forming apparatus10are positioned on the downstream side of the steering roller45and on the upstream side of the driving roller44. An adjacency distance23B between the rotation axis20X of the first photoconductor drum22on the downstream side and the rotation axis30X of the second photoconductor drum32on the upstream side is set to an integral multiple of each of the outer peripheral length44C of the driving roller44and the outer peripheral length45C of the steering roller45. There is a relationship of first distance20B>adjacency distance23B>second distance30B. FIG.4illustrates a second modified example of the exemplary embodiment of the present disclosure. In an image forming apparatus10of the second modified example, an acute angle between the X direction and an upstream portion52C that is a straight portion of the transfer belt52positioned on an upstream side of the steering roller45and on a downstream side of the loop roller48is θ1. An acute angle between the X direction and a downstream portion52D that is a straight portion positioned on a downstream side of the steering roller45and continuous with the upstream portion52C is θ2larger than θ1.FIG.4demonstrates that the upstream portion52C and the downstream portion52D are not arranged in the Y direction but are arranged in the X direction. Two first photoconductor units20are provided along the upper surface (outer peripheral surface) of the upstream portion52C, and two second photoconductor units30are provided along the upper surface (outer peripheral surface) of the downstream portion52D. 
The first photoconductor unit20of the second modified example has the same specifications as the first photoconductor unit20of the exemplary embodiment. The second photoconductor unit30of the second modified example has the same specifications as the second photoconductor unit30of the exemplary embodiment. When viewed in the Z direction, a distance (adjacency distance) between the rotation axes20X of the two first photoconductor units20is the first distance20B. When viewed in the Z direction, a distance (adjacency distance) between the rotation axes30X of the two second photoconductor units30is the second distance30B. As illustrated inFIG.4, the horizontal dimension of each first photoconductor unit20is20HL, and the horizontal dimension of each second photoconductor unit30is30HL. The horizontal dimension30HL is smaller than the horizontal dimension20HL. When viewed in the Z direction, the two second photoconductor units30are partly arranged in the Y direction. InFIG.4, a horizontal dimension30P is a dimension of the parts of the two second photoconductor units30in the X direction. InFIG.4, a dimension30F is a dimension of a portion including the two second photoconductor units30in the X direction. The horizontal dimension30P is larger than the horizontal dimension30V inFIG.1. Therefore, the horizontal dimension30F is smaller than the horizontal dimension30E inFIG.1. In the image forming apparatus10of the second modified example illustrated inFIG.4, the angle θ2is larger than the angle θ1. The two second photoconductor units30are provided along the downstream portion52D. The second distance30B is shorter than the first distance20B. Therefore, the horizontal dimension of a portion including the downstream portion52D and the two second photoconductor units30is small compared with a case where the downstream portion52D is parallel to the horizontal direction and the second distance30B is equal to the first distance20B. 
When viewed in the Z direction, the two second photoconductor units30are partly arranged in the Y direction. Therefore, the horizontal dimension30F of the portion including the two second photoconductor units30is small compared with a case where the two second photoconductor units30are arranged away from each other in the X direction when viewed in the Z direction. Any number of photoconductor drums (image formers) may be arranged along the transfer belt52as long as the number is three or more. Any number of image formers may be provided in the area on the downstream side of the steering roller45and on the upstream side of the transfer position for the recording paper P as long as the number is plural. In the image forming apparatus10, the first photoconductor units20and the second photoconductor units30may form the toner images on the recording paper P (image forming target) transported by a transport belt (not illustrated) provided in place of the transfer belt52. The toner image is described as an example of the image, and is formed by a dry type electrophotographic system. The exemplary embodiment of the present disclosure is not limited thereto. For example, the toner image may be formed by a wet type electrophotographic system, or the image may be formed by an ink jet system. In the image forming apparatus10, an ink or toner image may be formed on long non-annular continuous paper (image forming target) placed over a plurality of rotators including the driving roller44, having at least one straight portion by the rotators, and transported by the driving roller44and the rotators, and the steering roller (image forming target winding motion correcting roller)45may rotatably be in contact with the inner peripheral surface of the continuous paper. 
In a case where the image forming apparatus10uses the ink jet system, each of a first distance between the centers of ink jet heads (image formers) corresponding to the first photoconductor units20and a second distance between the centers of ink jet heads (image formers) corresponding to the second photoconductor units30is set to an integral multiple of each of the outer peripheral length44C and the outer peripheral length45C. In the case where the image forming apparatus10includes the first photoconductor units20and the second photoconductor units30, the adjacency distances may be equal to each other within a tolerance. In the case where the image forming apparatus10includes the ink jet heads, the adjacency distances may similarly be equal to each other within a tolerance. Both in the cases where the image forming apparatus10includes the first photoconductor units20and the second photoconductor units30and where the image forming apparatus10includes the ink jet heads, each adjacency distance need not be an integral multiple of each of the outer peripheral length44C and the outer peripheral length45C. The diameter of the steering roller45may differ from the diameter of the driving roller44. Also in this case, the diameter of the steering roller45and the diameter of the driving roller44may be set so that each adjacency distance is an integral multiple of each of the outer peripheral length45C and the outer peripheral length44C. The colors of the images (toner or ink images) to be formed on the image forming target (transfer belt52or recording medium P) need not be four colors. For example, six colors may be used for the images. For example, in a case where three or more first photoconductor units20are arranged along the upper portion52A or the upstream portion52C, all the plurality of first distances may be equal to each other within a tolerance, or at least one first distance may differ from the other first distance. 
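When the diameter of the steering roller45differs from the diameter of the driving roller44, as permitted above, the smallest adjacency distance that is an integral multiple of both circumferences is their least common multiple. A sketch under assumed (hypothetical) dimensions, using integer micrometers to keep the arithmetic exact:

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    """Least common multiple of two positive integers."""
    return a * b // gcd(a, b)

# Hypothetical circumferences, in micrometers, for rollers of differing diameter.
circ_driving_44C = 60_000   # 60 mm
circ_steering_45C = 45_000  # 45 mm

# Smallest adjacency distance that is an integral multiple of both
# outer peripheral lengths 44C and 45C.
d = lcm(circ_driving_44C, circ_steering_45C)
print(d)  # 180000 (i.e. 180 mm)
```

Any adjacency distance chosen as an integer multiple of this value satisfies the integral-multiple condition for both rollers simultaneously.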
In the claims, description “all the first distances are equal to each other” means that all the plurality of first distances are equal to each other within the tolerance. For example, the first distance between the first photoconductor unit20at the downstream end and the first photoconductor unit20adjacent to this first photoconductor unit20may be shorter than the first distance between the first photoconductor unit20at the upstream end and the first photoconductor unit20adjacent to this first photoconductor unit20. For example, in a case where three or more second photoconductor units30are arranged along the lower portion52B or the downstream portion52D, all the plurality of second distances may be equal to each other within a tolerance, or at least one second distance may differ from the other second distance. In the claims, description “all the second distances are equal to each other” means that all the plurality of second distances are equal to each other within the tolerance. For example, the second distance between the second photoconductor unit30at the downstream end and the second photoconductor unit30adjacent to this second photoconductor unit30may be shorter than the second distance between the second photoconductor unit30at the upstream end and the second photoconductor unit30adjacent to this second photoconductor unit30. The foregoing description of the exemplary embodiments of the present disclosure has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. 
The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, thereby enabling others skilled in the art to understand the disclosure for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the disclosure be defined by the following claims and their equivalents.
11860563

FIRST EMBODIMENT

Hereinafter, a printer100as an example of an image forming apparatus according to a first embodiment of the present disclosure will be described with reference toFIGS.1to5.FIG.1is a cross-sectional view illustrating a schematic configuration of the printer100. InFIG.1, directions along a vertical direction in a state where the printer100is installed will be referred to as “upward” and “downward”, respectively. Further, along a horizontal direction orthogonal to the vertical direction, a side at which a user interface55(described later) is disposed will be referred to as “frontward”, while a side opposite the frontward will be referred to as “rearward”. The printer100is a direct transfer tandem-type color printer. The printer100mainly includes a housing10, a feeder unit12, a conveyor20, an image forming unit30, and a controller50. The housing10has a substantially box-like shape, and has a top cover11constituting an upper portion of the housing10. The top cover11is provided so as to be movable between an open position and a closed position about an upper-rear end of the printer100. By moving the top cover11to the open position, a portion of the image forming unit30is allowed to be pulled out upward of an interior of the housing10. The top cover11has a discharge tray11aonto which a sheet(s) M as a recording medium is discharged. The feeder unit12is provided at a lower portion of the housing10. The feeder unit12includes a feed tray13, a feed roller14, a separation roller15, a pair of pinch rollers16, a pair of registration rollers17, and a manual tray18. The feed tray13is a tray in which a sheet(s) M is accommodated, and is attached to the housing10so that the feed tray13can be pulled out of the housing10. A feed path R1 along which the sheet M picked up from the feed tray13is conveyed toward the image forming unit30is provided frontward of the feed tray13.
The feed roller14, the separation roller15, and the pair of pinch rollers16are arranged in this order along the feed path R1. When the feed roller14is rotated, the sheet M in the feed tray13is picked up toward the separation roller15. The separation roller15is rotated to feed the sheet M picked up by the feed roller14toward the pair of pinch rollers16. The pair of registration rollers17are provided upward of the pair of pinch rollers16. The pair of registration rollers17temporarily stops movement of the sheet M by contacting a front edge of the sheet M before feeding the sheet M toward the image forming unit30, thereby correcting skewing of the sheet M and adjusting timing for image formation on the sheet M. The manual tray18is provided frontward of the housing10. When the manual tray18is at its open position, the sheet M can be manually fed toward the pair of registration rollers17in the housing10. Inside the housing10, an upstream path R2 along which the sheet M is conveyed is formed downstream of the feed path R1. Specifically, the upstream path R2 extends from the pair of registration rollers17toward a fixing unit37(described later). The conveyor20includes a drive roller21, a driven roller22, a conveying belt23, and a pair of downstream conveying rollers24. Of the conveyor20, the drive roller21, the driven roller22, and the conveying belt23are disposed along the upstream path R2. The drive roller21is a roller that rotates in accordance with rotation of a DC motor61(described later). The conveying belt23is an endless belt looped over the drive roller21and the driven roller22. As the drive roller21rotates, the conveying belt23is driven to be circularly moved clockwise inFIG.1. As a result, the sheet M placed on an upper surface of the conveying belt23is conveyed along the upstream path R2 from the upstream side (i.e., the pair of registration rollers17side) toward the downstream side (i.e., the fixing unit37side).
A belt cleaning unit for removing deposits such as developer (hereinafter referred to as “toner”) deposited on the conveying belt23is provided downward of the conveying belt23. The pair of downstream conveying rollers24will be described later. Along the upstream path R2, the image forming unit30is disposed at an image forming position where an image is formed on the sheet M. The image forming unit30is configured to form an image on the sheet M that has been conveyed to the image forming position by the conveying belt23. The image forming unit30includes a plurality of process units31, a plurality of LED units32, and the fixing unit37. The image forming unit30includes four of the process units31provided at the image forming position to be arranged from the upstream side to the downstream side. The four process units31correspond to four colors of cyan (C), magenta (M), yellow (Y), and black (K), respectively. Each of the process units31includes a developing device33, a photosensitive drum34, a transfer roller35, and a charger36. The developing device33accommodates therein toner of corresponding color (C, M, Y, or K) and configured to supply the toner to an outer circumferential surface of the photosensitive drum34. In each of the process units31, the photosensitive drum34and the transfer roller35contact the conveying belt23so as to nip the conveying belt23from the upper and lower sides thereof. The charger36is disposed diagonally rearward of the photosensitive drum34to be spaced apart therefrom with a predetermined interval in order to avoid contact with the photosensitive drum34. Each of the LED units32is a well-known unit configured to form an electrostatic latent image on the outer circumferential surface of the corresponding photosensitive drum34based on image data or a control command inputted from the controller50. 
The fixing unit37is configured to fix a toner image that has been formed on the sheet M by the process units31to the sheet M, and includes a heat roller38and a pressure roller39. The heat roller38has a heater therein for raising a temperature of the heat roller38. The pressure roller39is disposed at a position where the pressure roller39can nip the sheet M that has been conveyed from the image forming position in cooperation with the heat roller38from the upper and lower sides of the sheet M. In the image forming unit30configured as described above, when the sheet M passes through the image forming position on the upstream path R2, the photosensitive drums34are exposed to light with LED light emitted from the corresponding LED units32, and development is performed to form toner images on the photosensitive drums34using toners supplied from the developing devices33. The sheet M is nipped between the photosensitive drum34and the transfer roller35in each of the process units31, whereby a toner image of each color formed on the photosensitive drums34is transferred onto the sheet M. Thereafter, the sheet M is nipped between the heat roller38and the pressure roller39, and the toner image formed on the sheet M is fixed to the sheet M. A downstream path R3 is formed downstream of the upstream path R2, i.e., downstream of the fixing unit37. The sheet M that has passed through the fixing unit37is conveyed along the downstream path R3. The pair of downstream conveying rollers24constituting the conveyor20and the discharge rollers40are disposed along the downstream path R3. The sheet M that has passed through the fixing unit37is conveyed toward the discharge rollers40by the downstream conveying rollers24. The discharge rollers40are configured to discharge the sheet M onto the discharge tray11a. In the present embodiment, the upstream path R2 and the downstream path R3 are examples of the conveying path. 
The controller50for controlling the printer100to be driven is provided at a rear portion of the housing10. As illustrated inFIG.2, the controller50is a microcomputer including a CPU51, a ROM52, a RAM53, and a non-volatile memory54. The controller50is connected to the user interface55, an input interface56, a temperature sensor57, a sheet sensor58, an image-fixed sheet sensor59, and a motor drive circuit60. The user interface55is an interface between the controller50and a user, and includes a display panel, operation keys, and the like. The user interface55may be a touch panel that can receive a touch operation of the user. The input interface56is an interface configured to communicate with personal computers as external devices. The input interface56includes a network interface that allows the printer100to be connected to a wired or wireless network, and a USB interface that allows the printer100to be connected to the personal computers through a USB cable so that the printer100can communicate with the personal computers, for example. Note that the term “interface” is abbreviated as “I/F” inFIG.2. The temperature sensor57is configured to detect an ambient temperature which is a temperature inside the housing10. The temperature sensor57is provided at an upper portion inside the housing10and positioned close to the discharge rollers40. In the present embodiment, the temperature sensor57is an example of the temperature detector. The sheet sensor58detects presence or absence of the sheet M in the upstream path R2. In the housing10, the sheet sensor58is disposed downstream of the pair of registration rollers17and upstream of the drive roller21. That is, in the upstream path R2, the sheet sensor58is disposed at the image forming position where the image forming unit30forms an image on the sheet M. The image-fixed sheet sensor59is disposed downstream of the fixing unit37, i.e., along the downstream path R3. 
Each of the sheet sensor58and the image-fixed sheet sensor59includes, for example, a pivot lever pivotally movable upon contact of the sheet M passing therethrough, and an optical sensor configured to detect pivotal movement of the pivot lever. In the present embodiment, while a sheet M passes therethrough (that is, when the pivot lever is pivotally moved downward by the sheet M), the sheet sensor58and the image-fixed sheet sensor59are in their ON states. On the other hand, while a sheet M does not pass therethrough (that is, when the pivot lever is not pivotally moved downward by the sheet M), the sheet sensor58and the image-fixed sheet sensor59are in their OFF state. However, relationship between a posture of the pivot lever and ON/OFF state of the sheet sensor58and the image-fixed sheet sensor59may be reversed. The number of sheet sensors provided in the printer100is not limited to two (the sheet sensor58and the image-fixed sheet sensor59), but two or more sheet sensors58may be disposed on the upstream path R2, for example. In the present embodiment, the sheet sensor58and the image-fixed sheet sensor59are examples of the sheet detector. The DC motor61is a brushless DC motor and is driven to rotate by direct current. A driving power from the DC motor61is transmitted to the feed roller14, the separation roller15, the pair of pinch rollers16, the drive roller21, the pair of downstream conveying rollers24, and the discharge roller40through a power transmission mechanism (not illustrated). Although only the DC motor61is illustrated inFIG.2, the printer100also includes motor(s) for driving the photosensitive drums34, the transfer rollers35, and developing rollers provided in the developing devices33. In the present embodiment, the DC motor61is an example of a motor. The motor drive circuit60is configured to drive the DC motor61to be rotated under control of the controller50. 
The motor drive circuit60includes, for example, switching elements, and can control electric power to be supplied to the DC motor61. When driving the DC motor61to be rotated, the controller50outputs a motor ON signal and a clock signal to the motor drive circuit60. The motor ON signal is a signal for commanding drive or non-drive of the DC motor61. The DC motor61is controlled to be driven when the motor ON signal is “ON”, whereas the DC motor61is controlled not to be driven when the motor ON signal is “OFF”. The clock signal is a signal for designating a rotation speed of the DC motor61. When the motor ON signal inputted from the controller50is “ON”, the motor drive circuit60controls power supply to a stator coil (not illustrated) of the DC motor61in accordance with duty cycle of the inputted clock signal, thereby increasing the rotation speed of the DC motor61. Thereafter, when the rotation speed of the DC motor61reaches a predetermined speed, the motor drive circuit60outputs a motor lock signal to the controller50. When the motor lock signal is “ON”, the motor lock signal causes the controller50to output a motor ON signal for maintaining the rotation speed of the DC motor61at a constant speed. The controller50can switch a conveying speed of the sheet M in a printing operation between a full-speed V1 and a half-speed V2 slower than the full-speed V1. Specifically, the controller50can switch settings in the printing operation between a full-speed print setting in which the controller50controls the rotation speed of the DC motor61such that the sheet M is conveyed at the full-speed V1, and a half-speed print setting in which the controller50controls the rotation speed of the DC motor61such that the sheet M is conveyed at the half-speed V2. More specifically, when a common paper such as high-quality paper is designated as the sheet M in a print setting included in a print job, the controller50switches the settings to the full-speed print setting. 
On the other hand, when thick paper, postcard, envelope, or the like is designated as the sheet M, the controller50switches the settings to the half-speed print setting. Thick paper has a large heat capacity and thus requires a larger amount of heat, and a torque required for conveying thick paper as the sheet M is greater than that for high-quality paper. Accordingly, reducing a printing speed for the sheet M can satisfy both the above requirements (i.e., large amount of heat and large amount of torque). In the half-speed print setting, the duty cycle of the clock signal outputted from the controller50is set to a half value of that in the full-speed printing, whereby the conveying speed of the sheet M is controlled to the half-speed V2. In addition, when a print mode requiring print quality such as “photo printing” is designated in the print setting, the controller50also performs a printing operation under the half-speed print setting. The non-volatile memory54stores therein the duty cycles of the clock signal in the full-speed print setting and the half-speed print setting selected depending on various print settings, and the controller50refers to the non-volatile memory54to select a value of the appropriate duty cycle. In the present embodiment, the full-speed print setting is an example of the first setting, and the half-speed print setting is an example of the second setting. Further, the full-speed V1 is an example of the first conveying speed, and the half-speed V2 is an example of the second conveying speed. In the printer100with the above configuration, a paper jam may occur when the upstream path R2 in the housing10is clogged with the sheet M. The paper jam includes a type that is more likely to recur after the paper jam is resolved, and a type that is less likely to recur. 
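The setting selection described above can be sketched as follows. This is a minimal illustration, not the controller's actual firmware: the media-type strings, the "photo printing" mode string, and the absolute duty-cycle values are assumptions; the disclosure states only that the half-speed duty cycle is half the full-speed value.

```python
FULL_SPEED_DUTY = 0.50                  # hypothetical duty cycle for full-speed V1
HALF_SPEED_DUTY = FULL_SPEED_DUTY / 2   # half-speed V2 uses half the duty cycle

def select_print_setting(media_type: str, print_mode: str) -> float:
    """Return the clock-signal duty cycle for a print job.

    Heavy media (large heat capacity, larger conveying torque) and
    quality-critical modes select the half-speed print setting;
    common paper such as high-quality paper uses full speed.
    """
    heavy_media = {"thick paper", "postcard", "envelope"}
    if media_type in heavy_media or print_mode == "photo printing":
        return HALF_SPEED_DUTY
    return FULL_SPEED_DUTY

print(select_print_setting("high-quality paper", "normal"))  # 0.5
print(select_print_setting("envelope", "normal"))            # 0.25
```

In the apparatus, the selected duty values would be read from the non-volatile memory54rather than hard-coded.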
In particular, the paper jam is more likely to recur when a torque shortage, in which drive torque of the DC motor61becomes smaller than a conveying load for conveying the sheet M, has occurred. The torque shortage may also be caused by a change in the ambient temperature. When the ambient temperature becomes equal to or lower than a predetermined value, for example, a conveying load may increase due to an increase in viscosity of lubricant, or drive torque of the DC motor61may decrease, which leads to the torque shortage. Therefore, the paper jam caused by the torque shortage may recur depending on the ambient temperature even after the paper jam is once resolved. On the other hand, when the ambient temperature is sufficiently high after the paper jam is resolved, a possibility of recurrence of the paper jam decreases. Thus, in the present embodiment, when resuming conveyance of the sheet M after the paper jam is resolved, the controller50sets the conveying speed of the sheet M to one of the full-speed V1 and the half-speed V2 in accordance with the ambient temperature. FIG.3is a flowchart illustrating process that the controller50executes when the sheet M is conveyed. The process illustrated inFIG.3is executed by the controller50in response to receipt of a print job and a print execution command for the print job from a personal computer as an external device through the input interface56. The controller50also executes a printing operation as process different from that inFIG.3. In the printing operation, the controller50sets a frequency of the clock signal in accordance with contents of the print setting included in the print job input thereto through the input interface56, and turns “ON” the motor ON signal, whereby the rotation speed of the DC motor61is increased. In S11(hereinafter “step” is abbreviated merely as “S”), a value of a determination flag is set to an initial value “0”.
The determination flag is information indicating whether the torque shortage had occurred in the conveyor 20 at the timing of occurrence of the paper jam. The value "0" of the determination flag indicates that the torque is not in shortage in the conveyor 20, whereas the value "1" indicates that the torque is in shortage in the conveyor 20. In S12, it is determined whether the current printing operation has been completed. When the printing operation has been completed and the determination made in S12 is YES, the controller 50 ends the process in FIG. 3. On the other hand, when the printing operation has not yet been completed and the determination made in S12 is NO, it is determined in S13 whether a paper jam has occurred. Specifically, the presence or absence of the sheet M passing through the upstream path R2 and the downstream path R3 is determined by referring to signals outputted from the sheet sensor 58 and the image-fixed sheet sensor 59. When the presence or absence of the sheet M does not change for a predetermined period of time during conveyance of the sheet M, that is, when conveyance of the sheet M does not advance for a predetermined period of time, a paper jam is determined to have occurred. When the determination made in S13 is NO, the routine returns to S12. In the present embodiment, the process of S13 executed by the controller is an example of the (b) determining. When the determination made in S13 is YES, the routine advances to S14. In S14, it is determined whether the current conveying speed of the sheet M is the half-speed V2. Specifically, it is determined in S14 which of the full-speed print setting, in which the sheet M is conveyed at the full-speed V1, and the half-speed print setting, in which the sheet M is conveyed at the half-speed V2, is selected. When the conveying speed of the sheet M is the half-speed V2 and the determination made in S14 is YES, the routine advances to S15.
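The jam-detection criterion of S13 (sheet-presence readings that do not change for a predetermined period during conveyance) can be sketched as below. This is a hypothetical illustration: the function name, the history-list representation of the sensor signals, and the sample-count window are assumptions, not details from the source.

```python
def jam_detected(sensor_history, window: int) -> bool:
    """Sketch of S13: `sensor_history` holds successive sheet-presence
    readings (from sensors like the sheet sensor 58 / image-fixed sheet
    sensor 59) sampled during conveyance. A jam is flagged when the
    reading has not changed for the last `window` samples, i.e. the
    sheet's conveyance has not advanced for that period."""
    if len(sensor_history) < window:
        return False  # not enough samples yet to judge
    recent = sensor_history[-window:]
    return all(s == recent[0] for s in recent)  # unchanged -> jam
```

A real controller would also gate this on "conveyance in progress", since an unchanged reading while the motor is stopped is not a jam.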
In S15, the controller 50 controls the user interface 55 to display a paper jam error thereon, and ends the process of FIG. 3. Specifically, in S15, the user interface 55 displays a text message and the like indicating that the paper jam has occurred. While the paper jam error is displayed in S15, processes including the conveyance of the sheet M are stopped. In this case, since the paper jam occurred even though the conveying speed of the sheet M was already the half-speed V2, the drive needs to be stopped immediately. On the other hand, when the conveying speed of the sheet M is the full-speed V1 and the determination made in S14 is NO, the routine advances to S16. In S16, it is determined whether a torque shortage has occurred during conveyance of the sheet M. In the present embodiment, the process of S16 executed by the controller 50 is an example of the (c) determining. FIG. 4 is a flowchart for explaining the details of the process executed in S16. First, in S31, it is determined whether the DC motor 61 is in its non-lock state. In the non-lock state, a motor lock signal is not outputted from the motor drive circuit 60 to the controller 50, and therefore the DC motor 61 is not subjected to constant-speed control. Unless the torque is in shortage, the torque fluctuation of the DC motor 61 is stable, and the motor lock signal is outputted from the motor drive circuit 60 when the rotation speed of the DC motor 61 stably reaches a desired value. On the other hand, when the torque is in shortage while the sheet M is conveyed, the torque fluctuation of the DC motor 61 becomes large and the DC motor 61 is not driven stably at the desired rotation speed, so that the motor lock signal is not outputted from the motor drive circuit 60. When the DC motor 61 is in its lock state, subject to constant-speed control, and the determination made in S31 is NO, the routine advances to S35. In S35, the value of the determination flag is set to "0" to indicate that the torque shortage has not occurred.
After completion of the process of S35, the routine advances to S36. On the other hand, when the determination made in S31 is YES, the routine advances to S32. In S32, a temperature signal detected in accordance with the ambient temperature is acquired. Specifically, the temperature sensor 57 detects the current ambient temperature and outputs a temperature signal corresponding to the current ambient temperature. The controller acquires the outputted temperature signal. In S33, it is determined whether the temperature signal acquired in S32 indicates that the ambient temperature is equal to or lower than a first temperature TH1. The first temperature TH1 is an upper limit value of the temperature range having a high possibility of causing a torque shortage at the time of conveyance of the sheet M, and is, for example, 8° C. That is, when the paper jam occurs in a state where the ambient temperature is lower than the first temperature TH1, the torque shortage is highly likely to have occurred due to an increase in the conveying load associated with an increase in viscosity of lubricant or a reduction in the drive torque of the DC motor 61. In the present embodiment, the first temperature TH1 is an example of the predetermined temperature. When the determination made in S33 is YES, the routine advances to S34, where the value of the determination flag is set to "1" to indicate that the torque is in shortage. After completion of the process of S34, the routine advances to S36. On the other hand, when the temperature signal indicates that the ambient temperature is higher than the first temperature TH1 and the determination made in S33 is NO, the routine advances to S35. In S35, the value of the determination flag is set to "0" to indicate that the torque is not in shortage. That is, when the ambient temperature is higher than the first temperature TH1, the paper jam is highly likely to have occurred due to factors other than the torque shortage.
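The torque-shortage determination of S31 through S35 reduces to a two-condition check, sketched below. The function name is hypothetical; the TH1 value (8° C.) and the flag semantics ("1" = torque in shortage) are from the text, while representing the motor lock signal as a boolean is an assumption.

```python
TH1 = 8.0  # first temperature TH1 in deg C, given as an example in the text

def torque_shortage_flag(motor_locked: bool, ambient_temp: float) -> int:
    """Sketch of S31-S35: the determination flag becomes "1" only when
    the DC motor 61 never reached its lock (constant-speed) state AND
    the ambient temperature is at or below TH1. If the motor locked
    (S31: NO) or it is warm enough (S33: NO), the flag stays "0"."""
    if not motor_locked and ambient_temp <= TH1:
        return 1  # S34: torque shortage likely caused the jam
    return 0      # S35: jam likely due to other factors
```

The modification described later, which drops the temperature condition, would simply remove the `ambient_temp <= TH1` term.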
In S36, the value of the determination flag set in S34 or S35 is stored in the non-volatile memory 54 as history information. At this time, the values of the frequency of the clock signal for the DC motor 61 to rotate at a desired speed after the paper jam is resolved are also stored in the non-volatile memory 54 together with the value of the determination flag. In other words, in S36, the controller 50 stores the rotation speed of the DC motor 61 corresponding to the full-speed V1, the rotation speed of the DC motor 61 corresponding to the half-speed V2, and the value of the determination flag into the non-volatile memory 54. After completing the process of S36, the routine advances to S17 of FIG. 3. In the present embodiment, the process of S36 executed by the controller 50 is an example of the (e) storing. In S17 of FIG. 3, the controller 50 controls the user interface 55 to display the paper jam error thereon. The paper jam error displayed on the user interface 55 is the same as the image displayed on the user interface 55 in S15. Note that a text message and the like indicating that the torque shortage has occurred may also be displayed on the user interface 55 in S17. During display of the paper jam error in S17, processes including conveyance of the sheet M are stopped. In the present embodiment, the process of S17 executed by the controller is an example of the (d) notifying. Once a user resolves the paper jam, the paper jam error in S17 is no longer displayed on the user interface 55. The paper jam is resolved by, for example, the user moving the top cover 11 to the open position and removing the sheet M remaining in the upstream path R2 or the downstream path R3. At this time, the controller 50 may regard a change in the position of the top cover 11 from the open position to the closed position as one condition indicating that the paper jam has been resolved.
When the paper jam has been resolved, the controller 50 controls the user interface 55 so that the display panel of the user interface 55 displays a message prompting the user to input a resume command for resuming conveyance of the sheet M, and waits for the user input. In S18, it is determined whether the resume command to resume conveyance of the sheet M has been inputted after the paper jam was resolved. When the determination made in S18 is NO, the paper jam has not yet been addressed by the user, and the controller 50 waits until the paper jam is resolved. When the determination made in S18 is YES, the routine advances to S19. In S19, it is determined whether the value of the determination flag is set to "1", indicating that the torque is in shortage. When the determination made in S19 is NO, the routine advances to S23, in which the conveying speed of the sheet M after conveyance of the sheet M is resumed is set to the full-speed V1. This is because the paper jam is less likely to recur after the paper jam is resolved, unless the torque shortage occurred at the timing of occurrence of the paper jam. When the determination made in S19 is YES, the routine advances to S20. In S20, a temperature signal corresponding to the ambient temperature is acquired. Specifically, the controller 50 acquires a temperature signal currently detected by the temperature sensor 57. In other words, in S20, the ambient temperature at the timing of resumption of conveyance of the sheet M after the paper jam is resolved is detected. In S21, it is determined whether the temperature signal acquired in S20 is a value indicating that the ambient temperature is equal to or lower than the first temperature TH1. The first temperature TH1 is the same value as used in S33 and is, for example, 8° C. FIG. 5 is a graph for explaining the relationship between the conveying speed and the temperature at the timing when conveyance of the sheet M is resumed.
The controller 50 changes the conveying speed between two stages depending on the value of the temperature signal after the paper jam has been resolved. Specifically, when the temperature signal is a value indicating that the ambient temperature is higher than the first temperature TH1, the determination made in S21 is NO. Then, the routine advances to S23, in which the rotation speed of the DC motor 61 is controlled so that the conveying speed becomes the full-speed V1. That is, when the ambient temperature is higher than the first temperature TH1, the paper jam is less likely to recur after resumption of conveyance of the sheet M, so the conveying speed is set to the full-speed V1 in order to suppress an increase in the time required for the printing operation after conveyance of the sheet M has been resumed. On the other hand, when the temperature signal is a value indicating that the ambient temperature is equal to or lower than the first temperature TH1, the determination made in S21 is YES, and the routine advances to S22. In S22, the rotation speed of the DC motor 61 is controlled so that the conveying speed becomes the half-speed V2, which is slower than the full-speed V1. That is, when the ambient temperature indicated by the temperature signal is equal to or lower than the first temperature TH1, a paper jam caused by the torque shortage is highly likely to recur after conveyance of the sheet M has been resumed. Accordingly, the conveying speed is decreased in order to prevent recurrence of the paper jam. After completion of the process of S22 or S23, the routine returns to S12. Then, the process of S13 to S23 is repeated until the current printing operation is determined to be completed in S12. When the current printing operation is determined to be completed in S12, the routine of FIG. 3 is ended. In the present embodiment, the process of S18 to S23 executed by the controller is an example of the (a) controlling.
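The resume-speed decision of S19 through S23 can be condensed into a short sketch. The function name and the concrete speed values are illustrative assumptions; the two-stage rule (half-speed only when the torque-shortage flag is "1" and the ambient temperature is at or below TH1) follows the flowchart described above.

```python
V1 = 100.0  # full-speed V1 (illustrative value, e.g. mm/s)
V2 = 50.0   # half-speed V2 (illustrative value)
TH1 = 8.0   # first temperature TH1 in deg C, from the text

def resume_speed(determination_flag: int, ambient_temp: float) -> float:
    """Sketch of S19-S23: after the jam is cleared, convey at the
    half-speed V2 only if a torque shortage was flagged at jam time
    (flag == 1) and the ambient temperature is still at or below TH1;
    otherwise resume at the full-speed V1."""
    if determination_flag == 1 and ambient_temp <= TH1:
        return V2  # S22: cold and torque-limited -> slow down
    return V1      # S23: jam unlikely to recur -> keep throughput
```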
According to the embodiment described above, the following advantages can be obtained. When it is determined after occurrence of the paper jam that the torque shortage has occurred, and when the temperature signal indicates that the ambient temperature is higher than the first temperature TH1 at the timing when conveyance of the sheet M is resumed after the paper jam is resolved, the controller 50 sets the rotation speed of the DC motor 61 such that the conveying speed of the sheet M becomes the full-speed V1. On the other hand, when the temperature signal indicates that the ambient temperature is equal to or lower than the first temperature TH1, the controller 50 sets the rotation speed of the DC motor 61 such that the conveying speed of the sheet M becomes the half-speed V2. Through this operation, the conveying speed of the sheet M is decreased in a case where the paper jam is highly likely to recur after the paper jam is resolved, but not in a case where the paper jam is less likely to recur. As a result, recurrence of the paper jam can be prevented, and the time required for the printer 100 to form an image after the paper jam is resolved can be prevented from unnecessarily increasing due to a decrease in the conveying speed of the sheet M. Since the drive torque of the DC motor 61 becomes lower as the conveying speed becomes higher, the controller 50 determines whether the torque is in shortage on the condition that the full-speed print setting, in which the sheet M is conveyed at the full-speed V1, is being selected. This operation can prevent an unnecessary increase in the time required for forming an image after the paper jam is resolved, which would be caused by a reduction in the conveying speed. A condition for the controller 50 to determine that the torque is in shortage includes a determination that the temperature signal indicates that the ambient temperature is equal to or lower than the first temperature TH1.
Thus, when the ambient temperature rises after the paper jam is resolved, an unnecessary increase in the time required for forming an image due to a reduction in the conveying speed can be restrained. The controller 50 determines whether the torque for conveying the sheet M is in shortage based on the torque fluctuation of the DC motor 61 during a predetermined period of time. This allows efficient determination of whether the torque is in shortage by using an existing phenomenon, i.e., the torque fluctuation of the DC motor 61. When determining that the torque shortage has not occurred, the controller 50 sets the rotation speed of the DC motor 61 such that the conveying speed of the sheet M becomes the full-speed V1 at the timing when conveyance of the sheet M is resumed after the paper jam is resolved. Accordingly, an unnecessary increase in the time required for printing after the paper jam has been resolved can be restrained. After determining that the torque shortage has occurred and until the paper jam is resolved, the controller 50 controls the user interface 55 to display an error message. With this operation, the user can recognize that the paper jam has occurred, thereby reducing the time required for the user to resolve the paper jam. Modification to First Embodiment In the process in S16 of FIG. 3 for determining whether the torque is in shortage, the temperature signal need not be employed as a determination condition. In this case, when it is determined that the DC motor 61 is in the non-lock state in S31 of FIG. 4, the routine may advance to S34, and the value of the determination flag may be set to "1". That is, the process of S32 and S33 is omitted. In the first embodiment, the conveying speed of the sheet M is set to the half-speed V2 after the process of S22 is executed and until the printing operation is determined to be completed in S12.
Instead, when the number of sheets to be printed specified by a print job is two or more, after completion of the process of S22, the conveying speed of the sheet M may be changed from the half-speed V2 to the full-speed V1 on the condition that the number of sheets M that have been printed exceeds a predetermined number. Second Embodiment Next, a second embodiment of the present disclosure will be described with reference to FIGS. 6 and 7. In the second embodiment, configurations different from those in the first embodiment will be mainly described. Further, in the second embodiment, parts and components similar to those of the first embodiment are designated with the same reference numerals as in the first embodiment in order to avoid duplicating description. In the first embodiment described above, when the ambient temperature is low after the paper jam is resolved, the conveying speed of the sheet M is reduced in one stage to the half-speed V2. In place of this, in the present embodiment, the conveying speed of the sheet M is changed in two stages in accordance with the ambient temperature after the paper jam is resolved. FIG. 6 is a flowchart illustrating a process that the controller 50 executes at the timing when the sheet M is conveyed. The main entity of the operation in this flowchart is the controller 50. As in the first embodiment, when it is determined in S13 that the paper jam has occurred and the determination made in S14 is NO, the routine advances to S16, and whether the torque shortage has occurred is determined in S16 in the second embodiment. Thereafter, when the value of the determination flag is set to "1" in S19, indicating that the torque is in shortage, the routine advances to S20, in which the temperature signal is acquired. In S40, it is determined whether the temperature signal is a value indicating that the ambient temperature is equal to or lower than a second temperature TH2.
As illustrated in FIG. 7, the second temperature TH2 is lower than the first temperature TH1 (for example, 8° C.) and higher than the lower limit value of the temperature at which operation of the printer 100 is assured. When the determination made in S40 is YES, the routine advances to S41. In S41, the rotation speed of the DC motor 61 is set such that the conveying speed of the sheet M becomes a low-speed V3. As illustrated in FIG. 7, the low-speed V3 is slower than the half-speed V2 and faster than the lower limit value of the speed at which the sheet M can be conveyed. On the other hand, when the determination made in S40 is NO, the routine advances to S21, and it is determined in S21 whether the temperature signal is a value indicating that the ambient temperature is equal to or lower than the first temperature TH1. When the determination made in S21 is YES, the routine advances to S22, in which the rotation speed of the DC motor 61 is set such that the conveying speed of the sheet M becomes the half-speed V2. On the other hand, when the determination made in S21 is NO, the routine advances to S23. In S23, the rotation speed of the DC motor 61 is set such that the conveying speed of the sheet M becomes the full-speed V1. That is, in the present embodiment, the conveying speed after the paper jam is resolved is changed between three stages depending on the ambient temperature, as illustrated in FIG. 7. In particular, when the ambient temperature is equal to or lower than the first temperature TH1, the conveying speed of the sheet M is changed between two stages in accordance with the detected ambient temperature. The present embodiment described above can exhibit the same advantages as the first embodiment. Further, when the ambient temperature is equal to or lower than the second temperature TH2, which is lower than the first temperature TH1, the conveying speed is set to the low-speed V3, whereby recurrence of a paper jam caused by the torque shortage can be further prevented.
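The second embodiment's three-stage selection (S40, S21, and the corresponding speed settings) can be sketched as follows. The TH2 and speed values are illustrative assumptions; the text only fixes TH1 at 8° C. as an example and requires TH2 < TH1 and V3 < V2 < V1.

```python
TH1 = 8.0   # first temperature (deg C), from the text
TH2 = 4.0   # second temperature; assumed value, the text only says TH2 < TH1
V1, V2, V3 = 100.0, 50.0, 25.0  # illustrative speeds with V3 < V2 < V1

def resume_speed_3stage(determination_flag: int, ambient_temp: float) -> float:
    """Sketch of the second embodiment: with a torque shortage flagged,
    the resume speed falls in three stages as the temperature drops
    (V1 above TH1, V2 between TH2 and TH1, V3 at or below TH2)."""
    if determination_flag != 1:
        return V1              # no torque shortage -> full speed
    if ambient_temp <= TH2:
        return V3              # S41: very cold -> low-speed V3
    if ambient_temp <= TH1:
        return V2              # S22: cold -> half-speed V2
    return V1                  # S23: warm enough -> full-speed V1
```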
Other Embodiments While the invention has been described in conjunction with various example structures outlined above and illustrated in the figures, various alternatives, modifications, variations, improvements, and/or substantial equivalents, whether known or presently unforeseen, may become apparent to those having at least ordinary skill in the art. Accordingly, the example embodiments of the disclosure, as set forth above, are intended to be illustrative of the invention, and not limiting of the invention. Various changes may be made without departing from the spirit and scope of the disclosure. Therefore, the disclosure is intended to embrace all known or later developed alternatives, modifications, variations, improvements, and/or substantial equivalents. Some specific examples of potential alternatives, modifications, or variations of the described invention are provided below: In the above embodiments, when it is determined after the paper jam is resolved that the torque shortage has not occurred, the conveying speed is set to the full-speed V1. However, when it is determined that the torque has not been in shortage, the conveying speed may instead be set to the half-speed V2 after the paper jam is resolved. In this case, when the value of the determination flag is set to "0" and the determination made in S19 is NO, the routine advances to S22, and the rotation speed of the DC motor 61 is controlled so that the conveying speed is set to the half-speed V2. In the above embodiments, the conveying speed of the sheet M is changed between two or three stages after the paper jam is resolved. In place of this configuration, the conveying speed of the sheet M may be changed, after the paper jam is resolved, between four or more stages depending on the value indicated by the temperature signal.
In this case, when the temperature signal acquired after the paper jam is resolved indicates that the ambient temperature is equal to or lower than the first temperature TH1, the rotation speed of the DC motor 61 is controlled such that the conveying speed of the sheet M becomes slower as the value indicated by the temperature signal becomes lower. In the above embodiments, a brushless DC motor is used as the motor. However, the motor may be a brushed DC motor, or a stepping motor provided with an encoder. Alternatively, an AC motor may be employed. Although the temperature sensor 57 detects the temperature inside the housing 10 as the ambient temperature in the above embodiments, the temperature sensor 57 may instead detect the temperature around the printer 100, outside the housing 10, as the ambient temperature. In the above embodiments, a printer is employed as an example of the image forming apparatus. In place of this, the image forming apparatus may be a multifunction peripheral having both a function of reading an image and a function of performing a printing operation.
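The four-or-more-stage variant just described, in which lower temperatures map monotonically to lower speeds, can be sketched with a table-driven lookup. Everything here is a hypothetical illustration: the function name, the `(threshold, speed)` table representation, and all numeric values are assumptions; the text only requires that the conveying speed becomes slower as the temperature signal becomes lower.

```python
V1 = 100.0  # full-speed V1 (illustrative)

def resume_speed_multistage(ambient_temp: float, stages) -> float:
    """`stages` is a list of (threshold, speed) pairs ordered from the
    warmest threshold to the coldest, with strictly decreasing speeds.
    The coldest threshold that `ambient_temp` still satisfies wins, so
    a lower temperature always yields an equal-or-lower speed."""
    speed = V1  # above every threshold -> full speed
    for threshold, s in stages:
        if ambient_temp <= threshold:
            speed = s  # keep descending as colder thresholds match
    return speed
```

For example, `stages = [(8.0, 50.0), (6.0, 40.0), (4.0, 30.0), (2.0, 20.0)]` gives a five-level speed profile from TH1 downward.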
11860564

The accompanying drawings are intended to depict embodiments of the present disclosure and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted. DETAILED DESCRIPTION It will be understood that if an element or layer is referred to as being "on," "against," "connected to," or "coupled to" another element or layer, it can be directly on, against, connected, or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, if an element is referred to as being "directly on," "directly connected to," or "directly coupled to" another element or layer, there are no intervening elements or layers present. Like numbers refer to like elements throughout. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. Spatially relative terms, such as "beneath," "below," "lower," "above," "upper," and the like may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, a term such as "below" can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein should be interpreted accordingly. The terminology used herein is for describing particular embodiments and examples and is not intended to be limiting of exemplary embodiments of this disclosure.
As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "includes" and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Referring now to the drawings, embodiments of the present disclosure are described below. In the drawings for explaining the following embodiments, the same reference codes are allocated to elements (members or components) having the same function or shape, and redundant descriptions thereof are omitted below. Next, a description is given of the configuration and functions of a drive transmitter, a drive device, and an image forming apparatus according to an embodiment of the present disclosure, with reference to the drawings. Note that identical parts or equivalents are given identical reference numerals and redundant descriptions are summarized or omitted accordingly. As an image forming apparatus including a drive device to which the present disclosure is applied, one embodiment of an electrophotographic image forming apparatus (hereinafter referred to as an image forming apparatus 200) will be described below. First, a description is given of the basic configuration of the image forming apparatus 200 according to the present embodiment. FIG. 1 is a schematic view of a configuration of the image forming apparatus 200 according to the present embodiment. The image forming apparatus 200 includes two optical writing devices 1YM and 1CK and four process units 2Y, 2M, 2C, and 2K to form respective toner images of yellow (Y), magenta (M), cyan (C), and black (K).
The image forming apparatus 200 further includes a sheet feed passage 30, a pre-transfer sheet conveyance passage 31, a bypass sheet feed passage 32, a bypass tray 33, a pair of registration rollers 34, a sheet conveyance belt unit 35, a fixing device 40, a conveyance direction switching device 50, a sheet ejection passage 51, a pair of sheet ejecting rollers 52, and a sheet ejection tray 53. The image forming apparatus 200 further includes a first sheet feeding tray 101, a second sheet feeding tray 102, and a sheet re-entry device. Each of the first sheet feeding tray 101 and the second sheet feeding tray 102 contains a bundle of recording sheets P that function as recording media. The bundle of recording sheets P includes a recording sheet P that functions as a recording medium. The first sheet feeding tray 101 includes a first sheet feed roller 101a and the second sheet feeding tray 102 includes a second sheet feed roller 102a. As a selected one of the first sheet feed roller 101a and the second sheet feed roller 102a is driven and rotated, the uppermost recording sheet P placed on top of the bundle of recording sheets P is fed toward the sheet feed passage 30. The sheet feed passage 30 leads to the pre-transfer sheet conveyance passage 31 that extends to a secondary transfer nip region. The recording sheet P passes through the pre-transfer sheet conveyance passage 31 immediately before the secondary transfer nip region. After being fed from the selected one of the first sheet feeding tray 101 and the second sheet feeding tray 102, the recording sheet P passes through the sheet feed passage 30 and enters the pre-transfer sheet conveyance passage 31. In addition, the image forming apparatus 200 further includes a housing in which parts and components for image formation are contained. A bypass tray 33 is disposed openably and closably on a side of the housing of the image forming apparatus 200 in FIG. 1.
The bundle of recording sheets P is loaded on a top face of the bypass tray 33 when the bypass tray 33 is open with respect to the housing. The uppermost recording sheet P placed on top of the bundle of recording sheets P is fed toward the pre-transfer sheet conveyance passage 31 by the sheet feed roller of the bypass tray 33. Each of the optical writing devices 1YM and 1CK includes a laser diode, a polygon mirror, and various lenses. Each of the optical writing devices 1YM and 1CK drives the laser diode based on image data of an image transmitted from a personal computer. Consequently, the respective photoconductors 3Y, 3M, 3C, and 3K of the process units 2Y, 2M, 2C, and 2K are optically scanned. Specifically, the photoconductors 3Y, 3M, 3C, and 3K of the process units 2Y, 2M, 2C, and 2K are rotationally driven in the counterclockwise direction in FIG. 1. The optical writing device 1YM emits laser light beams to the photoconductors 3Y and 3M while the photoconductors 3Y and 3M are driven, deflecting the laser light beams in an axial direction of rotation of the photoconductors 3Y and 3M. Accordingly, the respective surfaces of the photoconductors 3Y and 3M are optically scanned and irradiated, so that an electrostatic latent image based on yellow image data is formed on the photoconductor 3Y and an electrostatic latent image based on magenta image data is formed on the photoconductor 3M. Further, the optical writing device 1CK emits laser light beams to the photoconductors 3C and 3K while the photoconductors 3C and 3K are driven, deflecting the laser light beams in an axial direction of rotation of the photoconductors 3C and 3K. Accordingly, the respective surfaces of the photoconductors 3C and 3K are optically scanned and irradiated, so that an electrostatic latent image based on cyan image data is formed on the photoconductor 3C and an electrostatic latent image based on black image data is formed on the photoconductor 3K.
The process units 2Y, 2M, 2C, and 2K include the drum-shaped photoconductors 3Y, 3M, 3C, and 3K, each of which functions as an image carrier (a latent image carrier). The process units 2Y, 2M, 2C, and 2K each include respective units disposed around each of the photoconductors 3Y, 3M, 3C, and 3K as a single unit. The process units 2Y, 2M, 2C, and 2K are detachably attached to the housing of the image forming apparatus 200. The process units 2Y, 2M, 2C, and 2K have configurations identical to each other except for the colors of the toners, and are therefore occasionally described in the singular form, without the suffixes indicating the toner colors, which are yellow (Y), magenta (M), cyan (C), and black (K). The process unit 2 (i.e., the process units 2Y, 2M, 2C, and 2K) includes the photoconductor 3 (i.e., the photoconductors 3Y, 3M, 3C, and 3K) and a developing device 4 (i.e., developing devices 4Y, 4M, 4C, and 4K) that develops an electrostatic latent image formed on a surface of the photoconductor 3 into a visible toner image. The process unit 2 (i.e., the process units 2Y, 2M, 2C, and 2K) further includes a charging device 5 (i.e., charging devices 5Y, 5M, 5C, and 5K) and a drum cleaning device 6 (i.e., drum cleaning devices 6Y, 6M, 6C, and 6K). The charging device 5 uniformly charges the surface of the photoconductor 3 while the photoconductor 3 is rotating. The drum cleaning device 6 removes transfer residual toner remaining on the surface of the photoconductor 3 after passing the primary transfer nip region, and thereby cleans the surface of the photoconductor 3. The image forming apparatus 200 illustrated in FIG. 1 is a tandem image forming apparatus in which the four process units 2Y, 2M, 2C, and 2K are aligned along a direction of movement of an intermediate transfer belt 61 that functions as a driven target body having an endless loop.
The photoconductor 3 (i.e., the photoconductors 3Y, 3M, 3C, and 3K) is manufactured as a hollow tube made of aluminum, for example, with the front face covered by an organic photoconductive layer having photosensitivity. Note that each of the photoconductors 3Y, 3M, 3C, and 3K may instead include an endless belt. The developing device 4 (i.e., the developing devices 4Y, 4M, 4C, and 4K) develops an electrostatic latent image with a two-component developer including magnetic carrier particles and non-magnetic toner. Hereinafter, the two-component developer is simply referred to as a "developer". Instead of the two-component developer, the developing device 4 may use a one-component developer that does not include magnetic carrier particles. A toner supplier replenishes the corresponding color toner to a toner bottle 103 (i.e., toner bottles 103Y, 103M, 103C, and 103K). The drum cleaning device 6 (i.e., the drum cleaning devices 6Y, 6M, 6C, and 6K) in the present embodiment of this disclosure includes a cleaning blade of polyurethane rubber as a cleaning body to be pressed against the photoconductor 3. However, the configuration of the drum cleaning device 6 is not limited to this configuration. In order to enhance the cleaning performance, the image forming apparatus 200 employs a rotatable fur brush that contacts the photoconductor 3. The fur brush scrapes a solid lubricant into powder and applies the lubricant powder to the surface of the photoconductor 3. An electric discharging lamp is disposed above the photoconductor 3. The electric discharging lamp is also included in the process unit 2. Further, the electric discharging lamp optically emits light to the photoconductor 3 to remove electricity from the surface of the photoconductor 3 after it passes through the drum cleaning device 6. The electrically discharged surface of the photoconductor 3 is uniformly charged by the charging device 5. Then, the above-described optical writing device 1YM starts optical scanning.
The charging device5rotates while receiving the charging bias from a power source. Here, instead of the above-described method, the charging device5may employ a scorotron charging system in which a charging operation is performed without contacting the photoconductor3. As described above with reference toFIG.1, the process units2Y,2M,2C, and2K have an identical configuration to each other. A transfer device60is disposed below the process units2Y,2M,2C, and2K. The transfer device60causes the intermediate transfer belt61, which is an endless belt wound with tension around multiple support rollers (including rollers63,67,68,69, and71), to contact the photoconductors3Y,3M,3C, and3K. While the intermediate transfer belt61is in contact with the photoconductors3Y,3M,3C, and3K, the intermediate transfer belt61is rotated by rotation of one of the multiple support rollers so that the intermediate transfer belt61endlessly moves in a clockwise direction. By so doing, respective primary transfer nip regions for forming yellow, magenta, cyan, and black images are formed between the photoconductors3Y,3M,3C, and3K and the intermediate transfer belt61. In the vicinity of the primary transfer nip regions, primary transfer rollers62Y,62M,62C, and62K are disposed in a space surrounded by an inner circumferential surface of the intermediate transfer belt61, that is, in a belt loop. The primary transfer rollers62Y,62M,62C, and62K, each of which functions as a primary transfer body, press the intermediate transfer belt61toward the photoconductors3Y,3M,3C, and3K. A primary transfer bias is applied by respective transfer bias power supplies to the primary transfer rollers62Y,62M,62C, and62K. Consequently, respective primary transfer electric fields are generated in the primary transfer nip regions to electrostatically transfer respective toner images formed on the photoconductors3Y,3M,3C, and3K onto the intermediate transfer belt61.
As the intermediate transfer belt61passes through the primary transfer nip regions along the endless rotation in the clockwise direction inFIG.1, the yellow, magenta, cyan, and black toner images are sequentially transferred at the primary transfer nip regions and overlaid onto an outer circumferential surface of the intermediate transfer belt61. This transferring operation is hereinafter referred to as primary transfer. Due to this superimposing primary transfer, a four-color superimposed toner image (hereinafter referred to as a "four-color toner image") is formed on the outer circumferential surface of the intermediate transfer belt61. A secondary transfer roller72that functions as a secondary transfer body is disposed below the intermediate transfer belt61inFIG.1. The secondary transfer roller72contacts a secondary transfer backup roller68at a position where the secondary transfer roller72faces the secondary transfer backup roller68via the outer circumferential surface of the intermediate transfer belt61. By so doing, a secondary transfer nip region is formed between the outer circumferential surface of the intermediate transfer belt61and the secondary transfer roller72. A secondary transfer bias is applied by a transfer bias power supply to the secondary transfer roller72. By contrast, the secondary transfer backup roller68disposed inside the belt loop of the intermediate transfer belt61is electrically grounded. As a result, a secondary transfer electric field is formed in the secondary transfer nip region. The pair of registration rollers34is disposed on the right side ofFIG.1. The pair of registration rollers34nips the recording sheet P and conveys the recording sheet P toward the secondary transfer nip region in synchrony with arrival of the four-color toner image formed on the intermediate transfer belt61.
In the secondary transfer nip region, the four-color toner image formed on the intermediate transfer belt61is secondarily transferred onto the recording sheet P collectively due to action of the secondary transfer electric field and a nip pressure in the secondary transfer nip region. Combined with the white color of the surface of the recording sheet P, the four-color toner image becomes a full-color toner image. Transfer residual toner that has not been transferred onto the recording sheet P in the secondary transfer nip region remains on the outer circumferential surface of the intermediate transfer belt61after the intermediate transfer belt61has passed through the secondary transfer nip region. The transfer residual toner is cleaned by a belt cleaning device75that is in contact with the intermediate transfer belt61. The recording sheet P that has passed through the secondary transfer nip region separates from the intermediate transfer belt61to be conveyed to the sheet conveyance belt unit35. The sheet conveyance belt unit35includes a transfer belt36, a drive roller37, and a driven roller38. The transfer belt36, which is an endless belt, is wound with tension around the drive roller37and the driven roller38and is endlessly rotated in the counterclockwise direction inFIG.1along with rotation of the drive roller37. While holding the recording sheet P that is conveyed from the secondary transfer nip region on the outer circumferential surface (the stretched surface) of the transfer belt36, the sheet conveyance belt unit35forwards the recording sheet P along with the endless rotation of the transfer belt36toward the fixing device40. The image forming apparatus200further includes a sheet reversing device including the conveyance direction switching device50, a re-entry passage54, a switchback passage55, and a post-switchback passage56.
Specifically, after receiving the recording sheet P from the fixing device40, the conveyance direction switching device50switches a direction of conveyance of the recording sheet P, in other words, a direction in which the recording sheet P is further conveyed, between the sheet ejection passage51and the re-entry passage54. When printing an image on a first face of the recording sheet P and not printing on a second face, a single-side printing mode is selected. When performing a print job in the single-side printing mode, a route of conveyance of the recording sheet P is set to the sheet ejection passage51. According to the setting, the recording sheet P having the image on the first face is conveyed toward the pair of sheet ejecting rollers52via the sheet ejection passage51to be ejected to the sheet ejection tray53that is attached to an outside of the image forming apparatus200. When printing images on both first and second faces of a recording sheet P, a duplex printing mode is selected. When performing a print job in the duplex printing mode, after the recording sheet P having fixed images on both first and second faces is conveyed from the fixing device40, a route of conveyance of the recording sheet P is set to the sheet ejection passage51. According to the setting, the recording sheet P having images on both first and second faces is conveyed and ejected to the sheet ejection tray53. By contrast, when performing a print job in the duplex printing mode, after the recording sheet P having a fixed image on the first face is conveyed from the fixing device40, a route of conveyance of the recording sheet P is set to the re-entry passage54. The re-entry passage54is connected to the switchback passage55. The recording sheet P conveyed to the re-entry passage54enters the switchback passage55.
Consequently, when the entire region in the sheet conveying direction of the recording sheet P enters the switchback passage55, the direction of conveyance of the recording sheet P is reversed, so that the recording sheet P is switched back in the reverse direction. The switchback passage55is connected to the post-switchback passage56as well as the re-entry passage54. The recording sheet P that has been switched back in the reverse direction enters the post-switchback passage56. Accordingly, the faces of the recording sheet P are reversed upside down. Consequently, the reversed recording sheet P is conveyed to the secondary transfer nip region again via the post-switchback passage56and the sheet feed passage30. A toner image is transferred onto the second face of the recording sheet P in the secondary transfer nip region. Thereafter, the recording sheet P is conveyed to the fixing device40so as to fix the toner image to the second face of the recording sheet P. Then, the recording sheet P passes through the conveyance direction switching device50, the sheet ejection passage51, and the pair of sheet ejecting rollers52before being ejected on the sheet ejection tray53. FIG.2is a schematic view illustrating a drive device20that drives the intermediate transfer belt61. The drive device20includes a drive motor10and a drive transmitter14. The drive motor10functions as a drive source. The drive transmitter14is mounted on a rotary shaft8of a drive roller67. The drive transmitter14includes a resin gear12and a reinforcement member11. The resin gear12functions as a first member that meshes with a motor gear10aof the drive motor10. The reinforcement member11functions as a second member having a higher rigidity (a higher Young's modulus) than the resin gear12. The reinforcement member11is made of a sheet metal and is fastened to the resin gear12by screws13(seeFIG.3). Each of the screws13functions as a fastening member.
A parallel pin8ais mounted on one end of the rotary shaft8. The resin gear12has slits12cinto which the parallel pin8ais inserted. The reinforcement member11has pin engagement portions11b, each having a slit shape to which the parallel pin8ais fitted. The drive transmitter14is assembled to the rotary shaft8so as to rotate integrally with the rotary shaft8by passing the parallel pin8athrough the slits12cof the resin gear12and fitting the parallel pin8ato the pin engagement portions11bof the reinforcement member11. The driving force of the drive motor10is transmitted to the resin gear12of the drive transmitter14via the motor gear10a. Due to this transmission of the driving force, the drive roller67is driven and rotated to rotate the intermediate transfer belt61. Note that the motor gear10ais formed by subjecting a metallic motor shaft to, for example, cutting work. FIGS.3A and3Bare schematic perspective views of the drive transmitter14. Specifically,FIG.3Aillustrates the perspective view of the drive transmitter14viewed from the intermediate transfer belt61, andFIG.3Billustrates the perspective view of the drive transmitter14viewed from the drive motor10. The resin gear12of the drive transmitter14includes a gear portion12bon the outer circumference. The gear portion12bfunctions as a drive transmitting portion. The resin gear12includes a hollow tube12ahaving a through-hole112athrough which the rotary shaft8passes. Further, the hollow tube12ahas slits12cradially extending at intervals of 90 degrees in the rotational direction of the resin gear12. The width of each slit12c(i.e., the length of each slit12cin the rotational direction) is greater than the outer diameter of the parallel pin8a. Note that, instead of the slits12c, cutouts may be formed in the hollow tube12a. The reinforcement member11is fastened to the resin gear12by the screws13, each functioning as a fastening member.
The reinforcement member11also includes a through-hole11athrough which the rotary shaft8passes. The through-hole11ahas the pin engagement portions11b, each having a slit shape to which the parallel pin8amounted on the rotary shaft8is fitted. The pin engagement portions11bare disposed at intervals of 180 degrees. A large load torque is applied to the intermediate transfer belt61, for example, when a thick paper enters the secondary transfer nip region. Thus, in view of such a large load torque, a gear included in the drive device20is preferably made of metal having a high rigidity (a high Young's modulus). However, in a case in which all the gears are made of metal, a hard gear meshes with another hard gear, which causes an increase in vibration or an increase in noise. Due to such inconvenience, it is not preferable that all the gears be made of metal. In the present embodiment, the motor gear10ais a metallic gear and the gear that meshes with the motor gear10ais the resin gear12. This arrangement causes the resin gear12to absorb meshing vibration between the motor gear10amade of metal and the resin gear12, thereby restraining occurrence of vibration and noise. Further, in the present embodiment, the reinforcement member11having a relatively high rigidity is fastened to the resin gear12having a relatively low rigidity, so as to reinforce the resin gear12. As a result, when a large load torque is applied to the resin gear12, distortion (twist) of the resin gear12in the rotational direction is restrained, and a decrease in the rotational accuracy is restrained. In addition, by engaging the parallel pin8awith the reinforcement member11having a relatively high rigidity, the load torque is applied to the resin gear12at the fastening portion at which the reinforcement member11is fastened to the resin gear12. The fastening portion of the reinforcement member11and the resin gear12is near the gear portion12b.
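The benefit of backing the low-rigidity resin gear with a high-rigidity sheet-metal member can be illustrated numerically. The sketch below compares the torsional twist of a hollow hub under the same load torque for a steel-like and a resin-like shear modulus, using the standard formula theta = T*L/(G*J); all dimensions and material values are hypothetical round numbers chosen for illustration, not values from this disclosure.

```python
import math

def twist_deg(torque_nm, length_m, shear_modulus_pa, outer_d_m, inner_d_m):
    """Angle of twist (degrees) of a hollow circular section under torque,
    using theta = T*L / (G*J) with the polar moment of area J of a tube."""
    j = math.pi * (outer_d_m**4 - inner_d_m**4) / 32.0
    return math.degrees(torque_nm * length_m / (shear_modulus_pa * j))

# Illustrative (hypothetical) hub: 40 mm outer diameter, 8 mm bore, 10 mm long,
# loaded with 2 N*m of torque.
G_STEEL = 80e9   # Pa, typical shear modulus of steel (sheet-metal reinforcement)
G_RESIN = 1e9    # Pa, order of magnitude for an engineering resin

t_steel = twist_deg(2.0, 0.010, G_STEEL, 0.040, 0.008)
t_resin = twist_deg(2.0, 0.010, G_RESIN, 0.040, 0.008)
print(f"steel twist: {t_steel:.6f} deg, resin twist: {t_resin:.6f} deg")
```

With identical geometry and torque, the twist scales inversely with the shear modulus, so the resin part alone twists roughly eighty times more than the steel part in this example; this is the sense in which the reinforcement member restrains distortion of the resin gear in the rotational direction.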
Thus, the radial distance between the portion at which the load torque is applied to the resin gear12and the portion at which the driving force of the drive motor10is applied is shorter in the configuration according to the present embodiment than in a configuration in which the resin gear12is engaged with the parallel pin8a. As a result, distortion (twist) of the resin gear12in the rotational direction is prevented, and a decrease in the rotational accuracy is further restrained. In order to restrain a decrease in the rotational accuracy due to rotational runout, a comparative drive transmitter has a configuration in which at least one of the inner diameter of the through-hole112aof the resin gear12of the comparative drive transmitter and the inner diameter of the through-hole11aof the reinforcement member11is slightly smaller than the outer diameter of the rotary shaft8, so that the comparative drive transmitter is lightly press-fitted to the rotary shaft8. If the inner diameter dimension of the through-hole112ais far smaller than the outer diameter dimension of the rotary shaft8, the comparative drive transmitter cannot be assembled to the rotary shaft8by press-fitting. On the other hand, if the inner diameter dimension of the through-hole112ais greater than the outer diameter dimension of the rotary shaft8, the comparative drive transmitter is not lightly press-fitted to the rotary shaft8, which generates rotational runout. In order to allow the light press fitting of the drive transmitter, the inner diameter of the through-hole112aneeds to be formed accurately. As a result, the manufacturing cost increases. In order to address this inconvenience, the drive transmitter14according to the present embodiment has the configuration in which the inner diameter of the through-hole112aof the resin gear12is reduced by fastening the reinforcement member11and the resin gear12.
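The dimensional relationship described above, in which a light press fit is obtained only when the bore is slightly smaller than the shaft, can be summarized as a small decision rule. The interference threshold below is a hypothetical value chosen for illustration, not a tolerance from this disclosure.

```python
def classify_fit(shaft_od_mm, bore_id_mm, light_press_max_mm=0.02):
    """Classify a shaft/bore pairing by diametral interference.
    Hypothetical rule: up to 0.02 mm interference counts as a light press fit."""
    interference = shaft_od_mm - bore_id_mm
    if interference < 0:
        return "clearance"        # bore larger than shaft -> rotational runout
    if interference <= light_press_max_mm:
        return "light press fit"  # can be assembled by light press-fitting
    return "press fit too tight"  # cannot be assembled onto the shaft

# For an 8.00 mm shaft (hypothetical dimensions):
print(classify_fit(8.00, 8.01))  # clearance
print(classify_fit(8.00, 7.99))  # light press fit
print(classify_fit(8.00, 7.90))  # press fit too tight
```

The narrow "light press fit" band is exactly why the bore must otherwise be machined accurately; the embodiment sidesteps this by shrinking the bore after assembly instead.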
By so doing, the drive transmitter14is assembled to the rotary shaft8in a state similar to the state in which the drive transmitter14is lightly press-fitted to the rotary shaft8. Next, a description is given of the drive transmitter according to the present embodiment in detail, with reference to the drawings. FIG.4is a diagram illustrating a schematic configuration of the resin gear12according to the present embodiment. In the resin gear12of the present embodiment, an opposing wall12dcouples the through-hole112aand the gear portion12band faces the reinforcement member11. The opposing wall12dis inclined with respect to the fastening direction in which the resin gear12and the reinforcement member11are fastened to each other. The fastening direction is also the axial direction of the rotary shaft8. The opposing wall12dis inclined in a direction away from the reinforcement member11toward the rotary shaft8, in other words, inward in the radial direction. FIG.5Ais a diagram illustrating a state of the resin gear12and the reinforcement member11prior to fastening of the resin gear12and the reinforcement member11. FIG.5Bis a diagram illustrating a state of the resin gear12and the reinforcement member11after the fastening of the resin gear12and the reinforcement member11. As illustrated inFIG.5A, since the opposing wall12dof the resin gear12is inclined, when the resin gear12and the reinforcement member11are overlaid in the fastening direction, a space S is generated between the reinforcement member11and the opposing wall12dof the resin gear12in the fastening direction. As described above, in the present embodiment, the opposing wall12dof the resin gear12is inclined in the direction away from the reinforcement member11toward the rotary shaft8(inward in the radial direction). Due to such a configuration, the end portion of the reinforcement member11on the gear portion12bside (i.e., the radially outer end portion) is in contact with the opposing wall12d.
The closer to the rotary shaft8, the larger the space S between the reinforcement member11and the opposing wall12dof the resin gear12. Further, as illustrated inFIG.5A, in the state prior to the fastening of the resin gear12and the reinforcement member11, a predetermined space is provided between the rotary shaft8and the through-hole112aof the resin gear12and another predetermined space is provided between the rotary shaft8and the through-hole11aof the reinforcement member11. As illustrated inFIG.5B, as the reinforcement member11is fastened to the resin gear12with the screws13(each functioning as a fastening member), the opposing wall12dof the resin gear12having a rigidity lower than that of the reinforcement member11is pressed toward the reinforcement member11by the heads of the screws13as indicated by arrow A. As a result, as the opposing wall12dof the resin gear12moves toward the reinforcement member11, the resin gear12is deformed to decrease the space S. Then, the opposing wall12dof the resin gear12follows the reinforcement member11. As illustrated inFIGS.3A and3B, the hollow tube12ais provided with the (plurality of) slits12c. Due to such a configuration, as the opposing wall12dis deformed to follow the reinforcement member11, the end portions of the slits12copposite the reinforcement member11in the fastening direction (axial direction) are deformed to collapse. As a result, the diameter of the hollow tube12aat the end portion of the hollow tube12aopposite the reinforcement member11in the fastening direction (axial direction) is reduced to press against the rotary shaft8. Further, the end portion of the through-hole112aopposite the reinforcement member11in the fastening direction (axial direction) is deformed to follow the rotary shaft8.
Due to this deformation, the inner circumferential surface of the hollow tube12ais pressed against the rotary shaft8over a certain width in the axial direction from the end portion of the hollow tube12aopposite the reinforcement member11in the fastening direction (axial direction). Due to this configuration, vibration of the drive transmitter14is restrained, and a decrease in the rotational accuracy is restrained. In the present embodiment, the fastening force of the screws13adjusts the deformation of the resin gear12, thereby adjusting the amount of reduction of the diameter of the hollow tube12a. For example, in a case in which the smallest inner diameter of the through-hole when the resin gear12is fastened to the reinforcement member11is smaller than the dimension that allows light press-fitting, the reduction in the diameter of the through-hole is relaxed by loosening the screws13, and the dimension that allows light press-fitting is obtained. Thus, by adjusting the fastening force of the screws13, the inner diameter of the through-hole is adjusted. Due to such a configuration, the drive transmitter14is lightly press-fitted without forming the inner diameter of the through-hole with high accuracy, and therefore the manufacturing cost is reduced. Further, after the resin gear12is inserted onto the rotary shaft8in a state in which the resin gear12is fastened to the reinforcement member11to such an extent that the resin gear12is not deformed, the screws13are tightened to reduce the inner diameter of the through-hole, so that the resin gear12is pressed against the rotary shaft8. As described above, in the present embodiment, the drive transmitter does not need to be assembled to the rotary shaft8by light press-fitting. Due to this configuration, when compared with a configuration in which the drive transmitter is lightly press-fitted to the rotary shaft8, the drive transmitter14is assembled to the rotary shaft8easily.
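The adjustment described above, in which the screws13 are loosened or tightened to tune the bore diameter, can be sketched with a simple linear model in which the bore shrinks in proportion to the screw tightening. The linearity assumption and all dimensions here are hypothetical, for illustration only.

```python
def bore_after_fastening(free_bore_mm, max_reduction_mm, tightening):
    """Hypothetical linear model: fastening the screws by fraction
    `tightening` (0..1) shrinks the bore by up to `max_reduction_mm`."""
    return free_bore_mm - max_reduction_mm * tightening

def tightening_for_target(free_bore_mm, max_reduction_mm, target_bore_mm):
    """Solve the linear model for the tightening fraction that yields the
    target bore diameter, clamped to the feasible range 0..1."""
    t = (free_bore_mm - target_bore_mm) / max_reduction_mm
    return min(max(t, 0.0), 1.0)

# Free bore 8.03 mm; full fastening would shrink it by 0.06 mm. To reach
# 7.99 mm (a light press on a hypothetical 8.00 mm shaft), back the screws
# off to a partial tightening instead of machining the bore precisely.
t = tightening_for_target(8.03, 0.06, 7.99)
print(t, bore_after_fastening(8.03, 0.06, t))
```

The point of the sketch is the direction of the adjustment: if the fastened bore ends up too small for light press-fitting, loosening the screws relaxes the diameter reduction, exactly as the embodiment describes.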
Further, in a case in which the drive transmitter is assembled to the rotary shaft8by light press-fitting, the drive transmitter needs to contact the rotary shaft8with light pressure from the viewpoint of the assembly performance. By contrast, in the present embodiment, after the resin gear12and the reinforcement member11are inserted onto the rotary shaft8, the resin gear12and the reinforcement member11are fastened with the screws13so that the resin gear12is pressed against the rotary shaft8with pressure greater than light pressure. Further, the resin gear12having a lower rigidity than the reinforcement member11and being easily elastically deformed is pressed against the rotary shaft8. Therefore, when compared with a configuration in which the reinforcement member11is pressed against the rotary shaft8, the resin gear12is pressed against the rotary shaft8at the target pressure even with a rough inner diameter dimension. Note that the gear portion12bof the resin gear12has no slits and no spaces in the rotational direction. Due to such a configuration, it is not likely that deformation of the opposing wall12dincreases or decreases the diameter of a part of the resin gear12. Accordingly, the configuration of the drive transmitter14according to the present embodiment restrains or prevents a meshing failure such as partial contact between the gear portion12bof the resin gear12and the motor gear10aof the drive motor10. Further, in the present embodiment, the end portion of the reinforcement member11on the gear portion12bside is in contact with the opposing wall12dprior to the fastening of the resin gear12and the reinforcement member11as illustrated inFIG.5A. Due to this configuration, the gear portion12bof the resin gear12is hardly deformed toward the reinforcement member11. As a result, deformation of the gear portion12bis further restrained.
Further, in the present embodiment, the resin gear12has the slits12cformed in the hollow tube12aat intervals of 90 degrees in the rotational direction. This configuration restrains the end portion of the hollow tube12aopposite the reinforcement member11from collapsing into an oval shape. Accordingly, the through-hole112ais equally pressed against the rotary shaft8in the rotational direction. Next, a description is given of the drive transmitter14according to variations of the present embodiment. Each variation is referred to as "Variation". Variation 1 FIG.6is a diagram illustrating a schematic configuration of the resin gear12of the drive transmitter14of Variation 1. FIG.7Ais a diagram illustrating a state of the resin gear12and the reinforcement member11of the drive transmitter14of Variation 1, prior to fastening of the resin gear12and the reinforcement member11. FIG.7Bis a diagram illustrating a state of the resin gear12and the reinforcement member11of the drive transmitter14of Variation 1, after the fastening of the resin gear12and the reinforcement member11. As illustrated inFIG.6, the resin gear12of the drive transmitter14of Variation 1 includes the opposing wall12dhaving an opposing face112d. The opposing face112dof the opposing wall12dfaces the reinforcement member11and is a sloped face inclined with respect to the fastening direction (axial direction). The opposing face112dis inclined to be farther away from the reinforcement member11toward the rotary shaft8. As in the above-described embodiment, as illustrated inFIG.7A, the drive transmitter14of Variation 1 has the space S between the opposing wall12dof the resin gear12and the reinforcement member11on the rotary shaft8side, prior to fastening of the resin gear12and the reinforcement member11.
Due to the space S, as illustrated inFIG.7B, after the fastening, the opposing wall12ddeforms to follow the reinforcement member11, the diameter of the hollow tube12aat the end portion of the hollow tube12aopposite the reinforcement member11in the fastening direction (axial direction) is reduced, and the inner circumferential surface of the hollow tube12aat that end portion is pressed against the rotary shaft8. Variation 2 FIGS.8A and8Bare diagrams, each illustrating a schematic configuration of the drive transmitter14of Variation 2. Specifically,FIG.8Aillustrates a schematic configuration of the drive transmitter14prior to the fastening of the resin gear12and the reinforcement member11, andFIG.8Billustrates a schematic configuration of the drive transmitter14after the fastening of the resin gear12and the reinforcement member11. As illustrated inFIGS.8A and8B, the reinforcement member11of the drive transmitter14of Variation 2 has an opposing face11cthat faces the opposing wall12dof the resin gear12. The opposing face11cof the reinforcement member11is a sloped face inclined to be farther away from the opposing wall12dtoward the rotary shaft8. As in the above-described embodiment, the drive transmitter14of Variation 2 has the space S between the opposing wall12dof the resin gear12and the reinforcement member11on the rotary shaft8side, prior to fastening of the resin gear12and the reinforcement member11.
Due to the space S, as illustrated inFIG.8B, after the fastening, the opposing wall12ddeforms to follow the reinforcement member11, the diameter of the hollow tube12aat the end portion of the hollow tube12aopposite the reinforcement member11in the fastening direction (axial direction) is reduced, and the inner circumferential surface of the hollow tube12aat that end portion is pressed against the rotary shaft8. Note that both the opposing face11cof the reinforcement member11facing the opposing wall12dand the opposing face of the opposing wall12dfacing the reinforcement member11may be inclined to be farther away from the respective opposing members toward the rotary shaft8. As in the above-described embodiment and variation, the drive transmitter14having this configuration has the space S between the opposing wall12dof the resin gear12and the reinforcement member11on the rotary shaft8side, prior to fastening of the resin gear12and the reinforcement member11. Further, after the fastening of the resin gear12and the reinforcement member11, the diameter of the hollow tube12aat the end portion of the hollow tube12aopposite the reinforcement member11in the fastening direction (axial direction) is reduced, and the inner circumferential surface of the hollow tube12aat that end portion is pressed against the rotary shaft8. Variation 3 FIGS.9A and9Bare diagrams, each illustrating a schematic configuration of the drive transmitter14of Variation 3.
Specifically,FIG.9Aillustrates a schematic configuration of the drive transmitter14prior to the fastening of the resin gear12and the reinforcement member11, andFIG.9Billustrates a schematic configuration of the drive transmitter14after the fastening of the resin gear12and the reinforcement member11. The drive transmitter14of Variation 3 has projections11dat the end portion of the reinforcement member11on the gear portion12bside of the resin gear12. Each projection11dprotrudes toward the opposing wall12dof the resin gear12. As in the above-described embodiment and variations, the drive transmitter14of Variation 3 has the space S between the opposing wall12dof the resin gear12and the reinforcement member11on the rotary shaft8side, prior to fastening of the resin gear12and the reinforcement member11. Due to the space S, as illustrated inFIG.9B, after the fastening, the opposing wall12ddeforms to follow the reinforcement member11, the diameter of the hollow tube12aat the end portion of the hollow tube12aopposite the reinforcement member11in the fastening direction (axial direction) is reduced, and the inner circumferential surface of the hollow tube12aat that end portion is pressed against the rotary shaft8. Further, as in the above-described embodiment and variations, the drive transmitter of Variation 3 restrains deformation of the gear portion12bof the resin gear12toward the reinforcement member11, prior to the fastening. Due to such a configuration, the gear portion12bof the resin gear12is restrained from deformation. Note that, even though the projections11dare disposed on the reinforcement member11in Variation 3, the projections may be disposed on the opposing wall12dof the resin gear12near the gear portion12b.
Variation 4 FIGS.10A and10Bare diagrams, each illustrating a schematic configuration of the drive transmitter14of Variation 4. Specifically,FIG.10Aillustrates a schematic configuration of the drive transmitter14prior to the fastening of the resin gear12and the reinforcement member11, andFIG.10Billustrates a schematic configuration of the drive transmitter14after the fastening of the resin gear12and the reinforcement member11. As illustrated inFIGS.10A and10B, the resin gear12of the drive transmitter14of Variation 4 has the opposing wall12dthat is inclined in the direction away from the reinforcement member11toward the gear portion12b. Further, in Variation 4, the hollow tube12ais extended beyond the opposing wall12dtoward the reinforcement member11. The hollow tube12aof Variation 4 has cutouts in the extended portion in the circumferential direction of the resin gear12. The width of each cutout in the extended portion of the hollow tube12aof the resin gear12is greater than the width of each pin engagement portion11bof the reinforcement member11. Due to such a configuration, when the parallel pin8ais engaged with the pin engagement portion11b, the hollow tube12ais prevented from contacting the parallel pin8afrom the rotational direction. As illustrated inFIG.10A, the drive transmitter14of Variation 4 has the space S near the gear portion12bprior to the fastening of the resin gear12and the reinforcement member11. Further, as illustrated inFIG.10A, the reinforcement member11faces the opposing wall12dof the resin gear12so that the hollow tube12aof the resin gear12passes through the through-hole11aof the reinforcement member11. Further, the radially outer end portion of the reinforcement member11is in contact with the inner circumferential surface of the gear portion12bhaving a cylindrical shape.
That is, prior to the fastening of the first member and the second member, the reinforcement member11and the resin gear12contact each other in the fastening direction at a contact portion located radially away from the fastening portion. In Variation 4, as the reinforcement member11is fastened to the resin gear12with the screws13, the opposing wall12dfollows the reinforcement member11, deforming the resin gear12. By so doing, as illustrated inFIG.10B, the diameter of the hollow tube12aat the end portion of the resin gear12on the reinforcement member11side is reduced. Accordingly, the inner circumferential surface of the hollow tube12aat the end portion on the reinforcement member11side is pressed against the rotary shaft8. In addition, since the opposing wall12dof the resin gear12near the gear portion12bdeforms largely, it is likely that the end portion of the gear portion12bof the resin gear12on the reinforcement member11side is deformed to reduce the diameter of the gear portion12b, according to the deformation of the resin gear12. However, in Variation 4, the radially outer end portion of the reinforcement member11is in contact with the inner circumferential surface of the gear portion12bhaving a cylindrical shape. Due to this configuration, the reinforcement member11prevents the end portion of the gear portion12bof the resin gear12on the reinforcement member11side from deforming to reduce the diameter of the gear portion12bof the resin gear12. Variation 5 FIGS.11A and11Bare diagrams, each illustrating a schematic configuration of the drive transmitter14of Variation 5. Specifically,FIG.11Aillustrates a schematic configuration of the drive transmitter14prior to the fastening of the resin gear12and the reinforcement member11, andFIG.11Billustrates a schematic configuration of the drive transmitter14after the fastening of the resin gear12and the reinforcement member11.
As illustrated in FIGS. 11A and 11B, the resin gear 12 of the drive transmitter 14 of Variation 5 has a groove 12e in the opposing face of the opposing wall 12d of the resin gear 12 with respect to the reinforcement member 11. As illustrated in FIG. 11A, the groove 12e functions as a space between the opposing wall 12d of the resin gear 12 and the reinforcement member 11, prior to the fastening of the resin gear 12 and the reinforcement member 11. Further, screw through-holes are formed in the bottom of the groove 12e so that the screws 13 pass through the screw through-holes. In addition, the space S is provided in a portion at which the resin gear 12 and the reinforcement member 11 are fastened to each other with the screws 13. The hollow tube 12a in Variation 5 has a shape similar to that of the hollow tube 12a in Variation 4. In Variation 5, as the reinforcement member 11 is fastened to the resin gear 12 with the screws 13, the radial center portion of the opposing wall 12d that is pressed by the heads of the screws 13 deforms to dent toward the reinforcement member 11. Due to the deformation of the opposing wall 12d, as illustrated in FIG. 11B, the end portion of the hollow tube 12a with respect to the reinforcement member 11 is deformed to reduce its diameter, and the inner circumferential surface at the end portion of the hollow tube 12a with respect to the reinforcement member 11 is pressed against the rotary shaft 8. FIGS. 12A, 12B, and 12C are diagrams, each illustrating a configuration example of slits. Note that the configurations of the drive transmitter of the above-described embodiment and variations have slits at intervals of 90 degrees in the rotational direction, as illustrated in FIG. 12B. However, the number and positions of slits are not limited to the configurations of the above-described embodiment and variations. For example, as illustrated in FIG. 12A, the slits may be provided at intervals of 180 degrees in the rotational direction.
Further, as illustrated in FIG. 12C, the slits may be provided at intervals of 60 degrees in the rotational direction. A greater number of slits is preferable because it weakens the rigidity of the hollow tube, so that the diameter at one end of the hollow tube is easily reduced. A greater number of slits is also preferable because the diameter of the hollow tube can be reduced while the one end of the hollow tube is maintained in a rounder shape, so that the one end of the hollow tube 12a is evenly pressed against the rotary shaft. Further, in the configuration illustrated in FIG. 12C, the resin gear and the reinforcement member are fastened to each other with screws at six (6) positions. By so doing, the hollow tube 12a is deformed more evenly. Further, the drive device described above drives the intermediate transfer belt 61, but the configuration of the drive device is not limited thereto. The drive transmitter according to the present disclosure may be applied to, for example, a drive device that drives a photoreceptor, a drive device that drives a secondary transfer roller, a drive device that drives a fixing roller, or a drive device that drives a pair of conveyance rollers. The configurations described above are examples, and aspects of the present disclosure provide respective effects as follows.
Aspect 1
The drive transmitter (for example, the drive transmitter 14) includes a first member (for example, the resin gear 12) and a second member (for example, the reinforcement member 11). The first member includes an opening (for example, the through-hole 112a) through which a rotary shaft (for example, the rotary shaft 8) passes, a wall (for example, the opposing wall 12d) disposed orthogonal to an axial direction of the first member, and a drive transmitting portion (for example, the gear portion 12b) by which a driving force is transmitted.
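The slit layouts of FIGS. 12A to 12C can be summarized numerically: for evenly spaced slits, the slit count is 360 degrees divided by the angular interval. The following sketch only restates that arithmetic; the function name is our own.

```python
# Relating the angular interval between slits to the slit count shown in
# FIGS. 12A-12C (illustrative helper; not part of the patent).

def slit_count(interval_deg: int) -> int:
    """Number of evenly spaced slits for a given angular interval."""
    if 360 % interval_deg != 0:
        raise ValueError("interval must divide 360 degrees evenly")
    return 360 // interval_deg

# FIG. 12A: 180-degree intervals -> 2 slits
# FIG. 12B:  90-degree intervals -> 4 slits
# FIG. 12C:  60-degree intervals -> 6 slits (matching the six screw positions)
print([slit_count(d) for d in (180, 90, 60)])  # [2, 4, 6]
```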
The second member has a rigidity greater than that of the first member and is configured to be fastened to the wall of the first member. A space (for example, the space S) is formed between the second member and the wall of the first member, with the first member and the second member overlaid in a fastening direction of the second member, prior to fastening of the first member and the second member. At least a part of the wall of the first member defining the space with the second member is configured to shift toward the second member to reduce a diameter of at least a part of the opening of the first member at the fastening of the first member and the second member. According to this configuration, by fastening the first member to the second member, the first member is deformed to reduce the diameter of at least a part of the opening. Therefore, the fastening force of the fastening member (for example, the screws 13) can be adjusted to adjust the inner diameter of the opening. For example, in a case in which the smallest inner diameter of the opening at the fastening of the first member to the second member is smaller than a dimension that allows light press-fitting, loosening the fastening member relaxes the diameter of the through-hole to a dimension that allows light press-fitting. Accordingly, adjustment of the fastening force adjusts the inner diameter of the opening (through-hole). Therefore, without forming the inner diameter of the opening with high accuracy, the drive transmitter 14 can be lightly press-fitted, so that the manufacturing cost is decreased.
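The adjustment described above can be illustrated with a toy linear model: assume (purely for illustration, not from the patent) that the opening diameter shrinks in proportion to how far the screws draw the opposing wall toward the reinforcement member. All numerical values below are hypothetical.

```python
# Toy model (an assumption, not the patent's data): opening diameter
# shrinks linearly with the travel of the wall drawn by the screws.

SHAFT_D = 8.000   # rotary shaft diameter, mm (hypothetical)
FREE_D = 8.050    # opening diameter before fastening, mm (hypothetical)
K = 0.030         # diameter reduction per mm of wall travel (hypothetical)

def opening_diameter(wall_travel_mm: float) -> float:
    """Opening diameter after the wall is drawn by wall_travel_mm."""
    return FREE_D - K * wall_travel_mm

# Over-tightening shrinks the opening below the shaft diameter; backing
# the screws off relaxes it again. This is the sense in which the fit is
# tuned by fastening force rather than by machining the bore precisely.
assert opening_diameter(3.0) < SHAFT_D < opening_diameter(1.0)
```

The point of the sketch is only the monotonic relationship between fastening force and bore diameter; the real deformation is of course nonlinear and depends on the wall geometry of each variation.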
Aspect 2
According to Aspect 1, the wall (for example, the opposing wall 12d) of the first member (for example, the resin gear 12) has a first face (for example, the opposing face 112d) that faces the second member in the fastening direction, the second member (for example, the reinforcement member 11) has a second face (for example, the opposing face 11c) that faces the wall of the first member in the fastening direction, and at least one of the first face and the second face has a slanted face inclining with respect to the fastening direction. According to this configuration, as described in Variation 2 and Variation 3, the first member is deformed to reduce the diameter of at least a part of the through-hole 11a at the fastening of the first member and the second member.
Aspect 3
According to Aspect 2, the slanted face inclines to be farther away from the second member (for example, the reinforcement member 11) or the wall (for example, the opposing wall 12d) of the first member (for example, the resin gear 12), toward the rotary shaft (for example, the rotary shaft 8). According to this configuration, as described in the embodiments above, the space (for example, the space S) prior to the fastening of the first member and the second member is generated with respect to the rotary shaft 8, and therefore deformation of the drive transmitting portion (for example, the gear portion 12b) is restrained at the fastening of the first member and the second member.
Aspect 4
According to Aspect 2 or Aspect 3, the wall (for example, the opposing wall 12d) of the first member (for example, the resin gear 12) is inclined with respect to the fastening direction. According to this configuration, the opposing face is inclined accordingly.
Aspect 5
According to any one of Aspects 1 to 4, the space (for example, the space S) is provided at a fastening portion at which the second member (for example, the reinforcement member 11) is fastened to the first member (for example, the resin gear 12).
According to this configuration, as described in Variation 5, the first member is deformed to narrow the space by the fastening force of the fastening member (for example, the screws 13), and therefore the diameter of the opening (for example, the through-hole 112a) is reduced.
Aspect 6
According to any one of Aspects 1 to 5, the first member (for example, the resin gear 12) and the second member (for example, the reinforcement member 11) are fastened at a fastening portion, and the second member and the wall (for example, the opposing wall 12d) of the first member contact each other in the fastening direction at a contact portion located radially away from the fastening portion, prior to the fastening of the first member and the second member. According to this configuration, as described in Variation 3, deformation of the drive transmitting portion (for example, the gear portion 12b) is restrained.
Aspect 7
According to any one of Aspects 1 to 6, the first member (for example, the resin gear 12) includes a hollow tube (for example, the hollow tube 12a) having the opening (for example, the through-hole 112a), and the hollow tube of the first member has a slit (for example, the slits 12c) or a cutout. According to this configuration, as described in the above-described embodiments, the diameter of the opening is easily reduced at the fastening of the first member and the second member (for example, the reinforcement member 11).
Aspect 8
According to any one of Aspects 1 to 7, the second member (for example, the reinforcement member 11) has a through-hole (for example, the through-hole 11a) through which the rotary shaft (for example, the rotary shaft 8) passes.
Aspect 9
In Aspect 9, a drive device (for example, the drive device 20) includes a drive source (for example, the drive motor 10) and the drive transmitter (for example, the drive transmitter 14) according to any one of Aspects 1 to 8. The drive transmitter is configured to transmit a drive force of the drive source.
According to this configuration, an increase in the cost of the drive device is restrained.
Aspect 10
In Aspect 10, an image forming apparatus (for example, the image forming apparatus 200) includes the drive device according to Aspect 9 and a driven target body (for example, the intermediate transfer belt 61) configured to be driven by the drive device. According to this configuration, the cost of the image forming apparatus is reduced. The present disclosure is not limited to the specific embodiments described above, and numerous additional modifications and variations are possible in light of the teachings within the technical scope of the appended claims. It is therefore to be understood that the disclosure of this patent specification may be practiced by those skilled in the art otherwise than as specifically described herein, and such modifications and alternatives are within the technical scope of the appended claims. Such embodiments and variations thereof are included in the scope and gist of the embodiments of the present disclosure and are included in the embodiments described in the claims and their equivalent scope. The effects described in the embodiments of this disclosure are listed as examples of preferable effects derived from this disclosure, and are therefore not intended to limit the embodiments of this disclosure. The embodiments described above are presented as examples to implement this disclosure and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, replacements, or changes can be made without departing from the gist of the invention. These embodiments and their variations are included in the scope and gist of this disclosure and are included in the scope of the invention recited in the claims and its equivalents.
Any one of the above-described operations may be performed in various other ways, for example, in an order different from the one described above. | 56,939 |
11860565 | DESCRIPTION OF THE EMBODIMENTS Exemplary embodiments of the present invention are described below with reference to the drawings. Note that, for example, the dimensions, the materials, the shapes, and relative arrangements of structural components described in the embodiments below are each one example, and do not limit the scope of the present invention.
Structure of Image Forming Apparatus
Next, an image forming apparatus 100 according to an embodiment is described with reference to FIG. 7. FIG. 7 is a vertical sectional conceptual view of the image forming apparatus according to an embodiment of the present invention. Specifically, FIG. 7 shows a cross section of the image forming apparatus 100 including process cartridges according to the present embodiment. The image forming apparatus 100 of the present embodiment is a full-color laser beam printer using an in-line method or an intermediate transfer method. The image forming apparatus 100 is capable of forming a full-color image on a transfer material (such as a recording sheet, a plastic sheet, or a cloth) in accordance with image information. The image information is input to an image reading device connected to a main body of the image forming apparatus, or to the main body of the image forming apparatus from a host device of, for example, a personal computer connected to the main body of the image forming apparatus so as to be capable of communication therewith. The image forming apparatus 100 includes process cartridges 7 each constituting a corresponding one of a plurality of image forming devices (SY, SM, SC, SK). The image forming devices are each provided for a corresponding one of yellow (Y), magenta (M), cyan (C), and black (K), and can each form an image of the corresponding one of the colors (Y, M, C, K). In the present embodiment, the image forming devices (SY, SM, SC, SK) are disposed in a row in a direction intersecting a vertical direction.
Note that the process cartridges 7 are each attached to and detached from the image forming apparatus 100 via a mounting device, such as a mounting guide or a positioning member, provided at the main body of the image forming apparatus. In the present embodiment, the color process cartridges 7 all have the same shape, and each of yellow (Y) toner, magenta (M) toner, cyan (C) toner, and black (K) toner is stored inside a corresponding one of the color process cartridges 7. Although, in the present embodiment, the process cartridges are described as being attached and detached, developing units (developing devices) may be configured so as to be attached to and detached from the main body of the image forming apparatus. Photosensitive drums 1 are each rotationally driven by a driving device (driving source) that is not shown. A scanner unit (exposure device) 30 is disposed near the photosensitive drums 1. The scanner unit 30 is an exposure unit that irradiates the photosensitive drums 1 with laser light and forms electrostatic latent images on the photosensitive drums 1 on the basis of image information. The surface potential (bright potential) of each photosensitive drum 1 after exposure in the present embodiment is set to become −100 V. An intermediate transfer belt 31, serving as an intermediate transfer body, for transferring the toner images on the photosensitive drums 1 to a transfer material P (recording material) is disposed so as to oppose the four photosensitive drums 1. The endless intermediate transfer belt 31, serving as an intermediate transfer body, contacts all of the photosensitive drums 1, and moves by circulating (rotates) in the direction of the illustrated arrow B (counterclockwise direction). Four primary transfer rollers 32, serving as primary transfer devices, are disposed in parallel on an inner peripheral surface side of the intermediate transfer belt 31 so as to each oppose a corresponding one of the photosensitive drums 1.
A bias having a polarity opposite to a normal charge polarity of toner is applied to the primary transfer rollers 32 from a primary transfer bias power supply (high-voltage power supply), serving as a primary transfer bias applying device (not shown). Therefore, the toner images on the photosensitive drums 1 are transferred (primary-transferred) to the intermediate transfer belt 31. A secondary transfer roller 33, serving as a secondary transfer device, is disposed on an outer peripheral surface side of the intermediate transfer belt 31. A bias having a polarity opposite to a normal charge polarity of toner is applied to the secondary transfer roller 33 from a secondary transfer bias power supply (high-voltage power supply), serving as a secondary transfer bias applying device (not shown). Therefore, the toner images on the intermediate transfer belt 31 are transferred (secondary-transferred) to the transfer material P. For example, when forming a full-color image, the above-described process is successively performed at the image forming units SY, SM, SC, and SK, and the toner images of the respective colors are successively superimposed upon each other on the intermediate transfer belt 31 and are primary-transferred thereto. Then, in synchronism with the movement of the intermediate transfer belt 31, the transfer material P is conveyed to a secondary transfer portion. Due to the action of the secondary transfer roller 33 in contact with the intermediate transfer belt 31 via the transfer material P, the toner images of the four colors on the intermediate transfer belt 31 are secondary-transferred all together to the transfer material P. The transfer material P to which the toner images have been transferred is conveyed to a fixing device 34, serving as a fixing unit. The toner images are fixed to the transfer material P by applying heat and pressure to the transfer material P at the fixing device 34.
Structure of Process Cartridges
The overall structure of the process cartridges 7 that are mounted on the image forming apparatus according to an embodiment is described. FIG. 6 is a vertical sectional conceptual view of a process cartridge according to an embodiment of the present invention. Specifically, FIG. 6 shows a main cross section of a process cartridge 7 of the present embodiment when seen in a longitudinal direction (rotational axis direction) of a photosensitive drum 1. Note that, in the present embodiment, the structures and the operations of the process cartridges 7 for the respective colors are substantially the same, except that the type (color) of developer that they store differs. The process cartridge 7 includes a photosensitive unit 12 including, for example, the photosensitive drum 1, and a developing unit 3 (developing device) including, for example, a developing roller 4 (developer carrying member). The photosensitive drum 1 is rotatably attached to the photosensitive unit 12 via a bearing (not shown). The photosensitive drum 1 is subjected to a driving force of a driving motor, serving as a driving device (driving source) that is not shown, and is thus rotationally driven at a speed of 300 mm/sec in the direction of illustrated arrow A in accordance with an image forming operation. A charging member (charging roller) 2 for charging the photosensitive drum 1 and a cleaning member 6 are disposed at the photosensitive unit 12 so as to contact a peripheral surface of the photosensitive drum 1. The charging member 2 is configured to be rotated by the rotation of the photosensitive drum 1, and can be applied with a voltage by a voltage applying device (not shown). The charging roller 2 is formed by successively stacking a conductive elastic layer and a high-resistance layer on a metal core in a length of 232 mm in a longitudinal direction.
Specifically, a conductive elastic layer made of urethane rubber having a thickness of approximately 3 mm is formed around a metal core having a diameter of 6 mm and a length of 240 mm in the longitudinal direction. Then, a high-resistance layer in which carbon black is dispersed in urethane rubber having a thickness of a few μm is formed on the elastic layer. Two end portions of the metal core of the charging roller 2 are rotatably supported by a conductive supporting member, and, further, the supporting member is urged by a spring member in the direction of the photosensitive drum 1. Therefore, the charging roller 2 is pressure-contacted against the photosensitive drum 1 by a predetermined pressing force in opposition to the elasticity of the conductive elastic layer, so that a charging nip portion is formed. The charging roller 2 contacts the photosensitive drum 1 and is rotated by the rotation of the photosensitive drum 1. Then, a direct current voltage is applied to the charging roller 2 via the metal core by a power supply, and the surface of the photosensitive drum 1 is uniformly charged. In the present embodiment, a charging bias is applied so that the surface potential of the photosensitive drum 1 during the formation of an image becomes −500 V. The cleaning blade 6 has a structure in which a SUS sheet metal and an elastic rubber tip at an end (free end) of the sheet metal press-contact each other over a length of 250 mm in a longitudinal direction. An end of the rubber tip of the cleaning blade 6 contacts the photosensitive drum 1 at a desired angle and a desired inroad amount (distance). In order to ensure good cleaning performance, the contact pressure of the cleaning blade 6 with respect to the surface of the photosensitive drum 1 is approximately 80 g/cm. Residual toner on the surface of the photosensitive drum 1 is removed by such a cleaning blade structure.
The developing roller 4, serving as a developer carrying member, that contacts the photosensitive drum 1 and rotates in the direction of illustrated arrow D (counterclockwise direction) is provided at the developing unit 3. The developing roller 4 is a semiconductive elastic body formed of a low-hardness rubber material, such as silicone or urethane, or a low-hardness rubber material foam, or a combination thereof. The low-hardness rubber material and the foam have a conducting agent, such as carbon, dispersed therein, and have a volume resistivity of 10² Ω·cm to 10¹⁰ Ω·cm. The elastic body has an outside diameter of 20 mm and a length of 235 mm in a longitudinal direction. The developing roller 4 contacts the photosensitive drum 1 with a required contact pressure. In the present embodiment, the developing roller 4 and the photosensitive drum 1 rotate so that their surfaces move in the same direction (from the bottom toward the top in the present embodiment) at an opposing portion (contact portion). During the formation of an image, a voltage (hereunder referred to as a developing bias) is applied to the developing roller 4 by a voltage applying device (not shown). In the present embodiment, the developing bias during the formation of an image is such that a direct current voltage of −350 V is applied. A developer supply roller 5 (hereunder simply referred to as "supply roller"), serving as a device for supplying and collecting a developer, that rotates in the direction of illustrated arrow E (counterclockwise direction) is disposed at the developing unit 3. Here, the supply roller 5 contacts a peripheral surface of the developing roller 4. The supply roller 5 is an elastic roller formed from, for example, an elastic body, and, in the present embodiment, an insulating sponge roller having an outside diameter of 16 mm and a length of 220 mm in a longitudinal direction is disposed where it contacts the developing roller 4.
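The three potentials given in the embodiment (bright potential −100 V, dark potential −500 V, developing bias −350 V) can be related by the standard electrophotographic contrast quantities. Note that these contrast formulas are a standard assumption from the field, not something the specification itself states.

```python
# Potentials from the embodiment; contrast relations are standard
# electrophotography, assumed here for illustration.

V_DARK = -500.0   # charged (dark) drum surface potential, V
V_LIGHT = -100.0  # drum surface potential after exposure (bright), V
V_BIAS = -350.0   # DC developing bias on the developing roller, V

def developing_contrast(v_bias: float, v_light: float) -> float:
    """Potential difference driving toner onto exposed (image) areas."""
    return abs(v_bias - v_light)

def back_contrast(v_dark: float, v_bias: float) -> float:
    """Potential difference holding toner off unexposed (non-image) areas."""
    return abs(v_dark - v_bias)

print(developing_contrast(V_BIAS, V_LIGHT))  # 250.0
print(back_contrast(V_DARK, V_BIAS))         # 150.0
```

With the embodiment's values, the developing contrast is 250 V and the back contrast 150 V, i.e. the bias sits between the bright and dark potentials as expected for reversal development.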
The supply roller 5 and the developing roller 4 rotate so that their surfaces move in opposite directions at an opposing portion (contact portion). A developing blade 8 (restricting member) that contacts the peripheral surface of the developing roller 4, and scrapes, makes uniform, and restricts the thickness of a toner layer is provided at the developing unit 3. The developing blade 8 includes a thin plate 81 (blade portion) and a supporting plate 82 (supporting portion), both of which extend in a longitudinal direction A1 described below. One end 81a (fixed end) of the thin plate 81 in a transverse direction A2 orthogonal to the longitudinal direction is fixed to the supporting plate 82, and the other end 81b is a free end and includes a contact portion 810 (described below). The thin plate 81 is formed of SUS (stainless steel) processed into the form of a leaf spring and has a thickness of 80 μm and a length of 230 mm in the longitudinal direction. The contact portion 810 that is positioned at the free end (the other end 81b) of the thin plate 81 contacts the developing roller 4 with a required contact pressure. Toner supplied to the developing roller 4 has its layer thickness restricted by the developing blade 8, and a thin toner layer to which an electric charge is applied by triboelectric charging is formed on the developing roller 4. The toner is supplied to a developing region as the developing roller 4 rotates. Toner still being carried by the developing roller 4 without contributing to a developing operation is removed from the developing roller 4 due to rubbing by the supply roller 5. Part of the toner that has been removed is supplied again, together with new toner supplied to the supply roller 5, to the developing roller 4 by the supply roller 5, and the remaining toner is returned into and collected inside a toner storage chamber 13. In this way, in the present embodiment, the supply roller 5 has two functions, that is, the function of supplying toner and the function of collecting toner.
The toner storage chamber 13, serving as a developer storage chamber, stores a developer (toner) having a particle diameter of 7 μm. A rotatably supported toner conveying member 11 is provided inside the toner storage chamber 13. Note that, in the present embodiment, the developer is a nonmagnetic one-component developer. The toner conveying member 11 stirs the toner stored inside the toner storage chamber 13, and conveys the toner toward a developing chamber 14 where the developing roller 4 and the supply roller 5 are provided. The developing chamber 14 has a developing opening having a length of 226 mm in a longitudinal direction as an opening portion for conveying the toner to the outside of the developing unit 3. The developing roller 4 is assembled to the developing unit 3 in an arrangement that closes the developing opening. Note that the present embodiment is applicable to a structure serving as a developer container (toner cartridge) having only the toner storage chamber 13 and the toner conveying member 11 and being attached to and detached from the main body of the apparatus.
Structure of Restricting Member
Next, the developing blade 8 (restricting member) according to an embodiment is described in detail. FIG. 2A is a sectional conceptual view of the developing blade according to an embodiment of the present invention in a transverse direction. FIG. 2B is a perspective conceptual view of the developing blade in a longitudinal direction. In the present embodiment, as shown in FIG. 2A, the developing blade 8 includes the supporting plate 82 formed from processed stainless steel, and the thin plate 81 (blade portion) processed in the form of a leaf spring. The thin plate 81 is integrated with the supporting plate 82 (supporting portion) by YAG laser welding.
As shown in FIG. 2B, in the present embodiment, part of an "end edge portion" (not shown) on a side to be brought into contact with the developing roller is scraped off on an end (free end) side of the thin plate 81 of the developing blade 8 by a polishing operation. A portion (second portion) formed by the scraping is formed in the entire region in the longitudinal direction A1. As shown in FIG. 2A, the polishing operation is such that, with respect to the end edge portion, a scraping amount (distance) in the transverse direction A2 is Ps and a scraping amount in a thickness direction A3 is Pt. Specifically, in the present embodiment, the developing blade 8 includes the thin plate 81 that extends in the longitudinal direction A1 and the supporting plate 82 that supports the one end 81a of the thin plate 81 in the transverse direction A2 orthogonal to the longitudinal direction. The developing blade 8 includes the contact portion 810 that is provided at the other end 81b (the other end portion), which is the free end of the thin plate 81, in the transverse direction A2, and that is provided for contacting the surface of the developing roller 4. More specifically, in the present embodiment, in a cross section orthogonal to the longitudinal direction A1, the contact portion 810 has a first portion 811 (first surface) that is positioned on one side where the contact portion 810 contacts the developing roller 4, and a third portion 813 (third surface) orthogonal to the first portion 811. Before the polishing operation, the first portion (first surface) and the third portion (third surface) intersect each other and form the aforementioned "end edge portion". By the polishing operation, a second portion 812 (second surface) that connects the first portion and the third portion is formed. That is, in the cross section orthogonal to the longitudinal direction A1, the contact portion 810 has the first portion 811 and the second portion 812.
Note that the first portion 811 extends in a first direction A21 along the transverse direction A2 and contacts the developing roller 4. On the other hand, the second portion 812 extends further toward the free end from an end D1 of the first portion 811, and extends in a second direction A22 intersecting the first direction A21. An imaginary line B1 that is orthogonal to the first direction A21 and that passes through an end D2 of the second portion 812 intersects an extension line B2 of the first portion 811 in the first direction A21 (intersection point D3). A region demarcated by connecting the intersection point D3, the end D1 of the first portion 811, and the end D2 of the second portion 812 by straight lines (three imaginary lines) can be defined as a "take-in region (first region)". In the present embodiment, in the longitudinal direction A1, the area of a take-in region S2 on an end portion side ES of the thin plate 81 is smaller than the area of a take-in region S1 on a central side CS positioned inwardly of the end portion side. Next, a method of polishing the thin plate 81 is described using FIG. 3. FIGS. 3A to 3D each illustrate a method of polishing the developing blade according to an embodiment of the present invention. As shown in FIG. 3A, the thin plate 81 before being joined to the supporting plate 82 is fixed to a base 94 by being interposed between the base 94 and a holding member 93. A polishing film 92 wound around a rubber roller 91 contacts an end portion of the thin plate 81 while the polishing film 92 is subjected to a load. In the present embodiment, a wrapping film sheet having a granularity of #800 is used for the polishing film 92, and a load of 500 g is applied to the rubber roller 91. The polishing film 92 on the rubber roller 91 is disposed in a fixed state. As shown in FIGS. 3B, 3C, and 3D, due to the base 94 moving toward the left and right in a longitudinal direction F and a direction G, the end portion of the thin plate 81 is rubbed against the polishing film 92 and is finely scraped.
Since the thin plate 81 is micromachined by polishing, occurrence of a cut piece when forming the shape is very slight. The scraping amount of the thin plate 81 is proportional to the rubbing distance with respect to the polishing film 92. The larger the rubbing distance, the larger the scraping amount, and the smaller the rubbing distance, the smaller the scraping amount. That is, by controlling the movement amount of the base 94, the rubbing amount between the thin plate 81 and the polishing film 92 can be changed longitudinally, and, thus, the scraping amount can be controlled. As in the present embodiment, when the scraping amount of two end portions of the thin plate 81 is to be reduced, the rubbing distance of the two end portions against the polishing film 92 is made smaller than the rubbing distance of the other portions against the polishing film 92. Next, the relationship of a longitudinal arrangement of each member is described using FIG. 4. FIG. 4 shows dimensional relationships in the longitudinal direction between members that constitute the developing unit according to an embodiment of the present invention. With respect to a length L4 (235 mm) in the longitudinal direction of the developing roller 4, the supply roller 5 contacts the surface of the developing roller 4 in a range (region CS1 on the central side CS) of a length L1 (220 mm) in the longitudinal direction, and the developing blade 8 is disposed in contact with the surface of the developing roller 4 in a range of a length L3 (230 mm) in the longitudinal direction. A developing chamber frame 15 filled with toner has an opening (developing opening) having a length L5 (226 mm) (opening width) in the longitudinal direction, and the developing roller 4 is disposed so as to face the developing opening. Therefore, the developing roller 4 can carry toner in a length in the longitudinal direction corresponding to the developing opening width (L5).
In order to prevent the supply roller5from being deformed or damaged due to an end portion of the supply roller5rotating while being in contact with an inner wall of the developing opening, 3 mm long gaps in the longitudinal direction are provided, one on each side between the supply roller5and the developing chamber frame. Although toner with which the developing roller4is coated is conveyed to a developing region to perform a developing operation, toner that was not used for the developing operation is removed by the supply roller5, and part of the removed toner is supplied again, together with new toner inside the developing chamber14, to the developing roller4by the supply roller5. Since the supply roller5does not contact the developing roller4in each region having a longitudinal width L2 (on the end portion side ES), the supply roller5does not perform a removing operation in these regions. Therefore, a toner coating portion of the developing roller4at each L2 position is such that the toner coating gradually accumulates, and, thus, a toner coating layer at each portion having the longitudinal width L2 becomes thick. Therefore, the toner coating portion of each portion having the longitudinal width L2 can no longer have a normal electric charge, as a result of which toner scattering or toner dripping occurs. The matters above can be mitigated by changing the cross-sectional areas of the shapes of the toner take-in regions (S1, S2), formed by the thin plate81and the developing roller4, at a longitudinal-contact-position L1 portion (a region on the central side CS) of the supply roller5and at each longitudinal-non-contact-position-L2 portion (region on the end portion side ES) of the supply roller5. Next, the scraping amount of the developing blade is described in detail with reference toFIGS.1A to1D.
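The longitudinal dimensions quoted above can be cross-checked with a few lines of arithmetic. The per-side width L2 below is derived under the assumption that the supply-roller contact length L1 is centered within the blade contact length L3, which the description implies but does not state explicitly.

```python
# Consistency check of the longitudinal dimensions (all in mm) given for FIG. 4.
L4 = 235  # developing roller length
L1 = 220  # supply-roller contact region on the developing roller
L3 = 230  # developing-blade contact length
L5 = 226  # developing-opening width

# Gap between each supply-roller end and the developing chamber frame:
gap_each_side = (L5 - L1) / 2
assert gap_each_side == 3  # matches the stated 3 mm per side

# Assumed per-side width L2 where the blade contacts the developing roller
# but the supply roller does not (assumes L1 is centered within L3):
L2 = (L3 - L1) / 2
print(gap_each_side, L2)
```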
FIG.1Ais a conceptual view that illustrates a transverse-direction scraping amount in the longitudinal direction of the developing blade according to the embodiment of the present invention. FIG.1Bis an enlarged conceptual view of an end portion side shown inFIG.1A. FIG.1Cis a conceptual view that illustrates a thickness-direction scraping amount in the longitudinal direction of the developing blade. FIG.1Dis an enlarged conceptual view of an end portion side shown inFIG.1C. That is,FIGS.1A to1Dare graphs of longitudinal scraping amounts of a contact surface portion of the thin plate81of the developing blade8that contacts the developing roller4according to the present embodiment. More specifically,FIG.1Ashows a transverse-direction scraping amount Ps of the thin plate81, andFIG.1Bshows the transverse-direction scraping amount Ps of a portion extending from 0 mm to 25 mm in the longitudinal direction, that is, one end portion inFIG.1A. FIG.1Cshows a thickness-direction scraping amount Pt of the thin plate81.FIG.1Dshows the thickness-direction scraping amount Pt of a portion extending from 0 mm to 25 mm in the longitudinal direction, that is, one end portion inFIG.1C. Note that a laser microscope VK-X200 (manufactured by Keyence Corp.) is used for measuring the scraping amounts. In the present embodiment, as shown inFIGS.1B and1D, scraping amounts Ps1 and Pt1 of the longitudinal-contact-position-L1 portion (region on the central side CS1), where the supply roller5contacts the developing roller4, differ from scraping amounts Ps2 and Pt2 of a longitudinal-non-contact-position-L2 portion (region on the end portion side ES), where the supply roller5does not contact the developing roller4. Specifically, Ps2 (15 μm<Ps2<40 μm) is smaller than Ps1 (approximately 40 μm), and Pt2 (3 μm<Pt2<15 μm) is smaller than Pt1 (15 μm).
In other words, when performing a polishing operation, on the central side CS, the scraping amount in the transverse direction A2is Ps1, and the scraping amount of the thin plate81in the thickness direction A3is Pt1. On the end portion side ES, the scraping amount in the transverse direction A2is Ps2. The scraping amount of the thin plate81in the thickness direction A3is Pt2. Here, the contact portion810of the thin plate81is constituted so as to satisfy the relationship Ps1>Ps2 and the relationship Pt1>Pt2. Such a structure provides the following effects. FIG.5Aillustrates a contact state on the central side between the developing roller and the developing blade in the developing unit according to the embodiment of the present invention.FIG.5Billustrates a contact state on the end portion side between the developing roller and the developing blade. Specifically,FIG.5Ashows a vertical cross section of the contact portion of the developing blade8at the L1 portion, where the supply roller5contacts the developing roller4, in the longitudinal direction.FIG.5Bshows a vertical cross section of the contact portion of the developing blade8at the L2 portion, where the supply roller5does not contact the developing roller4, in the longitudinal direction. Note that the take-in region S1(hatched portion) shown inFIG.5Aand the take-in region S2(hatched portion) shown inFIG.5Bare each a region that is interposed between the developing roller4and the developing blade8and that is provided for taking in toner that flows in a movement direction of the surface of the developing roller. The cross-sectional areas of the take-in regions (shapes) in the present embodiment are calculated by using the transverse-direction scraping amount Ps and the thickness-direction scraping amount Pt. In particular, as shown inFIG.1B, on the end portion side ES, the scraping amount Ps2 in the transverse direction A2decreases toward an outer side in the longitudinal direction A1.
As shown inFIG.1D, on the end portion side ES, the scraping amount Pt2 in the thickness direction A3decreases toward the outer side in the longitudinal direction A1. In the present embodiment, the cross-sectional area of the take-in region S1(shape) at the contact position of the supply roller5inFIG.5is approximately 300 μm², and the cross-sectional area of the take-in region S2(shape) at the non-contact position of the supply roller5is approximately 50 μm². In other words, in the present embodiment, in the longitudinal direction A1, the end portion side ES is situated outward of an end portion CS11 of the region CS1, which contacts the developing roller4, of the supply roller5. In the longitudinal direction A1, the central side CS is situated inward of the end portion CS11 of the region CS1, which contacts the developing roller4, of the supply roller5. By causing the area of the take-in region S1corresponding to the central side to be larger than the area of the take-in region S2corresponding to the end portion side, toner that is normally supplied by the supply roller5is stably supplied to the take-in regions of the developing blade8. Toner that was not used in a developing operation is removed by the supply roller5. On the other hand, by causing the cross-sectional area of the toner take-in region S2(shape) of the longitudinal-non-contact-position-L2 portion, where the supply roller5does not contact the developing roller4, to be small, a restricting force can be increased. Therefore, although toner tends to accumulate due to the supply roller5not performing a removing operation at the L2 portion, where the supply roller5does not contact the developing roller4, an increase in thickness of a toner coating layer can be suppressed by the restricting force of the developing blade8due to the take-in region (shape) of the developing blade8being small.
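Since the take-in region is bounded by three straight lines through D1, D2, and the intersection point D3, its cross section can be treated as a right triangle whose legs correspond to the scraping amounts Ps and Pt. The sketch below reproduces the stated areas; the end-side values Ps2 and Pt2 are hypothetical picks within the ranges given earlier.

```python
# Cross-sectional area of the triangular take-in region bounded by
# D1, D2, and the intersection point D3 (legs correspond to Ps and Pt).

def take_in_area_um2(ps_um, pt_um):
    """Area (in um^2) of the right triangle with legs ps_um and pt_um."""
    return ps_um * pt_um / 2

S1 = take_in_area_um2(40, 15)  # central side: Ps1 ~ 40 um, Pt1 ~ 15 um
S2 = take_in_area_um2(20, 5)   # end side: hypothetical Ps2, Pt2 in the stated ranges

assert S1 == 300  # matches the ~300 um^2 quoted for the contact position
assert S2 == 50   # matches the ~50 um^2 quoted for the non-contact position
assert S2 < S1    # smaller take-in region -> stronger restricting force
```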
Evaluation Tests Using the developing unit3and the image forming apparatus100according to the present embodiment, the uniformity in a longitudinal direction of a toner coating on the developing roller4was checked. The evaluation condition was continuous printing of images on up to 10000 A4-sized sheets at a printing ratio of 2%, and the evaluation environment was 25° C./50% RH. Next, the embodiment of the present invention is compared with a comparative example shown inFIGS.8A and8BandFIGS.9A to9D. FIG.8Ais a sectional view in a transverse direction of a developing unit according to a comparative example of the present invention.FIG.8Bis a perspective view in a longitudinal direction of the developing unit according to the comparative example. FIG.9Aillustrates a transverse-direction scraping amount in a longitudinal direction of a developing blade according to the comparative example of the present invention.FIG.9Bis an enlarged conceptual view of an end portion side shown inFIG.9A. FIG.9Cillustrates a thickness-direction scraping amount in the longitudinal direction of the developing blade according to the comparative example.FIG.9Dis an enlarged conceptual view of an end portion side shown inFIG.9C. In the comparative example of the present embodiment, as shown inFIGS.9A to9D, a longitudinal scraping amount of a thin plate81of a developing blade8is uniform. Specifically,FIG.9Ashows a transverse-direction scraping amount Ps in an entire longitudinal region of the thin plate81.FIG.9Bshows the transverse-direction scraping amount Ps of a portion extending from 0 mm to 25 mm in the longitudinal direction, that is, one end portion inFIG.9A.FIG.9Cshows a thickness-direction scraping amount Pt in a thickness direction of the thin plate81.FIG.9Dshows the thickness-direction scraping amount Pt of a portion extending from 0 mm to 25 mm in the longitudinal direction, that is, one end portion inFIG.9C. 
As a result, the developing unit3including the developing blade8using the thin plate81according to the embodiment had a uniform toner coating layer on the developing roller in the longitudinal direction even after the evaluation ended, and toner scattering or toner dripping did not occur. This is because the restricting force increased due to the cross-sectional area of the take-in region S2(shape) of the developing blade8at the L2 portion, which is situated at the non-contact position of the supply roller5, being small. On the other hand, in a developing unit3including the developing blade8using the thin plate81according to the comparative example, toner scattering or toner dripping occurred when printing was performed on 6000 sheets. This is because a supply roller5does not perform a removing operation on an L2 portion, which is situated at a non-contact position of the supply roller5that does not contact a developing roller4, and because the restricting force of the developing blade8is weak, as a result of which a toner coating layer becomes thick and toner cannot have a normal electric charge. Although, in the present embodiment, the thin plate81is formed of SUS, which is stainless steel, the thin plate81may be a phosphor bronze plate having the same shape or a thin plate that is laminated with a resin covering member, such as a polyamide elastomer covering member, as long as the contact portion thereof that contacts the developing roller4can be micromachined. If the scraping amounts of a contact portion (Ps1, Pt1) of the thin plate81that contacts the supply roller5and a non-contact portion (Ps2, Pt2) of the thin plate81that does not contact the supply roller5have the relationship Ps1>Ps2 and Pt1>Pt2, effects can be provided. As absolute values of the scraping amounts, the scraping amounts of the thin plate81are adjusted as appropriate so that the toner coating amounts on the developing roller4become those required by the system.
As described above, at the thin plate81of the developing blade8, the scraping amounts of the thin plate81at a non-contact position of the supply roller5can be made smaller than the scraping amounts of the thin plate81at a longitudinal position that the supply roller5contacts. Therefore, an increase in the thickness of a toner coating layer at the non-contact position of the supply roller5can be suppressed. In addition, the amount of cut pieces produced during manufacturing can be reduced by forming a fine scraping shape by polishing. Thus, according to the present invention, the cross-sectional area of the take-in region (shape) of the developing blade8at the non-contact position where the supply roller5does not contact the developing roller can be made smaller than the cross-sectional area of the take-in region (shape) of the developing blade at the contact position where the supply roller contacts the developing roller. Therefore, a toner coating on the developing roller can be made uniform in the longitudinal direction, and, thus, occurrence of toner scattering or toner dripping can be suppressed. Materials to be discarded at the time of manufacturing can be minimized due to micromachining by a polishing operation. The present invention can be summarized as follows. (1) A developing device (3) according to the present invention includesa developer carrying member (4) configured to carry a developer; anda restricting member (8) configured to restrict a thickness of a layer of the developer on the developer carrying member. The restricting member includesa blade portion (81) that extends in a longitudinal direction (A1); anda supporting portion (82) that supports a first end portion of the blade portion in a transverse direction (A2) orthogonal to the longitudinal direction.
A contact portion (810) for contacting a surface of the developer carrying member is provided at a second end portion opposite to the first end portion in the transverse direction, which includes a free end, of the blade portion in the transverse direction. In a cross section orthogonal to the longitudinal direction, the contact portion includesa first portion (811) that extends in a first direction (A21) along the transverse direction and that contacts the developer carrying member, anda second portion (812) that extends toward the free end from an end (D1) of the first portion on a side of the free end in a second direction (A22) intersecting the first direction so that the second portion is farther away from the surface of the developer carrying member as it goes toward the free end. When a region demarcated by connecting an intersection point, the end (D1) of the first portion, and an end (D2) of the second portion on a side of the free end in the second direction by straight lines is defined as a first region, the intersection point being where an imaginary line (B1) that is orthogonal to the first direction and that passes through the end (D2) of the second portion intersects an extension line (B2) of the first portion in the first direction, in the longitudinal direction of the blade portion, an area of the first region on a longitudinal end portion of the blade portion is smaller than an area of the first region on a longitudinal central portion of the blade portion. (2) In the developing device according to the present invention, the second portion (812) may be formed by performing a polishing operation on the second end portion of the blade portion (81).
(3) In the developing device according to the present invention, in performing the polishing operation,when, on the longitudinal central portion of the blade portion (81), a scraping amount in the transverse direction (A2) is Ps1 and a scraping amount in a thickness direction (A3) of the blade portion (81) is Pt1, andwhen, on the longitudinal end portion of the blade portion (81), a scraping amount in the transverse direction (A2) is Ps2 and a scraping amount in the thickness direction (A3) of the blade portion is Pt2, Ps1>Ps2, and Pt1>Pt2. (4) In the developing device according to the present invention, on the longitudinal end portion of the blade portion (81), the scraping amount Ps2 in the transverse direction (A2) decreases toward an outer side in the longitudinal direction (A1). (5) In the developing device according to the present invention, on the longitudinal end portion of the blade portion (81), the scraping amount Pt2 in the thickness direction (A3) decreases toward the outer side in the longitudinal direction (A1). (6) The developing device according to the present invention may include a supply member (5) configured to contact the developer carrying member (4) and to supply a developer to the developer carrying member. In the longitudinal direction (A1), the longitudinal end portion is positioned outside a region in which the supply member contacts the developer carrying member (4), and in the longitudinal direction (A1), the longitudinal central portion is positioned inside the region in which the supply member contacts the developer carrying member (4). (7) In the developing device according to the present invention, the developing device (3) can be configured to be attached to and detached from an image forming apparatus (100). 
(8) A restricting member (8) according to the present invention includesa blade portion (81) that extends in a longitudinal direction (A1); anda supporting portion (82) that supports a first end portion of the blade portion in a transverse direction (A2) orthogonal to the longitudinal direction. A contact portion (810) for contacting a surface of the developer carrying member is provided at a second end portion opposite to the first end portion in the transverse direction, which includes a free end, of the blade portion in the transverse direction. In a cross section orthogonal to the longitudinal direction, the contact portion includesa first portion (811) that extends in a first direction (A21) along the transverse direction and that contacts the developer carrying member, anda second portion (812) that extends toward the free end from an end (D1) of the first portion on a side of the free end in a second direction (A22) intersecting the first direction so that the second portion is farther away from the surface of the developer carrying member as it goes toward the free end.When a region demarcated by connecting an intersection point, the end (D1) of the first portion, and an end (D2) of the second portion on a side of the free end in the second direction by straight lines is defined as a first region, the intersection point being where an imaginary line (B1) that is orthogonal to the first direction and that passes through the end (D2) of the second portion intersects an extension line (B2) of the first portion in the first direction,in the longitudinal direction of the blade portion, an area of the first region on a longitudinal end portion of the blade portion is smaller than an area of the first region on a longitudinal central portion of the blade portion.
(9) A process cartridge (7) according to the present invention includesan image carrying member (1) configured to carry an image; andthe developing device (3) or the restricting member (8), and the process cartridge is attached to and detached from an image forming apparatus (100). (10) An image forming apparatus (100) according to the present invention includesa fixing member (34); andthe developing device (3), the restricting member (8), or the process cartridge (7), andthe image forming apparatus forms an image on a recording material (P). According to the present invention, the uniformity in a longitudinal direction of a developer layer that is carried by the developer carrying member is increased while the efficiency of use of a raw material at the time of manufacturing is increased. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. This application claims the benefit of Japanese Patent Application No. 2021-106016, filed Jun. 25, 2021, which is hereby incorporated by reference herein in its entirety.
11860566

DESCRIPTION OF THE EMBODIMENT In the following, embodiments according to the present invention will be described with reference to the drawings. First Embodiment An outline of an image forming apparatus100according to a first embodiment will be described usingFIG.2. The image forming apparatus100is a monochromatic printer for forming a monochromatic image on a sheet S on the basis of image information received from an external device. As the sheet S which is a recording material, it is possible to use various sheets different in size and material, including paper such as plain paper or thick paper; a plastic film; a cloth; a sheet material such as coated paper subjected to surface treatment; a special-shaped sheet material such as an envelope or index paper; and the like. As shown inFIG.2, the image forming apparatus100includes an image forming portion101of an electrophotographic type in which an image is formed on the sheet S, and a sheet feeding mechanism (6,8,12) for feeding and conveying the sheet S. The image forming portion101includes a photosensitive drum1as an image bearing member, a charging roller2as a charging means, an exposure device3as an exposure means, a developing device4as a developing means, a transfer roller5as a transfer means, a brush member11, and a fixing device9as a fixing means. The photosensitive drum1is an electrophotographic photosensitive member molded in a drum shape. The charging roller2is a charging member of a contact type in which the charging roller2contacts the photosensitive drum1. A contact portion between the charging roller2and the photosensitive drum1is a charging portion P2(charging position) where charging of a surface of the photosensitive drum1is carried out. The developing device4includes a developing roller41, a supplying roller42, a stirring member43, a developing blade44, and a toner accommodating portion45.
The developing roller41is a developing member or a developer carrying member for supplying toner T to a developing portion P4(developing position), where the developing roller41and the photosensitive drum oppose each other, by being rotated while carrying the toner T. In this embodiment, a so-called contact development type in which a toner layer carried on the developing roller41contacts the surface of the photosensitive drum1in the developing portion P4is used. The developing roller41is disposed at an opening of the toner accommodating portion45provided in a position opposing the photosensitive drum1. The supplying roller42supplies (applies) the toner T in the toner accommodating portion45to the developing roller41. The stirring member43is disposed in the toner accommodating portion45and stirs the toner T in the toner accommodating portion45by being rotated. The toner accommodating portion45is a container for accommodating the toner T as a developer. The developing blade44is contacted from an inside of the toner accommodating portion45to a surface of the developing roller41, rotating toward the developing portion P4, with a predetermined pressing force (pressure). The developing blade44is formed of, as a main component, a material (for example, iron or copper) which is on a positive polarity side (non-normal polarity side) relative to a main component (binder resin) of the toner in a charging series. By this, the toner T is triboelectrically charged to a normal polarity (normal charge polarity) by being rubbed with the developing blade44. The transfer roller5is disposed in contact with the surface of the photosensitive drum1. A nip where the transfer roller5and the photosensitive drum1oppose each other is a transfer portion P5where the toner image is transferred from the photosensitive drum1onto the sheet S.
The brush member11is disposed downstream of the transfer portion P5and upstream of the charging portion P2with respect to a rotational direction R1of the photosensitive drum1. The brush member11is disposed in contact with the surface of the photosensitive drum1in a predetermined contact condition. Details of the brush member11will be described later. The fixing device9includes a fixing roller or a flexible fixing film as a first rotatable member, a pressing roller as a second rotatable member contacting the first rotatable member with a predetermined pressing force, and a heating means for heating the image on the sheet S through the first rotatable member. As the heating means, a halogen lamp generating radiant heat or a heater substrate in which a pattern of a heat generating resistor is formed on a ceramic substrate can be used. The sheet feeding mechanism includes a cassette6, a feeding (conveying) roller pair8, and a discharging roller pair12. The cassette6is a stacking portion in which sheets S are stacked. The feeding roller pair8is a feeding member for feeding the sheet S, fed from the cassette6, to the transfer portion P5. The discharging roller pair12is a discharging member for discharging the sheet S on which the image is formed by the image forming portion101. In the following, an outline of an image forming operation by the image forming apparatus100will be described. When an execution instruction of the image forming operation is provided to the image forming apparatus100, the photosensitive drum1is rotationally driven in the clockwise direction inFIG.2, and the surface of the photosensitive drum1is electrically charged uniformly by the charging roller2. The exposure device3exposes the surface of the photosensitive drum1to light by irradiating the surface of the photosensitive drum1with laser light L on the basis of the image information received from the external device.
By this, an electrostatic latent image is written (formed) on the surface of the photosensitive drum1. In this embodiment, the reverse development type is employed. For that reason, the charging roller2charges the surface of the photosensitive drum1to a dark portion potential Vd of a negative polarity by being supplied with a voltage (charging voltage) of the negative polarity which is the same as the normal polarity of the toner T. After the charging, a light portion potential Vl of a region (image region) in which the photosensitive drum surface is exposed to light by the exposure device3is lower in absolute value than the dark portion potential Vd. In a constitution example of this embodiment, Vd=−700 (V) and Vl=−100 (V) are set. In the developing device4, the toner T accommodated in the toner accommodating portion45is uniformized by the stirring member43, and is supplied to the developing roller41by the supplying roller42. The toner T carried on the developing roller41is not only triboelectrically charged to the normal polarity by being rubbed with the developing blade44but also regulated in a predetermined layer thickness during passing through the developing blade44. By rotation of the developing roller41, the toner T charged to the normal polarity is supplied to the developing portion P4. Then, a voltage (developing voltage) of the normal polarity which is the same as the normal polarity of the toner T is applied to the developing roller41, so that the toner T is transferred onto the photosensitive drum1depending on a potential distribution on the surface of the photosensitive drum1. By this, the electrostatic latent image on the surface of the photosensitive drum1is developed and visualized as a toner image. The toner image formed on the surface of the photosensitive drum1is fed to the transfer portion P5in a state in which the toner image is carried on the photosensitive drum1.
In parallel to the above-described process, the sheets S are fed one by one from the cassette6by an unshown feeding unit, and then the sheet S is conveyed to the transfer portion P5by the feeding (conveying) roller pair8. A voltage (transfer voltage) of a positive polarity opposite to the normal polarity of the toner T is applied to the transfer roller5, so that the toner image is transferred from the photosensitive drum1onto the sheet S in the transfer portion P5. The sheet S passed through the transfer portion P5is conveyed to the fixing device9. In the fixing device9, the image on the sheet S is heated and fixed by the first rotatable member heated by the heating means while nipping and conveying the sheet S in the nip between the first rotatable member and the second rotatable member. The sheet S passed through the fixing device9is discharged to an outside of the image forming apparatus100by the discharging roller pair12. (Cleaner-Less Brush Type) Next, an operation peculiar to a cleaner-less brush type using the brush member11will be described. In this embodiment, a simultaneous development and cleaning type in which residual toner which was not transferred onto a toner image receiving member (sheet S) in the transfer portion P5is collected by the developing roller41when the residual toner reaches the developing portion P4next time is employed. In the simultaneous development and cleaning type, the residual toner collected by the developing roller41is stirred together with another toner in the toner accommodating portion45and then is used for the development again. In the simultaneous development and cleaning type, the residual toner which was not transferred onto the toner image receiving member in the transfer portion P5is collected by the developing roller41, and therefore, the brush member11basically permits passing of the residual toner therethrough.
For that reason, the “brush member” in this embodiment is different from a brush member as a cleaning device (drum cleaner) for the purpose of removing the residual toner from the photosensitive drum1. Incidentally, in the simultaneous development and cleaning type, the cleaning device for collecting the residual toner is not disposed, and therefore, such a type is called a cleaner-less brush type in some cases. In the cleaner-less brush type, the brush member11for scattering the residual toner deposited on the surface of the photosensitive drum1passed through the transfer portion P5is disposed downstream of the transfer portion P5and upstream of the charging portion P2with respect to the rotational direction R1. By disposing the brush member11, a state in which the residual toner locally exists in a large amount on the photosensitive drum1can be alleviated. When the residual toner locally exists in the large amount on the photosensitive drum1, there is a possibility that image defects are caused by improper charging due to contamination of the charging roller2with the residual toner and by improper collection of the residual toner in the developing portion P4. On the other hand, in the cleaner-less brush type, the residual toner is scattered by the brush member11, and behavior of the residual toner in the charging portion P2and the developing portion P4is uniformized, so that the above-described inconveniences can be suppressed. (Operation Peculiar to Cleaner-Less Brush Type) In the cleaner-less brush type, the residual toner passed through the contact portion between the brush member11and the photosensitive drum1reaches the charging portion P2.
To the charging roller2, the charging voltage of a same polarity as the normal polarity of the toner is applied, and therefore, of the residual toner particles, toner particles charged to the normal polarity pass through the charging portion P2while being pressed against the photosensitive drum1. On the other hand, of the residual toner particles, toner particles charged to the non-normal polarity or toner particles of which charge amount is close to zero are partially deposited on the charging roller2in the charging portion P2. When the residual toner is deposited and accumulated on the charging roller2, uniform charging of the photosensitive drum1is prevented, so that the image defect due to the improper charging becomes apparent. In this embodiment, in order to alleviate a degree of deposition of the residual toner on the charging roller2, a peripheral speed difference between the charging roller2and the photosensitive drum1is set. Specifically, a peripheral speed of the charging roller2is set at a value higher than a peripheral speed of the photosensitive drum1by 5% or more. Further, in order to charge the residual toner to the normal polarity by friction between the charging roller2and the photosensitive drum1, materials of a surface layer of the charging roller2and a surface layer of the photosensitive drum1are selected. That is, in the charging series, the materials of the surface layer of the charging roller2and the surface layer of the photosensitive drum1are positioned in a higher rank (positive polarity side, non-normal polarity side) than the toner. By constituting the peripheral speed difference and the materials as described above, in the charging portion P2, the residual toner is charged to the normal polarity by friction with the charging roller2or the photosensitive drum1, so that deposition of the residual toner on the charging roller2can be suppressed.
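The peripheral-speed setting above can be expressed numerically. Only the 5%-or-more ratio comes from the text; the drum surface speed used below is a hypothetical value chosen for illustration.

```python
# Peripheral speed difference between the charging roller and the drum.
drum_speed_mm_s = 100.0  # hypothetical process (drum surface) speed
charging_roller_speed_mm_s = drum_speed_mm_s * 1.05  # the stated 5% minimum

speed_difference = charging_roller_speed_mm_s / drum_speed_mm_s - 1.0
assert speed_difference >= 0.05  # at least 5% faster than the drum surface
print(charging_roller_speed_mm_s)
```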
The residual toner having passed through the charging portion P2 reaches the developing portion P4 with the rotation of the photosensitive drum 1. Of the residual toner particles carried on the photosensitive drum 1 in the non-image region (non-exposure region), the toner particles charged to the normal polarity are transferred onto the developing roller 41 and are collected in the toner accommodating portion 45 by the potential difference between the dark portion potential Vd and the developing voltage. On the other hand, of the residual toner particles carried on the photosensitive drum 1 in the image region (exposure region), the toner particles charged to the normal polarity are not transferred onto the developing roller 41 and remain on the photosensitive drum 1 because of the potential difference between the light portion potential Vl and the developing voltage. In this case, the toner particles are sent, as a part of the toner image obtained by developing the electrostatic latent image, to the transfer portion P5. Incidentally, the voltage value of the developing voltage has the same polarity as the normal polarity of the toner, and is higher than the light portion potential Vl and lower than the dark portion potential Vd. Ideally, of the residual toner particles, the toner particles charged to the non-normal polarity and the toner particles whose charge amount is close to zero are changed in polarity to the normal polarity, and thus are collected by the developing roller 41 in the developing portion P4 without being deposited on the charging roller 2. However, when the residual toner charged to the non-normal polarity enters the charging portion P2 in a large amount, the residual toner which is not changed in polarity to the normal polarity in the charging portion P2 is liable to be deposited on the charging roller 2.
Further, when the residual toner which is not changed in polarity to the normal polarity in the charging portion P2 reaches the developing portion P4, the residual toner passes through the developing portion P4 without being collected by the developing roller 41. In this case, there is a possibility that contamination of the transfer roller 5 with the residual toner and an image defect (white background fog) such that a thin toner image is formed on a white background portion (non-image region) occur. In the following, the constituent elements of the image forming apparatus 100 will be specifically described.

(Brush Member)

First, the brush member 11 in this embodiment will be described. As shown in FIG. 2, the brush member 11 contacts the surface of the photosensitive drum 1 in a position downstream of the transfer portion P5 and upstream of the charging portion P2 with respect to the rotational direction R1 of the photosensitive drum 1. That is, the image forming apparatus 100 includes the brush member 11 disposed downstream of the transfer member and upstream of the charging member with respect to the rotational direction of the image bearing member and contacting the surface of the image bearing member. In the following, the region where the brush member 11 contacts the photosensitive drum 1 is referred to as the “brush contact portion”. Part (a) of FIG. 3 is a front view of the brush member 11 in a single body state (as viewed from one side with respect to the short-side direction). The single body state is a state in which the brush member 11 is not mounted in the image forming apparatus 100, i.e., a state in which no external force acts on the brush member 11. Part (b) of FIG. 3 is a sectional view of the brush member 11 in the single body state cut along a flat plane perpendicular to the longitudinal direction of the brush member 11. Part (c) of FIG. 3 is a sectional view of the brush member 11 in a state in which the brush member 11 is contacted to the photosensitive drum 1.
As shown in parts (a) to (c) of FIG. 3, the brush member 11 includes a base cloth 11b as a base portion and electroconductive threads (yarn) 11a as a bristle material (fiber) supported by the base cloth 11b. The base cloth 11b is formed of a synthetic resin fiber containing carbon black as an electroconductive agent. The electroconductive thread 11a is formed of, for example, a nylon fiber to which the electroconductive agent is added, and is textured and planted on the base cloth 11b. The material of the electroconductive thread 11a is not limited to nylon; other synthetic resin fibers such as rayon may be used. The brush member 11 is a thin, elongated member extending in a predetermined direction. In the following, this extension direction is referred to as the longitudinal direction LD of the brush member 11, and the direction along the base cloth 11b and perpendicular to the longitudinal direction LD is referred to as the short direction SD of the brush member 11. In the state in which no external force acts on the brush member 11 (part (b) of FIG. 3), the electroconductive threads 11a project in a direction (direction normal to the base cloth 11b) substantially perpendicular to both the longitudinal direction LD and the short direction SD. As shown in part (c) of FIG. 3, the brush member 11 is disposed in an attitude such that the longitudinal direction LD is substantially parallel to the rotational axis direction of the photosensitive drum 1. As shown in part (b) of FIG. 3, the distance from the base cloth 11b to a free end of the electroconductive thread 11a in the brush member 11 in the single body state is the bristle height L1. The bristle height L1 of the brush member 11 in this embodiment is 5.75 mm. The brush member 11 is supported by a supporting member 11c, mounted in the image forming apparatus 100 at a predetermined position, to which the base cloth 11b is fixed by a fixing means such as double-sided tape.
The position of the supporting member 11c is set so that the free ends of the electroconductive threads 11a enter the photosensitive drum 1. For this reason, the brush member 11 is in a state in which the free ends of the electroconductive threads 11a are pressed against the surface of the photosensitive drum 1 and are flexed (bent). In this embodiment, the fixing surface of the base cloth 11b to the supporting member 11c is disposed substantially parallel to the surface of the photosensitive drum 1, so that the distance (clearance) between the supporting member 11c and the photosensitive drum 1 is substantially constant. That is, as viewed in the longitudinal direction LD, a rectilinear line passing from the rotational axis of the photosensitive drum 1 through the center position of the base cloth 11b with respect to the short direction SD is perpendicular to the fixing surface of the supporting member 11c. Further, in this embodiment, the minimum distance from the base cloth 11b of the brush member 11, fixed to the supporting member 11c, to the photosensitive drum 1 is taken as L2. In this embodiment, the difference between L1 and L2 is defined as the maximum penetration amount of the brush member 11 into the photosensitive drum 1. Here, L2 < L1 holds. In this embodiment, the maximum penetration amount of the brush member 11 into the photosensitive drum 1 is, for example, 1.2 mm. Further, in this embodiment, as shown in part (b) of FIG. 3, as regards the brush member 11 in the single body state, the length of the brush member 11 with respect to the short direction SD, which is the short-side width L3, is, for example, 4 mm. As shown in part (c) of FIG. 3, in the state in which the brush member 11 is pressed against the photosensitive drum 1, the occupied width of the electroconductive threads 11a with respect to the short direction SD is about 5 mm to about 6 mm. Further, in this embodiment, the length L4 of the brush member 11 with respect to the longitudinal direction LD is 216 mm.
The length L4 is set so that, with respect to the longitudinal direction LD, the brush member 11 is capable of contacting the entire area of the image forming region (toner image formable region, the maximum region of the latent image formed by the exposure device 3) on the photosensitive drum 1. Further, in this embodiment, the thickness of the electroconductive threads 11a is, for example, 2 denier, and the density of the electroconductive threads 11a is, for example, 240 kF/inch². The thickness and density of the electroconductive threads 11a are capable of being appropriately changed as long as the electroconductive threads 11a satisfy the function required for the brush member 11. As an example, it is preferable that the thickness of the electroconductive threads 11a is 1 denier or more and 6 denier or less and that the density of the electroconductive threads 11a is 150 kF/inch² or more and 350 kF/inch² or less. Incidentally, 1 kF/inch² represents a planting density of 1000 fibers per square inch. Incidentally, in the case of the nylon electroconductive threads 11a used in this embodiment, when 1 to 6 denier, which is a unit of direct yarn count (linear density), is converted into a fiber diameter, 1 to 6 denier corresponds to about 10 μm to about 30 μm. For this reason, in the case where a bristle material other than nylon is used for the brush member, it is possible to use a bristle material with a thickness of 1 denier or more and 6 denier or less in terms of direct yarn count and of 10 μm or more and 30 μm or less in terms of the fiber diameter. “1 denier or more and 6 denier or less” can be said as “1.1 decitex or more and 6.7 decitex or less”. Further, 1 inch² is about 6.45 cm², and therefore, “150 kF/inch² or more and 350 kF/inch² or less” can be said as “23 kF/cm² or more and 54 kF/cm² or less”. In this embodiment, the brush member 11 is constituted so as to permit the passing of the residual toner while scattering the residual toner deposited on the surface of the photosensitive drum 1 having passed through the transfer portion P5.
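The denier-to-diameter conversion above can be checked numerically. The following is a minimal sketch, assuming a solid circular fiber cross section and a bulk density of about 1.14 g/cm³ for nylon (an assumed typical value, not stated in the text):

```python
import math

def denier_to_diameter_um(denier: float, density_g_cm3: float = 1.14) -> float:
    """Convert direct yarn count (denier) to fiber diameter in micrometers.

    1 denier = 1 g per 9000 m of fiber. Assuming a solid circular cross
    section, the diameter follows from the linear density and the bulk
    density of the material (1.14 g/cm^3 is an assumed value for nylon).
    """
    grams_per_cm = denier / 9.0e5            # 9000 m = 9e5 cm
    area_cm2 = grams_per_cm / density_g_cm3  # cross-sectional area in cm^2
    diameter_cm = 2.0 * math.sqrt(area_cm2 / math.pi)
    return diameter_cm * 1.0e4               # cm -> micrometers

# 1 denier and 6 denier come out near 11 um and 27 um for nylon,
# consistent with the "about 10 um to about 30 um" range in the text.
print(round(denier_to_diameter_um(1), 1), round(denier_to_diameter_um(6), 1))
```

The same relation also reproduces the decitex conversion in the text, since 1 denier equals 10/9 decitex.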
For that reason, in the case where the electroconductive threads 11a are excessively thick, there is a possibility that the residual toner cannot be uniformly scattered and passes through the brush contact portion in a stripe shape, thus leading to stripe-shaped contamination of the charging roller 2 with the residual toner. Further, in the case where the electroconductive threads 11a are excessively high in density, the residual toner is blocked by the brush contact portion, and thus not only constitutes an obstacle to collection of the residual toner by the developing roller 41 but also contaminates the inside of the image forming apparatus by being dropped or scattered from the photosensitive drum 1. Further, the brush member 11 is constituted so as to triboelectrically charge the residual toner in the brush contact portion. For that reason, in the case where the electroconductive threads 11a are excessively thin, there is a possibility that the electroconductive threads 11a are readily flexed when the residual toner contacts them and thus escape from the residual toner, so that the toner particles are not rolled and the residual toner is not sufficiently triboelectrically charged. Further, in the case where the electroconductive threads 11a are excessively low in density, the frequency of collision with the electroconductive threads 11a becomes low, so that there is a possibility that the residual toner cannot be sufficiently triboelectrically charged. In the above description, preferred ranges of the thickness and the density of the electroconductive threads 11a were described from the viewpoints of the function of scattering the residual toner and the function of triboelectrically charging the residual toner; however, depending on the functions required for the brush member 11, details of the thickness, the density, the material, the bristle height, and the like can be appropriately changed.
Incidentally, the brush member 11 in this embodiment may have a function of blocking a foreign matter (for example, paper powder) other than the residual toner in the brush contact portion.

(Developer)

In this embodiment, as the developer, the toner T, which is a one-component developer whose normal polarity is the negative polarity, is used. For that reason, in the following description of this embodiment, the “negative polarity” is synonymous with the normal polarity of the toner T and the “positive polarity” is synonymous with the non-normal polarity of the toner T unless otherwise specified. The toner T contains a binder resin and a colorant and may further contain a parting agent, a charge control agent, and an external additive as desired. As the binder resin, a styrene-acrylic resin or a polyester resin, which are positioned in a lower rank (negative polarity side) in the charging series than nylon and rayon, can be preferably used. That is, the main component (binder resin) of the toner T may desirably be positioned on the negative polarity side (lower rank) relative to the material of the fibers (bristle materials) of the brush member 11 in the charging series. In this embodiment, as the binder resin of the toner, the styrene-acrylic resin is employed. As the colorant, a known colorant can be used. For example, a dye and a pigment are cited. As the charge control agent, a known charge control agent can be used. The charge control agent has an acid value or a hydroxyl value and may preferably have negative chargeability equivalent to or stronger than that of the binder resin. As the external additive, a known external additive can be used. For example, silica, alumina, titania, titanium composite oxide, and the like are cited. The colorant and the parting agent may preferably be included in the binder resin so as not to influence the charge polarity of the toner particle surfaces. Further, as the toner, a polymerization toner formed by a polymerization method can be employed.
The toner T with a particle size (volume-average particle size) of 4-10 μm, preferably 6-8 μm, may preferably be used. In this embodiment, a spherical toner prepared by the polymerization method and with a particle size of 7 μm is used. Further, the toner T in this embodiment is a so-called non-magnetic one-component developer which does not contain a magnetic component and which is carried on the developing roller 41 principally by an intermolecular force or an electrostatic force (mirror force). However, as the developer, a one-component developer consisting of toner containing a magnetic component may be used. Further, as the developer, a two-component developer constituted by a non-magnetic toner and a magnetic carrier may be used. In the case where a magnetic developer is used, as the developer carrying member, for example, a cylindrical developing sleeve in which a magnet is provided is used. Further, to the developer, in addition to the toner and the carrier, an additive (for example, a wax or silica fine particles) for adjusting flowability, charging performance, and the like of the toner may be added.

(Photosensitive Drum)

The photosensitive drum 1 is prepared by successively laminating an undercoat layer, a charge generation layer, and a charge transport layer on a cylindrical electroconductive supporting member (core metal) as the lowermost layer. The charge transport layer is formed by coating and drying a paint prepared by mixing principally a charge transporting material and a binder resin in a solvent. As the charge transporting material principally used, a known charge transporting material can be used. For example, various triarylamine compounds and hydrazone compounds are cited. Further, as the binder resin, for example, a polycarbonate resin, a polyarylate resin, and the like are cited.
The portion predominantly contributing to the triboelectric charging with the toner is the charge transport layer, which is the surface layer (outermost layer), and in particular the binder resin occupying most thereof. Here, the above-cited polycarbonate resin and polyarylate resin are positioned on the non-normal polarity side (upper rank) relative to the styrene-acrylic resin, which is the binder resin of the toner T, in the charging series. That is, the outermost layer of the image bearing member may preferably be formed of a material capable of triboelectrically charging the toner T to the normal polarity in the case where the outermost layer is rubbed with the resin which is the main component of the toner T. In this embodiment, the polycarbonate resin was selected as the binder resin of the outermost layer. Further, in this embodiment, as the photosensitive drum 1, a cylindrical photosensitive drum with an outer diameter of 24 mm is used. Depending on the outer diameter of the photosensitive drum 1, the manner of contact (pressing) of the brush member 11 (for example, the above-described penetration amount or an angle described later in a third embodiment) is appropriately changed.

(Charging Roller)

The charging roller 2 in this embodiment will be described. The charging roller 2 includes a core metal as an electroconductive supporting member, a 2 mm-thick elastic layer provided on an outer periphery thereof, and a 25 μm-thick resin layer as a surface layer provided on an outer periphery of the elastic layer. The surface of the surface layer is the surface contacting the photosensitive drum 1 and causing electric discharge toward the photosensitive drum 1. The elastic layer is formed of an electron-conductive rubber material. The electron-conductive rubber material is, for example, a material in which carbon black is dispersed as electroconductive particles (an electron-conductive agent) in a binder polymer which itself does not assume electroconductivity and in which the electric resistance is adjusted.
As the binder polymer, a known binder polymer used in the electroconductive elastic layer of a charging roller for an electrophotographic apparatus can be used. For example, a hydrin rubber, a butadiene rubber, and the like are cited. In this embodiment, the hydrin rubber was selected. The kind of carbon black mixed in the elastic layer is not particularly limited so long as the carbon black is an electroconductive carbon black capable of imparting electroconductivity to the elastic layer. Further, to the elastic layer, as compounding agents, general-purpose agents such as a filler, a processing aid, a cross-linking aid, a cross-linking retardant, a softening agent, a dispersing agent, a colorant, and the like may be added as desired. As the resin of the surface layer, a resin material positioned on the non-normal polarity side (upper rank) relative to the main component (binder resin) of the toner T in the charging series is used. For example, the surface layer is formed by coating the outer periphery of the elastic layer with a resin material, for example, polycarbonate urethane, possessing electroconductivity. The residual toner can be triboelectrically charged to the normal polarity in the charging portion P2 by forming the surface layers of the charging roller 2 and the photosensitive drum 1 with the above-described materials and by setting the peripheral speed difference between the charging roller 2 and the photosensitive drum 1 as described above. Further, to the surface layer of the charging roller 2, roughening particles of a polarity such that the triboelectric chargeability is not impaired can be added. For example, there is also a method in which a polycarbonate urethane resin similar to the polycarbonate urethane resin of the surface layer is formed into particles and in which the particles are dispersed.
That is, the charging roller 2 is not required to closely contact the surface of the photosensitive drum 1 in the charging portion P2, so that a constitution in which the charging roller 2 contacts the surface of the photosensitive drum 1 at mountain portions of the unevenness formed by the roughening particles can be employed.

(Transfer Roller)

The transfer roller 5 is a roller-type transfer member disposed opposed to the photosensitive drum 1. The transfer roller 5 is pressed against the photosensitive drum 1 at a predetermined pressure. The transfer roller 5 of this embodiment is an elastic roller of 12 mm in outer diameter, in which a sponge rubber of an electroconductive nitrile-butadiene hydrin rubber type is formed around a core metal.

(Contact Condition of Brush Member)

In this embodiment, the brush member 11 imparts electric charge to the residual toner through triboelectric charging while scattering the residual toner on the photosensitive drum 1. At this time, in order to impart negative electric charge to the residual toner through the triboelectric charging, as the material of the bristle material (electroconductive threads 11a) of the brush member 11, a material positioned on the positive polarity side (upper rank) relative to the main component of the toner T in the charging series is used. Further, the contact pressure between the brush member 11 and the photosensitive drum 1 in the brush contact portion is ensured so that the electroconductive threads 11a rub the residual toner with a sufficient force. As regards the charging series, the main component of the toner T in this embodiment is the styrene-acrylic resin. The bristle material of the brush member 11 may desirably be a material, such as nylon or rayon, relative to which the styrene-acrylic resin is positioned on the negative polarity side (lower rank) and from which the difference in the charging series is large.
In this embodiment, the nylon resin was selected as the main component of the electroconductive threads 11a as described above. Polyester fibers and acrylic fibers are not desired as the material of the electroconductive threads 11a, since the styrene-acrylic resin is positioned on the positive polarity side relative thereto in the charging series and the difference in the charging series is also small. However, in the case where the main component of the toner T is different, the polyester fibers or the acrylic fibers can be used as the material of the electroconductive threads 11a in some instances. Incidentally, the surface layer of the photosensitive drum 1 can also influence the triboelectric charging of the toner T in the brush contact portion. For that reason, the main component of the surface layer of the photosensitive drum 1 may preferably be a material which is positioned on the positive polarity side relative to the main component of the toner T in the charging series. In this embodiment, as described above, the main component of the surface layer of the photosensitive drum 1 is the polycarbonate resin. The contact condition of the brush member 11 in the brush contact portion will be further described. In order to study the physical properties (parameters) showing the contact condition of the brush member 11, samples 1 to 4 of four levels different in thickness and density of the bristle material of the brush member 11 were prepared. The sample 1 is a brush member 11 whose bristle material is thick and low in density. The sample 2 is a brush member 11 whose bristle material is thin and at a medium level in density. The sample 3 is a brush member 11 whose bristle material is thin and high in density. The sample 4 is a brush member 11 whose bristle material is at a medium level in thickness and low in density.
Then, the brush member 11 of each of the samples was contacted to the photosensitive drum 1, and the peak pressure and the maximum contact area ratio in the brush contact portion were calculated by the following methods. Incidentally, the peak pressure is the maximum value of the average contact pressure in a region of a width of 1 mm of the brush contact portion with respect to the short direction, and the maximum contact area ratio is the contact area ratio between the brush member 11 and the photosensitive drum 1 in the region of the width of 1 mm in which the peak pressure is obtained. The calculating method of the peak pressure is as follows. As shown in FIG. 4, with use of a compression test jig for a compact table-top tester (“EZTest” manufactured by Shimadzu Corporation), the normal reaction when a pressing plate was pressed into the brush member 11 while adjusting the flow of bristles (fibers) of the brush member 11 placed horizontally was measured, and then the relationship between the penetration amount and the normal reaction was obtained. On the other hand, as shown in FIG. 5, a glass plate was press-contacted to the brush member 11 so as to uniformize the flow of the bristles of the brush member 11 while moving the glass plate in the horizontal direction, and the contact width of the brush member 11 with respect to the short direction SD was measured by observation through a microscope. In the case where the brush member 11 is prepared so that the density and the thickness of the bristle material thereof are uniform, the peak pressure can be calculated by (formula 1) to (formula 3) below. First, in a state in which an object is press-contacted to the brush member 11 with a predetermined penetration amount, the average (average pressure) of the contact pressure in the brush contact portion can be represented by (formula 1).
In (formula 1), the normal reaction and the contact width are values measured in a state in which the pressing plate (FIG. 4) or the glass plate (FIG. 5) is press-contacted to the brush member 11 with the predetermined penetration amount.

(average pressure) = (normal reaction)/((contact width) × (longitudinal width)) (gf/mm²) ... (formula 1)

In the actual brush contact portion between the brush member 11 and the photosensitive drum 1, the contact pressure becomes maximal in the portion where the penetration amount of the brush member 11 into the photosensitive drum 1 is largest. The maximum value of this contact pressure is referred to as the peak pressure. The peak pressure is calculated by (formula 3) with use of the average penetration amount of (formula 2), obtained from the maximum penetration amount and the minimum penetration amount of the brush member 11.

(average penetration amount) = ((maximum penetration amount) + (minimum penetration amount))/2 (mm) ... (formula 2)

(peak pressure) = (average pressure) × (maximum penetration amount)/(average penetration amount) (gf/mm²) ... (formula 3)

The above-described calculating method means in actuality that the contact pressure applied to the surface of the photosensitive drum 1, which draws an arc as shown in FIG. 6, is linearly approximated. Specifically, it is assumed that the brush member 11 of 4 mm in short-side width L3 is contacted to the photosensitive drum 1 of 24 mm in diameter from immediately above the photosensitive drum 1 so that the penetration amount becomes 1.2 mm in the center portion with respect to the short direction SD. In this case, the maximum penetration amount is 1.2 mm, the minimum penetration amount is 1.03 mm, and the average penetration amount is 1.115 mm, so that the peak pressure can be calculated using (formula 3).
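The geometry and (formula 2)/(formula 3) above can be sketched in a few lines. This is a minimal illustration using the embodiment's values (24 mm drum, 4 mm short width, 1.2 mm maximum penetration); the average-pressure input of 2.0 gf/mm² is a hypothetical stand-in for the value that the compression test of (formula 1) would actually provide:

```python
import math

def min_penetration(drum_dia_mm: float, short_width_mm: float,
                    max_pen_mm: float) -> float:
    """Minimum penetration at the short-direction edge of the brush.

    The drum surface falls away from the brush edge by the sagitta of
    the arc (FIG. 6), so the edge bites in less than the center does.
    """
    r = drum_dia_mm / 2.0
    half_w = short_width_mm / 2.0
    sagitta = r - math.sqrt(r * r - half_w * half_w)
    return max_pen_mm - sagitta

def peak_pressure(avg_pressure_gf_mm2: float, max_pen_mm: float,
                  min_pen_mm: float) -> float:
    """(formula 2) and (formula 3): linear approximation of the contact
    pressure over the arc-shaped contact region."""
    avg_pen = (max_pen_mm + min_pen_mm) / 2.0               # (formula 2)
    return avg_pressure_gf_mm2 * max_pen_mm / avg_pen       # (formula 3)

p_min = min_penetration(24.0, 4.0, 1.2)   # about 1.03 mm, as in the text
# 2.0 gf/mm^2 is a hypothetical measured average pressure for illustration.
print(round(p_min, 2), round(peak_pressure(2.0, 1.2, p_min), 2))
```

With these inputs, the minimum and average penetration amounts reproduce the 1.03 mm and 1.115 mm figures stated in the text.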
The contact area ratio was discriminated by a color tint between a portion (contact portion) where the bristle material of the brush member 11 contacts the glass plate and a portion (non-contact portion) where the bristle material of the brush member 11 does not contact the glass plate when the brush member 11 is contacted to the glass plate as shown in FIG. 5. Part (a) of FIG. 7 is an actual photograph observed through a microscope, and part (b) of FIG. 7 is an image obtained by subjecting the photograph of part (a) of FIG. 7 to binarization so that the contact portion becomes white and the non-contact portion becomes black. The contact area ratio is the ratio of the area of the contact portion to the area of the observation object (i.e., the ratio obtained by dividing the number of white pixels by the number of all the pixels). The maximum contact area ratio is obtained in the position where the peak pressure is obtained, that is, in the center portion with respect to the short direction SD. In summary, as regards an image forming apparatus including the brush member 11 as in this embodiment, the peak pressure and the maximum contact area ratio can be checked by the following procedure.

1) The outer diameter of the photosensitive drum 1, the bristle height L1 and the short width L3 of the brush member 11, and the shortest distance L2 from the base cloth 11b of the brush member 11 to the surface of the photosensitive drum 1 are measured, and the maximum penetration amount (L1 − L2) is calculated (see parts (b) and (c) of FIG. 3).

2) From the outer diameter of the photosensitive drum 1 and the short width L3 and the maximum penetration amount of the brush member 11 which are measured in the above-described 1), the minimum penetration amount is calculated on the basis of the geometrical relationship of FIG. 6.

3) From the maximum penetration amount and the minimum penetration amount which are acquired in the above-described 1) and 2), the average penetration amount is calculated on the basis of (formula 2).
4) A compression test of the brush member 11 is conducted by the method of FIGS. 4 and 5 with use of the average penetration amount calculated in the above-described 3), so that the average pressure is acquired on the basis of (formula 1).

5) By using the maximum penetration amount, the average penetration amount, and the average pressure which are acquired in the above-described 1), 3), and 4), the peak pressure is calculated on the basis of (formula 3).

6) At the maximum penetration amount acquired in the above-described 1), the brush member 11 is press-contacted to the glass plate by the method of FIG. 5, and then the contact surface is observed, so that the maximum contact area ratio is calculated.

Further, in order that the peak pressure and the maximum contact area ratio of the brush member 11 are made desired values, parameters such as the outer diameter of the photosensitive drum 1, the bristle height L1 and the short width L3 of the brush member 11, and the above-described shortest distance L2 are appropriately changed, and the peak pressure and the maximum contact area ratio may then be checked by the above-described procedure. FIG. 1 is a graph in which, by the above-described method, the peak pressure and the maximum contact area ratio were calculated for each of the samples under a plurality of different conditions, and in which values of the peak pressure are plotted on the ordinate and values of the maximum contact area ratio are plotted on the abscissa. In FIG. 1, points indicated by black marks represent that the image defect did not occur, and points indicated by white marks represent that the image defect occurred. As shown in FIG. 1, in the case where the peak pressure is less than 0.7 gf/mm² and in the case where the maximum contact area ratio is less than 18%, the image defect occurred.
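The white-pixel counting in step 6) can be illustrated with a small sketch. The threshold value and the synthetic 4×4 "image" below are assumptions for illustration only; a real micrograph of part (a) of FIG. 7 would need a threshold tuned to the actual lighting:

```python
import numpy as np

def contact_area_ratio(gray: np.ndarray, threshold: int = 128) -> float:
    """Binarize a grayscale micrograph of the brush pressed against a
    glass plate and return (number of white pixels) / (all pixels),
    per the contact area ratio definition in the text.

    Bright pixels are taken as the contact portion, dark pixels as the
    non-contact portion; the threshold of 128 is an assumed value.
    """
    binary = gray >= threshold    # True where a fiber contacts the glass
    return float(binary.mean())   # fraction of white pixels

# Synthetic 4x4 "image": 6 bright pixels out of 16 -> ratio 0.375
img = np.array([[200,  30,  30, 210],
                [ 40, 220,  30,  40],
                [ 30,  30, 200,  30],
                [210,  40,  30, 200]], dtype=np.uint8)
print(contact_area_ratio(img))
```

The maximum contact area ratio would then be this ratio evaluated on the 1 mm-wide strip, with respect to the short direction, in which the peak pressure is obtained.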
This would be considered to be because, in the case where the peak pressure is excessively low, the degree of contact of the bristle material of the brush member 11 with the toner particles becomes weak, and as a result the action for triboelectrically charging the residual toner to the normal polarity becomes insufficient. Further, this would be considered to be because, in the case where the maximum contact area ratio is excessively low, the frequency of contact of the toner particles with the bristle material of the brush member 11 becomes low in the region where the peak pressure is applied, which is the region where the triboelectric charging most easily progresses, and as a result the action for triboelectrically charging the residual toner to the normal polarity becomes insufficient. In either case, when the residual toner reaches the charging portion P2 without being sufficiently supplied with electric charges of the negative polarity in the brush contact portion, the residual toner charged to the non-normal polarity or the residual toner whose charge amount is close to zero is deposited on the charging roller 2, so that contamination of the charging roller 2 progresses. Further, in the case where the peak pressure is higher than 3.5 gf/mm² and in the case where the maximum contact area ratio is higher than 74%, the image defect occurred. This would be considered to be because, in either case of the excessively high peak pressure and the excessively high maximum contact area ratio, a part of the brush contact portion is in a state in which the residual toner cannot pass through that part, and thus the residual toner concentratedly passes through a portion (a portion where the contact pressure or the density of the bristle material is relatively low) through which the residual toner is capable of passing.
In this case, the surface of the photosensitive drum1passed through the brush contact portion is in a state in which the residual toner is deposited in a stripe shape (linear shape extending in the rotational direction), so that the charging roller2is contaminated with the residual toner in the stripe shape. Incidentally, in a region where the peak pressure or the maximum contact area ratio is particularly high, the residual toner is blocked by the brush member11, so that there is a possibility that not only collection of the residual toner by the developing roller41is obstructed but also the blocked toner is scattered and an inside of the image forming apparatus is contaminated with the scattered toner. Accordingly, it is desired that the brush member11is constituted so that the peak pressure and the maximum contact area ratio in the brush contact portion fall within the following region enclosed by a broken line ofFIG.1.(peak pressure): 0.7 gf/mm2or more and 3.5 gf/mm2or less(maximum contact area ratio): 18% or more and 74% or less By this, in the cleaner-less brush type in which the residual toner deposited on the surface of the photosensitive drum1passed through the transfer portion P5is scattered by the brush member11, an electric charge distribution of the residual toner can be stabilized by the normal polarity. In other words, the electric charge distribution of the residual toner after passing through the brush contact portion can be made a distribution which has a peak on the normal polarity side (negative polarity side) of the toner T and which is sharp compared with the electric charge distribution of the residual toner before entering the brush contact portion. Incidentally, in the region enclosed by the broken line, the above-described image defect occurs less readily in a central portion than in a peripheral portion.
For that reason, it is preferable that the brush member11is constituted so that the peak pressure and/or the maximum contact area ratio further falls within the following ranges.(peak pressure): 1.4 gf/mm2or more and 2.8 gf/mm2or less(maximum contact area ratio): 32% or more and 60% or less Here, 1 gf is approximately equal to 9.8 mN (millinewton), so that "0.7 gf/mm2or more and 3.5 gf/mm2or less" can be expressed as "6.9 mN/mm2or more and 34 mN/mm2or less". Similarly, "1.4 gf/mm2or more and 2.8 gf/mm2or less" can be expressed as "14 mN/mm2or more and 28 mN/mm2or less". Further, in this embodiment, as an index indicating whether or not the brush member11uniformly contacts the photosensitive drum1, the Clark-Evans index is introduced. The Clark-Evans index represents a tendency as to whether, in the case where a plurality of points are distributed in a certain flat surface region, the points are distributed locally and concentratedly or are distributed spaced apart from each other. A calculating method of the Clark-Evans index will be described. First, when a distance from a point i to a nearest adjacent point is di and the number of the points is n, an average value (average nearest adjacent distance) W of distances from each of the points to an associated nearest adjacent point is represented by the following formula (numerical formula 1). W = (1/n)·Σ(i=1 to n) di . . . (numerical formula 1) Here, as an evaluation criterion, the case where the points are randomly distributed on a flat surface with an area S (in accordance with a uniform Poisson distribution) will be considered. In this case, an expected value E(W) of the average nearest adjacent distance W is represented by the following formula (numerical formula 2).
E(W) ≈ 1/(2√(n/S)) . . . (numerical formula 2) In order to compare the cases where the numbers of the points and the densities are different from each other, a value w obtained by standardizing the average nearest adjacent distance W with the expected value E(W) as represented by the following formula (numerical formula 3) is referred to as the Clark-Evans index. w = W/E(W) . . . (numerical formula 3) In order to acquire the Clark-Evans index of the brush member11, as shown inFIG.5, the brush member11is pressed against the glass plate surface, and the brush contact portion is observed through the glass plate surface from a side opposite from the side where the brush member11is pressed against the glass plate surface. In the brush contact portion, when the free ends of the bristle material in a certain area (100 mm2) are represented by points, a distribution of the points as shown in part (c) ofFIG.7is obtained. From this distribution of the points, the Clark-Evans index is calculated using the above-described formulas (numerical formulas 1 to 3). Incidentally, a part of the bristle material contacts the glass (plate) surface even at a portion closer to a base than the free ends are. As regards the action by which the toner T is triboelectrically charged by such a bristle material, it would be considered that the contribution of a portion (most pressing point) where the bristle material most strongly contacts the glass surface is large. However, the distribution of the most pressing points of the bristle material is mostly common to a distribution of the free ends of the bristle material, and the property of the distribution is substantially unchanged. For that reason, in this embodiment, the Clark-Evans index is calculated from the free end distribution of each bristle material contacting the glass surface. As regards the Clark-Evans index, w=1 holds in the case of a random distribution, w<1 holds in the case of a concentrated distribution, and w>1 holds in the case of a regular distribution.
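The standardization in numerical formulas 1 to 3 is straightforward to reproduce. The sketch below is a minimal illustration under the assumption that the observed free ends are available as (x, y) coordinates; the function name and the coordinate format are ours, not the patent's.

```python
import math

def clark_evans_index(points, area):
    """Clark-Evans index w = W / E(W) for a set of points in a region of area S.

    points: list of (x, y) free-end positions observed through the glass plate
    area:   observed area S, in the square of the coordinate unit
    """
    n = len(points)
    # numerical formula 1: average nearest adjacent distance W
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        total += min(math.hypot(xi - xj, yi - yj)
                     for j, (xj, yj) in enumerate(points) if j != i)
    w_avg = total / n
    # numerical formula 2: expected value E(W) for a uniform Poisson pattern
    expected = 1.0 / (2.0 * math.sqrt(n / area))
    # numerical formula 3: standardized index w
    return w_avg / expected
```

Consistent with the classification above, a regular lattice of free ends yields w well above 1, while free ends gathered into a bundle yield w well below 1.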
An extreme example of the regular distribution is a lattice-shaped distribution over an entire area of an observation object. An extreme example of the concentrated distribution is a distribution such that the points are concentrated at a single portion or several portions in the area of the observation object. A result of acquisition of the Clark-Evans index for actual samples of the brush member11is as follows.Sample 1: w=1.01Sample 2: w=1.13Sample 3: w=1.15Sample 4: w=1.07Sample obtained by intentionally twisting sample 2 in bundle: w=0.7 From a property of the Clark-Evans index, when w>1 holds, it can be said that the free ends of the bristle materials of the brush member11are loosened without making a bundle. On the other hand, when w<1 holds, it is suggested that the bristle materials of the brush member11make a bundle (aggregation, lump) due to some cause. In order to cause the brush member11to normally function, it is required that the bristle materials of the brush member11contact the surface of the photosensitive drum1in a loosened state without making a bundle. In this embodiment, the brush member11is constituted so that the Clark-Evans index is 1 or more (w≥1). This condition can be said to be a condition for ensuring that the bristle materials do not make a bundle due to some cause. Incidentally, depending on the constitution of the brush member11, it would also be considered that even in the brush contact portion, a value of the Clark-Evans index varies from place to place. In that case, the Clark-Evans index at a portion (place where the contact pressure is the peak pressure) where the penetration amount of the brush member11is largest is 1 or more. This is because at the portion where the penetration amount is largest, a bundle of the bristle materials is liable to be formed by a force received by the bristle materials.
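Gathering the numerical conditions of this embodiment in one place, the following sketch checks whether a measured sample falls in the defect-free window; the function name and the unit conventions (gf/mm2, percent, dimensionless w) are our assumptions for illustration.

```python
GF_TO_MN = 9.80665  # 1 gf expressed in millinewtons

def brush_condition_ok(peak_pressure_gf_mm2, max_contact_area_pct, clark_evans_w):
    """True when the brush contact condition satisfies this embodiment's ranges."""
    return (0.7 <= peak_pressure_gf_mm2 <= 3.5        # peak pressure window
            and 18.0 <= max_contact_area_pct <= 74.0  # maximum contact area ratio window
            and clark_evans_w >= 1.0)                 # bristles not bundled
```

For example, a sample at 2.0 gf/mm2, 50% contact area ratio, and w = 1.07 passes, while the intentionally twisted sample (w = 0.7) fails even at an otherwise suitable pressure.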
(Qualitative Description of Phenomenon) How a phenomenon changes depending on the calculated peak pressure and the calculated maximum contact area ratio will be described. Schematic views when the peak pressure or the maximum contact area ratio is insufficient (reference example 1) are shown inFIGS.8and9.FIG.8is the schematic view showing principal members of the image forming portion101.FIG.9is an enlarged view of a part ofFIG.8, in which the electric charges of the toner are represented by three kinds of the positive polarity (+), the negative polarity (−), and weak electric charge (0). As shown inFIG.9, as the residual toner, toner particles having a broad electric charge distribution enter the brush member11. In this example, the peak pressure or the maximum contact area ratio of the brush member11is insufficient and thus the triboelectric chargeability is low, and therefore, the residual toner passes through the brush contact portion while maintaining the broad electric charge distribution. Thereafter, when the residual toner reaches the charging portion P2, as described above, particularly, the toner charged to the positive polarity causes improper charging due to deposition thereof on the charging roller2and causes white background fog due to collection failure thereof by the developing roller41in some instances. Further, when the toner with the weak electric charge is large in amount, the toner cannot be sufficiently imparted with the electric charge of the negative polarity in the charging portion P2in some cases, and in that case, the toner with the weak electric charge is not readily collected by the developing roller41. Next, a phenomenon of the case (reference example 2) where the peak pressure or the maximum contact area ratio is excessively high will be described.
Part (a) ofFIG.10is a schematic view of a model test in which, inFIG.5, the glass plate as a model is strongly pressed against the brush member11(maximum penetration amount: 3 mm) and then the toner is supplied. With movement of the glass plate, the toner is supplied, so that a state in which a flow of the toner T is created is observed. When the peak pressure or the maximum contact area ratio is high, the brush member11is strongly pressed against the photosensitive drum1. As in a place indicated by X in part (a) ofFIG.10, at a portion of the brush contact portion where the bristle materials (electroconductive threads11a) make a bundle (aggregate), the brush member11is more strongly pressed against the photosensitive drum1, so that passing of the toner T is strongly restricted. On the other hand, in a place indicated by Y, the electroconductive threads do not make the bundle, and a flow of the toner T is concentrated, so that the toner T passes through the place in a large amount. At x of part (b) ofFIG.10, on the surface of the photosensitive drum1passed through the brush contact portion, an amount and electric charges of the toner deposited on a portion corresponding to the place indicated by X in part (a) ofFIG.10are schematically shown. At y of part (b) ofFIG.10, on the surface of the photosensitive drum1passed through the brush contact portion, an amount and electric charges of the toner deposited on a portion corresponding to the place indicated by Y in part (a) ofFIG.10are schematically shown. In the place (X, x) through which the toner T does not readily pass, the toner T passes through the place while being strongly rubbed with the brush member11, and therefore, the electric charges of the negative polarity are imparted to most of the toner T.
On the other hand, in the place (Y, y) through which the toner concentratedly passes, a part of the toner T is liable to pass through the place without being strongly rubbed with the brush member11, and the toner amount increases. Therefore, the residual toner deposited in the stripe shape on the place (Y, y) through which the toner concentratedly passes is deposited on the charging roller2, so that contamination of the charging roller2with the residual toner is liable to occur in the stripe shape. FIGS.11and12are schematic views showing this embodiment.FIG.11is the schematic view showing principal members of the image forming portion101.FIG.12is an enlarged view of a part ofFIG.11, in which similarly as inFIG.9, the electric charges of the toner are shown by being divided into the three kinds. Compared withFIGS.8and9, a contact condition is set so that the brush member11is strongly pressed against the photosensitive drum1and a state in which the peak pressure and the maximum contact area ratio are excessively high (FIG.10) is not formed. As shown inFIG.12, the brush member11in this embodiment is set to have an appropriate peak pressure and an appropriate maximum contact area ratio, and therefore, the residual toner with the broad electric charge distribution is triboelectrically charged by being rubbed with the bristle materials (electroconductive threads11a) of the brush member11when the residual toner passes through the brush contact portion. Further, the brush member11is constituted so that the peak pressure and the maximum contact area ratio do not become excessively high and the Clark-Evans index satisfies w≥1, and therefore, the residual toner passed through the brush contact portion is not concentrated in the stripe shape.
For this reason, contamination of the charging roller2with the residual toner and collection failure of the residual toner by the developing roller41are not readily caused, so that an image quality can be maintained at a high level for a long term. Second Embodiment A second embodiment of the present invention will be described. This embodiment is different from the first embodiment in that a voltage is applied to the brush member11. In the following, elements to which reference numerals or symbols common to the first and second embodiments are assigned have substantially the same constitutions and functions as those described in the first embodiment, and a portion different from the first embodiment will be principally described. As shown in a schematic view ofFIG.13, in the cleaner-less brush type, toner particles are entangled and caught in the neighborhood of a base of the bristle materials (electroconductive threads11a) of the brush member11in some instances. The toner T caught by this portion is basically pushed out toward a downstream of the rotational direction of the photosensitive drum1by new residual toner successively reaching the brush contact portion. However, compared with the toner T passing through the brush contact portion while being rolled by being rubbed with a free end portion of the bristle material of the brush member11, the toner pushed out after being caught by the base portion of the bristle material tends to have an insufficient amount of the electric charges of the normal polarity (negative polarity). Therefore, in this embodiment, in order to urge the toner T toward a region (toward the photosensitive drum1side) where the bristle material of the brush member11and the surface of the photosensitive drum1are in contact with each other, the voltage is applied to the brush member11. In this embodiment, during image formation, the surface of the photosensitive drum1is charged to the dark portion potential Vd of −700 V in the charging portion P2.
The image region on the photosensitive drum1is exposed to light by the exposure device3to have the light portion potential Vl of −100 V. Then, the photosensitive drum surface passes through the transfer portion P5where a transfer voltage of +1000 V is applied to the transfer roller5, so that the dark portion potential Vd becomes about −200 V, and the light portion potential Vl becomes about −50 V. Accordingly, the surface potential of the photosensitive drum1reaching the brush contact portion during the image formation becomes about −50 V to about −200 V. As shown inFIG.14, to the brush member11, a brush power source E11as a voltage applying means is electrically connected. During the image formation, to the brush member11, by the brush power source E11, a predetermined brush voltage E is applied. The predetermined brush voltage E is a potential of the same polarity as the normal polarity of the toner T relative to the surface potential of the photosensitive drum1reaching the brush contact portion during the image formation (particularly the potential after passing through the transfer portion, where the dark portion potential is higher than the light portion potential). In this embodiment, to the brush member11, a voltage of −400 V is applied. Of the residual toner entering the brush contact portion, the toner T charged to the negative polarity (normal polarity) is urged toward the photosensitive drum1side electrostatically in the brush contact portion by a potential difference between the brush voltage E (−400 V) and the surface potential (−50 V to −200 V) of the photosensitive drum1. By this, the toner T charged to the negative polarity is pressed against the photosensitive drum1and is rolled while contacting the bristle material of the brush member11and the surface of the photosensitive drum1, so that the negative toner T is sufficiently triboelectrically charged.
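The direction of the electrostatic urging follows directly from the potentials: a negatively charged particle drifts toward the higher (less negative) potential. A toy check under the example values above (brush voltage −400 V, drum surface about −50 V to −200 V); the helper name is hypothetical.

```python
def urged_toward_drum(brush_voltage, drum_surface_potential):
    """A negatively charged toner particle is urged toward whichever electrode
    is at the higher (less negative) potential; True means toward the drum."""
    return drum_surface_potential > brush_voltage
```

With the embodiment's values the drum surface potential is always higher than the brush voltage, so the normally charged toner is pressed toward the drum throughout the brush contact portion.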
By this, the toner T is sufficiently triboelectrically charged in the brush contact portion, so that the electric charge distribution of the residual toner can be stabilized in the normal polarity, and thus it is possible to suppress an occurrence of inconveniences such as the contamination of the charging roller2with the residual toner and the collection failure (improper collection) of the residual toner by the developing roller41. Here, the brush voltage E is set at such a value that electric discharge does not occur between the brush member11and the photosensitive drum1. This is because when unnecessary electric discharge occurs, the photosensitive drum1is contaminated with a discharge product and deterioration of the photosensitive drum1is accelerated. In this embodiment, similarly as in the first embodiment, the brush member11is disposed substantially in parallel to the surface of the photosensitive drum1. Accordingly, as shown inFIG.14, a contact pressure between the brush member11and the photosensitive drum1becomes maximum (peak value) in a center portion of the brush contact portion with respect to the short direction. Here, of the residual toner entering the brush contact portion, the toner T charged to the positive polarity (non-normal polarity) is liable to be attracted toward the base side of the bristle material during entrance thereof into the brush contact portion. The toner T attracted to the base side of the bristle material is moved downstream by being pushed out by the toner T newly supplied from an upstream side, and at that time, the moved toner T is triboelectrically charged to the negative polarity by being rubbed with the bristle material.
Thus, the toner T changed in polarity to the negative polarity during movement through the inside of the brush member11is pressed against the photosensitive drum1by the brush voltage E, and is triboelectrically charged by being rolled while contacting the bristle material of the brush member11and the surface of the photosensitive drum1. However, when the change in polarity of the toner T, attracted to the base side of the bristle material, to the negative polarity is delayed, there is a possibility that the toner T in a state in which a charge amount thereof is insufficient passes through the brush contact portion. Therefore, in order to quickly change the polarity of the toner, attracted to the base side of the bristle material, to the negative polarity, in this embodiment, it is desirable that a lower limit is set for each of the peak pressure and the contact area ratio of the brush member11in a most upstream position of the brush contact portion with respect to the rotational direction R1of the photosensitive drum1. As an example, setting was made so that the penetration amount of the brush member11in the most upstream position of the brush contact portion is 1.2 mm and so that the penetration amount of the brush member11becomes maximum in a center position of the brush contact portion with respect to the short direction (rotational direction R1) of the brush contact portion. By this, in the most upstream position of the brush contact portion, the following relationships are satisfied. (contact pressure)≥0.7 gf/mm2 (contact area ratio)≥18% Conditions of the peak pressure, the maximum contact area ratio, and the Clark-Evans index are similar to those in the first embodiment.
When the contact pressure and the contact area ratio in the most upstream position of the brush contact portion are set as described above, the polarity of the toner T can be quickly changed to the negative polarity at an upstream portion of the brush contact portion as shown inFIG.15, so that the toner T can be made less liable to accumulate at the base portion of the bristle material of the brush contact portion. Then, the toner T is sufficiently triboelectrically charged in the brush contact portion, so that the electric charge distribution of the residual toner can be further stabilized in the normal polarity. Incidentally, in this embodiment, the toner T is pressed against the photosensitive drum1side by the action of the brush voltage E, so that even in the case where the brush member11is not necessarily positioned on the positive polarity side (non-normal polarity side) relative to the toner T in the charging series, the polarity of the toner T can be changed to the negative polarity in the brush contact portion. However, a constitution in which the brush member11is positioned on the positive side relative to the toner T in the charging series is advantageous for changing the polarity of the toner T to the negative polarity. Further, in this embodiment, the principal action of the brush voltage E was described from the viewpoint that the toner T entering the inside of the brush member11is pressed toward the photosensitive drum1side. The present invention is not limited thereto. The change in polarity of the toner T to the negative polarity may be accelerated by injecting (supplying) the electric charge of the normal polarity into the toner T through the brush member11under application of the brush voltage E to the brush member11. Further, by the application of the brush voltage E, the action of pressing the toner T against the photosensitive drum1and the action of injecting the electric charge into the toner T may be exerted simultaneously.
Third Embodiment A third embodiment of the present invention will be described. This embodiment is different from the second embodiment in that the brush member11is disposed so as to be inclined relative to the photosensitive drum1. In the following, elements to which reference numerals or symbols common to the first to third embodiments are assigned have substantially the same constitutions and functions as those described in the first or second embodiment, and a portion different from the first or second embodiment will be principally described. FIGS.16and17are schematic views showing this embodiment.FIG.16is the schematic view showing principal members of the image forming portion101.FIG.17is an enlarged view of a part ofFIG.16, in which similarly as inFIG.9, the electric charges of the toner T are indicated by being divided into the three kinds. In this embodiment, the brush member11is disposed in an inclined state relative to the photosensitive drum1so that with respect to the rotational direction R1of the photosensitive drum1, the contact pressure and the contact area ratio in a most upstream portion of the brush contact portion become the peak pressure and the maximum contact area ratio, respectively. A tangential line TL of the photosensitive drum1is a tangential line of the photosensitive drum1at a point1aof intersection between the surface of the photosensitive drum1and a rectilinear line perpendicularly drawn from a center position of the brush member11with respect to the short direction SD (a direction in which the base cloth11bextends as viewed in the longitudinal direction). In this embodiment, a direction in which the brush member11is inclined is an inclination direction in which the base cloth11bof the brush member11is spaced away from the tangential line TL toward a downstream of the rotational direction R1. An angle between the short direction SD of the brush member11and the tangential line TL is referred to as an inclination angle.
In this embodiment, it was suitable that the inclination angle is set at 12 degrees and the brush member11is disposed so that a penetration amount (maximum penetration amount) in the most upstream portion of the brush contact portion becomes 1.2 mm. Also, in this embodiment, in the most upstream position of the brush contact portion, the following relationships are satisfied. (contact pressure)≥0.7 gf/mm2 (contact area ratio)≥18% Incidentally, the inclination angle and the penetration amount can be appropriately changed depending on an outer diameter of the photosensitive drum1, a necessary peak pressure, and a necessary contact area ratio. Conditions of the peak pressure, the maximum contact area ratio, and the Clark-Evans index are similar to those in the first embodiment. In this embodiment, the toner T entering the brush contact portion is pressed against the photosensitive drum1side by the action of the brush voltage E similarly as in the second embodiment. By this, the toner T is sufficiently triboelectrically charged in the brush contact portion, so that the electric charge distribution of the residual toner can be stabilized in the normal polarity, and thus it is possible to suppress an occurrence of inconveniences such as the contamination of the charging roller2with the residual toner and the collection failure (improper collection) of the residual toner by the developing roller41. In addition, in this embodiment, as shown inFIG.17, a constitution in which the contact pressure and the contact area ratio become maximum in the most upstream portion of the brush contact portion is employed. 
For that reason, of the residual toner entering the brush contact portion, the polarity of the toner T charged to the positive polarity (non-normal polarity) can be quickly changed to the negative polarity by being triboelectrically charged by the bristle material of the brush member11, so that the electric charge distribution of the residual toner can be further stabilized in the normal polarity. In this embodiment, a constitution in which the contact pressure and the contact area ratio become maximum in the most upstream portion of the brush contact portion by disposing the brush member11in the inclined state was employed. The present invention is not limited thereto. For example, by employing a constitution such that the bristle height of the brush member11becomes shorter from one side (upstream side of the rotational direction R1) toward the other side (downstream side of the rotational direction R1) with respect to the short direction SD, the contact pressure and the contact area ratio may be made maximum in the most upstream portion of the brush contact portion. In the above-described embodiments, the constitution including the charging roller2which is the charging member of the contact charging type was described, but a charging member of a type (for example, corona discharging type) other than the contact charging type may be used. Even in such a case, by applying the constitutions described in the above-described embodiments, at least the improper collection of the residual toner by the developing roller41can be suppressed. Further, in the above-described embodiments, the constitution of the direct transfer type in which the toner image is directly transferred from the photosensitive drum1(image bearing member) onto the sheet (recording material) as the toner image receiving member was described, but the present invention may be applied to an image forming apparatus of an intermediary transfer type.
In the case of the intermediary transfer type, the transfer member refers to, for example, a transfer roller (primary transfer roller) for primary-transferring the toner image from the photosensitive drum1as the image bearing member onto the intermediary transfer member as the toner image receiving member. As the intermediary transfer member, an endless belt member stretched by a plurality of rollers can be used. The toner image primary-transferred on the intermediary transfer member is secondary-transferred from the intermediary transfer member onto the sheet (recording material) by a secondary transfer means such as a secondary transfer roller for forming a secondary transfer nip between itself and the intermediary transfer member. Even in a constitution of such an intermediary transfer type, effects similar to the effects of the above-described embodiments can be obtained by replacing the transfer roller in each of the above-described embodiments with the primary transfer roller. According to the present invention, the electric charge distribution of the residual toner can be stabilized in the normal polarity. Fourth Embodiment An outline of an image forming apparatus100according to a fourth embodiment will be described usingFIGS.19and20.FIG.19is a schematic view of the image forming apparatus100.FIG.20is an enlarged view of an image forming portion101provided in the image forming apparatus100. The image forming apparatus100is a monochromatic printer for forming a monochromatic image on a sheet S on the basis of image information received from an external device. As the sheet S which is a recording material, it is possible to use various sheets different in size and material, including paper such as plain paper or thick paper; a plastic film; a cloth; a sheet material such as coated paper subjected to surface treatment; a special-shaped sheet material such as an envelope or index paper; and the like. 
As shown inFIGS.19and20, the image forming apparatus100includes the image forming portion101of an electrophotographic type in which an image is formed on the sheet S, and a sheet feeding mechanism (6,8,12) for feeding and conveying the sheet S. The image forming portion101includes a photosensitive drum1as an image bearing member, a charging roller2as a charging means, an exposure device3as an exposure means, a developing device4as a developing means, a transfer roller5as a transfer means, a brush member11, and a fixing device9as a fixing means. Of the image forming portion101, a latent image unit1A including the photosensitive drum1, the charging roller2, and the brush member11, and the developing device4as a developing means are constituted as a cartridge C detachably mountable to an image forming apparatus main assembly100A. The photosensitive drum1is an electrophotographic photosensitive member molded in a drum shape. The photosensitive drum1has the drum shape (cylindrical shape) of, for example, 24 mm in diameter, and is rotationally driven at a peripheral speed (process speed) of 100 mm/sec during image formation. The charging roller2is a charging member of a contact charging type in which the charging roller2contacts the photosensitive drum1. A contact portion between the charging roller2and the photosensitive drum1is a charging portion P2(charging position) where charging of a surface of the photosensitive drum1is carried out. The developing device4includes a developing roller41, a supplying roller42, a stirring member43, a developing blade44, and a toner accommodating portion45. The developing roller41is a developing member or a developer carrying member for supplying toner T to a developing portion P4(developing position), where the developing roller41and the photosensitive drum1oppose each other, by being rotated while carrying the toner T.
In this embodiment, a so-called contact development type in which a toner layer carried on the developing roller41contacts the surface of the photosensitive drum1in the developing portion P4is used. The developing roller41is disposed at an opening of the toner accommodating portion45provided in a position opposing the photosensitive drum1. The supplying roller42supplies (applies) the toner T in a supplying chamber45aof the toner accommodating portion45to the developing roller41. The stirring member43is disposed in the toner accommodating portion45and stirs the toner T in the toner accommodating portion45by being rotated, and supplies the toner T into the supplying chamber45a. The toner accommodating portion45is a container for accommodating the toner T as a developer. The developing blade44is contacted from an inside of the toner accommodating portion45to a surface of the developing roller41, rotating toward the developing portion P4, with a predetermined pressing force (pressure). The developing blade44is formed of, as a main component, a material (for example, iron or copper) which is on a positive polarity side (non-normal polarity side) relative to a main component (binder resin) of the toner T in a charging series. By this, the toner T is triboelectrically charged to a negative polarity (normal (charge) polarity) by being rubbed with the developing blade44. The developing roller41is, for example, a roller which includes an electroconductive rubber layer (elastic layer) and which has a diameter of 12 mm. The supplying roller42is a roller including a sponge-like outer layer with a diameter of 10 mm, for example. The transfer roller5is disposed in contact with the surface of the photosensitive drum1. A nip where the transfer roller5and the photosensitive drum1oppose each other is a transfer portion P5where the toner image is transferred from the photosensitive drum1onto the sheet S.
The brush member11is disposed downstream of the transfer portion P5and upstream of the charging portion P2with respect to a rotational direction R1of the photosensitive drum1. The brush member11is disposed in contact with the surface of the photosensitive drum1in a predetermined contact condition. Details of the brush member11will be described later. The fixing device9includes a fixing roller9aor a flexible fixing film as a first rotatable member, a pressing roller9bas a second rotatable member contacting the first rotatable member with a predetermined pressing force, and a heating means for heating the image on the sheet S through the first rotatable member. As the heating means, a halogen lamp generating radiant heat or a heater substrate in which a pattern of a heat generating resistor is formed on a ceramic substrate can be used. The sheet feeding mechanism includes a cassette6, a feeding roller7, a feeding (conveying) roller pair8, and a discharging roller pair12. The cassette6is a stacking portion in which sheets S are stacked. The feeding roller7is a feeding member for feeding the sheets S one by one from the cassette6. The feeding roller pair8is a feeding member for feeding the sheet S, fed from the cassette6, to the transfer portion P5. The discharging roller pair12is a discharging member for discharging the sheet S on which the image is formed by the image forming portion101. In the following, an outline of an image forming operation by the image forming apparatus100will be described. When an execution instruction of the image forming operation is provided to the image forming apparatus100, the photosensitive drum1is rotationally driven in a predetermined rotational direction R1inFIG.20, and the surface of the photosensitive drum1is electrically charged uniformly by the charging roller2. The charging roller2is rotated in a rotational direction R2in which the charging roller2is rotated together with the photosensitive drum1in the charging portion P2.
The exposure device3exposes the surface of the photosensitive drum1to light by irradiating the surface of the photosensitive drum1with laser light L through a window portion3abetween the latent image unit1A and the developing device4on the basis of the image information received from the external device. By this, an electrostatic latent image is written (formed) on the surface of the photosensitive drum1. In this embodiment, the reverse development type is employed. For that reason, the charging roller2charges the surface of the photosensitive drum1to a dark portion potential Vd of a negative polarity by being supplied with a voltage (charging voltage) of the negative polarity which is the same as the normal polarity of the toner T. After the charging, a light portion potential Vl of a region (image region) in which the photosensitive drum surface is exposed to light by the exposure device3is lower in absolute value than the dark portion potential Vd. In a constitution example of this embodiment, for example, the photosensitive drum surface is charged to Vd=−500 (V) with use of a DC charging voltage of −1100 V. In the developing device4, the toner T accommodated in the toner accommodating portion45is uniformized by the stirring member43, and is supplied to the developing roller41by the supplying roller42. The toner T carried on the developing roller41is not only triboelectrically charged to the normal polarity by being rubbed with the developing blade44but also regulated in a predetermined layer thickness during passing through the developing blade44. By rotation of the developing roller41, the toner T charged to the normal polarity is supplied to the developing portion P4. Then, a voltage (developing voltage) of the normal polarity which is the same as the normal polarity of the toner T is applied to the developing roller41, so that the toner T is transferred onto the photosensitive drum1depending on a potential distribution on the surface of the photosensitive drum1.
By this, the electrostatic latent image on the surface of the photosensitive drum1is developed and visualized as a toner image. The developing voltage is −350 V, for example. Further, the developing roller41is rotated at a peripheral speed (for example, 140 mm/sec) faster than the peripheral speed of the photosensitive drum1in a rotational direction R4in which the developing roller41is rotated together with the photosensitive drum1in the developing portion P4. The toner image formed on the surface of the photosensitive drum1is fed to the transfer portion P5in a state in which the toner image is carried on the photosensitive drum1. In parallel to the above-described process, the sheets S are fed one by one from the cassette6by the feeding roller7, and then the sheet S is conveyed to the transfer portion P5by the feeding (conveying) roller pair8. A voltage (transfer voltage) of a positive polarity opposite to the normal polarity of the toner T is applied to the transfer roller5, so that the toner image is transferred from the photosensitive drum1onto the sheet S in the transfer portion P5. The transfer voltage is +1000 V, for example. The sheet S passed through the transfer portion P5is conveyed to the fixing device9. In the fixing device9, the image on the sheet S is heated and fixed by the first rotatable member heated by the heating means while nipping and conveying the sheet S in the nip between the first rotatable member and the second rotatable member. The sheet S passed through the fixing device9is discharged to a discharge tray13provided at an upper portion of the image forming apparatus100by the discharging roller pair12. (Cleaner-Less Brush Type) Next, a cleaner-less brush type using the brush member11will be described.
In this embodiment, a simultaneous development and cleaning type in which residual toner which was not transferred onto a toner image receiving member (sheet S) in the transfer portion P5is collected by the developing roller41when the residual toner reaches the developing portion P4next time is employed. In the simultaneous development and cleaning type, the residual toner collected by the developing roller41is stirred together with another toner in the toner accommodating portion45and then is used for the development again. In the simultaneous development and cleaning type, the residual toner which was not transferred onto the toner image receiving member in the transfer portion P5is collected by the developing roller41, and therefore, the brush member11basically permits passing of the residual toner therethrough. For that reason, the “brush member” in this embodiment is different from a brush member as a cleaning device (drum cleaner) for the purpose of removing the residual toner from the photosensitive drum1. Incidentally, in the simultaneous development and cleaning type, the cleaning device for collecting the residual toner is not disposed, and therefore, such a type is called a cleaner-less type in some cases. In the cleaner-less brush type of this embodiment, the brush member11for scattering the residual toner deposited on the surface of the photosensitive drum1passed through the transfer portion P5is disposed downstream of the transfer portion P5and upstream of the charging portion P2with respect to the rotational direction R1. By disposing the brush member11, a state in which the residual toner locally exists in a large amount on the photosensitive drum1can be alleviated.
When the residual toner locally exists in the large amount on the photosensitive drum1, there is a possibility that image defects are caused by improper charging due to contamination of the charging roller2with the residual toner and by improper collection of the residual toner in the developing portion P4. On the other hand, in the cleaner-less brush type, the residual toner is scattered by the brush member11, and behavior of the residual toner in the charging portion P2and the developing portion P4is uniformized, so that the above-described inconveniences can be suppressed. Further, a brush voltage power source E11(FIG.20) for applying a bias voltage (brush voltage) of the same polarity as the normal polarity of the toner T to the brush member11may preferably be disposed. By applying the brush voltage to the brush member11, the toner T charged to the non-normal polarity can be held on the brush member11while permitting passing of the toner T of the normal polarity. Further, when the toner T of the non-normal polarity held by the brush member11is changed in polarity to the normal polarity by friction with the bristle material of the brush member11, the toner T is released from the brush member11and is moved toward the charging portion P2while being carried on the surface of the photosensitive drum1. The brush voltage applied to the brush member11in this embodiment has a magnitude to a degree such that the electric discharge does not occur between the brush member11and the photosensitive drum1. However, by applying the brush voltage to the brush member11, the electric charge of the normal polarity is injected from the brush member11to the residual toner, so that the polarity of the residual toner may be changed to the normal polarity. Incidentally, a constitution in which the brush voltage is not applied to the brush member11may also be employed.
Even in that case, the toner T is triboelectrically charged in the brush contact portion, so that the polarity of the toner T can be changed to the normal polarity. Further, in the constitution in which the brush voltage is not applied to the brush member11, the brush member11can be a member electrically connected to a ground potential. (Operation Peculiar to Cleaner-Less Brush Type) In the cleaner-less brush type, the residual toner passed through the contact portion between the brush member11and the photosensitive drum1reaches the charging portion P2. To the charging roller2, the charging voltage of a same polarity as the normal polarity of the toner is applied, and therefore, of the residual toner particles, toner particles charged to the normal polarity pass through the charging portion P2while being pressed against the photosensitive drum1. On the other hand, of the residual toner particles, toner particles charged to the non-normal polarity or toner particles of which charge amount is close to zero are partially deposited on the charging roller2in the charging portion P2. When the residual toner is deposited and accumulated on the charging roller2, uniform charging of the photosensitive drum1is prevented, so that the image defect due to the improper charging becomes apparent. In this embodiment, in order to alleviate a degree of deposition of the residual toner on the charging roller2, a peripheral speed difference between the charging roller2and the photosensitive drum1is set. Specifically, a peripheral speed of the charging roller2is set at a value higher than a peripheral speed of the photosensitive drum1by 5% or more, preferably 10% or more. Further, in order to charge the residual toner to the normal polarity by friction between the charging roller2and the photosensitive drum1, materials of a surface layer of the charging roller2and a surface layer of the photosensitive drum1are selected.
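The peripheral-speed margin described above is simple percentage arithmetic; as an illustrative sketch (using the 100 mm/sec process speed quoted earlier as an example input), the lowest charging-roller peripheral speeds satisfying the 5% condition and the preferred 10% condition can be computed as:

```python
def min_charging_roller_speed_mm_s(drum_speed_mm_s: float, margin: float) -> float:
    """Lowest charging-roller peripheral speed giving the stated speed margin
    over the drum's peripheral speed."""
    return drum_speed_mm_s * (1.0 + margin)

drum_speed = 100.0  # mm/sec process speed from this embodiment
lower_limit = min_charging_roller_speed_mm_s(drum_speed, 0.05)      # "5% or more"
preferred_limit = min_charging_roller_speed_mm_s(drum_speed, 0.10)  # "preferably 10% or more"
```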
That is, in the charging series, the materials of the surface layer of the charging roller2and the surface layer of the photosensitive drum1are positioned in a higher rank (positive polarity side, non-normal polarity side) than a main component (binder resin) of the toner. By constituting the peripheral speed difference and the materials as described above, in the charging portion P2, the residual toner is charged to the normal polarity by friction with the charging roller2or the photosensitive drum1, so that deposition of the residual toner on the charging roller2can be suppressed. The residual toner passed through the charging portion P2reaches the developing portion P4with rotation of the photosensitive drum1. Of the residual toner particles carried on the photosensitive drum1in a non-image region (non-exposure region), the toner particles charged to the normal polarity are transferred onto the developing roller41and are collected in the toner accommodating portion45by a potential difference between the dark portion potential Vd and the developing voltage. On the other hand, of the residual toner particles carried on the photosensitive drum1in an image region (exposure region), the toner particles charged to the normal polarity are not transferred onto the developing roller41and remain on the photosensitive drum1by a potential difference between the light portion potential Vl and the developing voltage. In this case, the toner particles are sent as a part of the toner image, obtained by developing the electrostatic latent image, to the transfer portion P5. Incidentally, a voltage value of the developing voltage has the same polarity as the normal polarity of the toner, and in terms of absolute value is higher than the light portion potential Vl and is lower than the dark portion potential Vd.
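The collection/development selection just described follows from the sign of the potential difference seen by the negatively charged toner. A minimal Python sketch, using Vd=−500 V and the −350 V developing voltage from this embodiment and an illustrative light portion potential Vl=−100 V (the text does not state a value for Vl):

```python
V_DARK = -500.0   # dark portion potential Vd [V] (from this embodiment)
V_DEV = -350.0    # developing voltage [V] (from this embodiment)
V_LIGHT = -100.0  # light portion potential Vl [V] (illustrative assumption)

def destination_of_normal_toner(drum_surface_potential: float) -> str:
    """Negatively charged toner drifts toward the less negative potential,
    so it leaves the drum for the developing roller only where the drum
    surface is more negative than the developing voltage."""
    if drum_surface_potential < V_DEV:
        return "collected by developing roller"
    return "remains on photosensitive drum"

# Non-image (dark) region: toner is collected; image (light) region: it stays.
assert destination_of_normal_toner(V_DARK) == "collected by developing roller"
assert destination_of_normal_toner(V_LIGHT) == "remains on photosensitive drum"
```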
Ideally, of the residual toner particles, the toner particles charged to the non-normal polarity and the toner particles of which charge amount is close to zero are changed in polarity to the normal polarity, and thus are collected by the developing roller41in the developing portion P4without being deposited on the charging roller2. However, when the residual toner charged to the non-normal polarity in a large amount enters the charging portion P2, the residual toner which is not changed in polarity to the normal polarity in the charging portion P2is liable to be deposited on the charging roller2. Further, when the residual toner which is not changed in polarity to the normal polarity in the charging portion P2reaches the developing portion P4, the residual toner passes through the developing portion P4without being collected by the developing roller41. In this case, there is a possibility that contamination of the transfer roller5with the residual toner and an image defect (white background fog) such that a thin toner image is formed on a white background portion (non-image region) occur. In the following, constituent elements of the image forming apparatus100will be specifically described. (Brush Member) First, the brush member11in this embodiment will be described. As shown inFIGS.19and20, the brush member11contacts the surface of the photosensitive drum1in a position downstream of the transfer portion P5and upstream of the charging portion P2with respect to the rotational direction R1of the photosensitive drum1. That is, the image forming apparatus100includes the brush member11disposed downstream of the transfer member and upstream of the charging member with respect to the rotational direction of the image bearing member and contacting the surface of the image bearing member. In the following, a region where the brush member11contacts the photosensitive drum1is referred to as a “brush contact portion”.
Part (a) ofFIG.21is a perspective view of the latent image unit1A, part (b) ofFIG.21is a sectional view of the latent image unit1A in a plane perpendicular to a rotational axis of the photosensitive drum1. Part (c) ofFIG.21is a schematic view showing a state of the brush contact portion as viewed in an arrow4C direction of part (b) ofFIG.21from an upstream side of the rotational direction R1of the photosensitive drum1. The brush member11is fixed to a bearing surface14aprovided on a frame14of the latent image unit1A for rotatably supporting the photosensitive drum1and the charging roller2, and is supported by the frame14. Part (a) ofFIG.22is a sectional view of the brush member11in a single body state cut along a flat plane perpendicular to a longitudinal direction of the brush member11. The single body state is a state in which the brush member11is not mounted on the frame14of the latent image unit1A as a supporting member, i.e., a state in which an external force does not act on the brush member11. Part (b) ofFIG.22is a sectional view of the brush member11in a state in which the brush member11is contacted to the photosensitive drum1. As shown in parts (a) and (b) ofFIG.22, the brush member11includes a base cloth11bas a base portion and an electroconductive thread (yarn)11aas a bristle material (fiber) supported by the base cloth11b. The base cloth11bis formed of a synthetic resin fiber containing carbon black as an electroconductive agent. The electroconductive thread11ais formed of, for example, a nylon fiber (pile yarn) in which the electroconductive agent is added, and is textured and planted on the base cloth11b. The material of the electroconductive thread11ais not limited to nylon, but other synthetic resin fibers such as fibers of rayon, acrylic resin, or polyester resin may be used. The brush member11is a member extending thin and long in a predetermined direction. 
In the following, an extension direction is referred to as a longitudinal direction LD (see, also, part (a) ofFIG.21) of the brush member11, and a direction along the base cloth11band perpendicular to the longitudinal direction LD is referred to as a short direction SD of the brush member11. In the single body state in which the external force does not act on the brush member11(part (a) ofFIG.22), the electroconductive thread11aprojects in a direction (direction normal to the base cloth11b) substantially perpendicular to both the longitudinal direction LD and the short direction SD. As shown in part (a) ofFIG.22, a distance from the base cloth11bto a free end of the electroconductive thread11ain the brush member11in the single body state is bristle height L1. The bristle height L1of the brush member11in an example of this embodiment is 5.75 mm. The brush member11is fixed to the frame14(part (b) ofFIG.21) of the latent image unit1A by a fixing means such as a double-side tape. The bearing surface14aof the frame14for the brush member11is set so that free ends of the electroconductive threads11aenter the photosensitive drum1. For this reason, the brush member11is in a state in which the free ends of the electroconductive threads11aare pressed against the surface of the photosensitive drum1and are flexed (bent). As shown in part (c) ofFIG.21, the brush member11is disposed in an attitude such that the longitudinal direction LD thereof is substantially parallel to the rotational axis direction of the photosensitive drum1. On the other hand, in part (b) ofFIG.21, a schematic view of the brush member11disposed in an attitude such that the short direction SD thereof is substantially parallel to the surface of the photosensitive drum1is shown, but the angle at which the brush member11is disposed is not limited thereto. In this embodiment, the brush member11is disposed in an inclined state relative to the surface of the photosensitive drum1.
That is, the bearing surface14a(and the base cloth11bsupported by the bearing surface14a) of the brush member11is disposed in an inclined state relative to a tangential direction of the photosensitive drum1so as to be spaced from the surface of the photosensitive drum1toward the downstream side of the rotational direction R1of the photosensitive drum1. Definition of the inclination angle of the brush member11and a range of the inclination angle will be described later. A minimum distance from the base cloth11bof the brush member11, fixed to the bearing surface14a, to the photosensitive drum1is taken as L2. In this embodiment, a difference between L1and L2(L1−L2) is a maximum penetration amount of the brush member11into the photosensitive drum1(here, L2<L1holds). In the example of this embodiment, the maximum penetration amount of the brush member11into the photosensitive drum1is, for example, 1.58 mm. Further, in this embodiment, as shown in part (b) ofFIG.21and part (a) ofFIG.22, as regards the brush member11in the single body state, a length (length in a range in which the electroconductive threads11aare planted) of the brush member11with respect to the short direction SD, i.e., a short(-side) width L3, is, for example, 4 mm in the example of this embodiment. The short width L3may preferably be 3 mm or more for maintaining a performance of the brush member11for a long term. As shown in part (b) ofFIG.22, in the state in which the brush member11is pressed against the photosensitive drum1, an occupied width of the electroconductive threads11awith respect to the short direction SD is somewhat broadened. Further, in the example of this embodiment, a longitudinal width L4(part (c) ofFIG.21) which is a length of the brush member11with respect to the longitudinal direction LD is 216 mm.
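The penetration relation above (L1−L2, valid only while L2<L1) can be sketched as a small Python check; note that the stated 1.58 mm figure together with L1=5.75 mm implies L2≈4.17 mm, a value inferred here for illustration rather than stated in the text:

```python
def max_penetration_mm(bristle_height_l1_mm: float, base_to_drum_l2_mm: float) -> float:
    """Maximum penetration amount of the brush bristles into the drum surface."""
    if base_to_drum_l2_mm >= bristle_height_l1_mm:
        raise ValueError("L2 must be smaller than L1 for the bristle tips to reach the drum")
    return bristle_height_l1_mm - base_to_drum_l2_mm

# L1 = 5.75 mm is stated in this embodiment; L2 = 4.17 mm is the inferred value
# that reproduces the stated 1.58 mm maximum penetration amount.
assert abs(max_penetration_mm(5.75, 4.17) - 1.58) < 1e-6
```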
The longitudinal width L4is set so that with respect to the longitudinal direction LD, the brush member11is capable of contacting an entire area of an image forming region (toner image formable region, maximum region of the latent image formed by the exposure device3) on the photosensitive drum1. Further, in the example of this embodiment, a thickness of the electroconductive threads11ais, for example, 2 denier and a density of the electroconductive threads11ais 240 kF/inch2. Here, 1 kF/inch2is a planting density of 1000 fibers per square inch. The thickness and density of the electroconductive threads11aare capable of being appropriately changed as long as the electroconductive threads11asatisfy a function required for the brush member11. As an example, it is preferable that the thickness of the electroconductive threads11ais 1 denier or more and 6 denier or less and that the density of the electroconductive threads11ais 150 kF/inch2or more and 350 kF/inch2or less. Incidentally, in the case of nylon electroconductive threads11aused in this embodiment, when 1 to 6 denier, which is a unit of direct yarn count, is converted into a fiber diameter, 1 to 6 denier corresponds to about 10 μm to about 30 μm. For this reason, in the case where the bristle material other than nylon is used as the brush member, it is possible to use a bristle material with a thickness of 1 denier or more and 6 denier or less in terms of direct yarn count and of 10 μm or more and 30 μm or less in terms of the fiber diameter. “1 denier or more and 6 denier or less” can be said as “1.1 decitex or more and 6.7 decitex or less”. Further, 1 inch2is about 6.45 cm2, and therefore, “150 kF/inch2or more and 350 kF/inch2or less” can be said as “23 kF/cm2or more and 54 kF/cm2or less”.
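The unit conversions quoted above are easy to verify numerically; a minimal Python sketch (the 1.14 g/cm3 nylon fiber density used for the diameter estimate is a typical handbook value assumed here, not stated in the text):

```python
import math

CM2_PER_INCH2 = 6.4516            # exact: 1 inch = 2.54 cm
NYLON_DENSITY_G_CM3 = 1.14        # assumed typical nylon fiber density

def denier_to_dtex(denier: float) -> float:
    # 1 denier = 1 g / 9000 m of fiber; 1 dtex = 1 g / 10000 m
    return denier * 10.0 / 9.0

def kf_per_inch2_to_kf_per_cm2(kf: float) -> float:
    return kf / CM2_PER_INCH2

def denier_to_diameter_um(denier: float, rho: float = NYLON_DENSITY_G_CM3) -> float:
    """Diameter of a solid round fiber of the given linear density."""
    area_cm2 = (denier / 9.0e5) / rho            # (g/cm) / (g/cm^3) = cm^2
    return 2.0 * math.sqrt(area_cm2 / math.pi) * 1.0e4

# Reproducing the ranges stated in the text:
assert round(denier_to_dtex(1.0), 1) == 1.1 and round(denier_to_dtex(6.0), 1) == 6.7
assert round(kf_per_inch2_to_kf_per_cm2(150.0)) == 23
assert round(kf_per_inch2_to_kf_per_cm2(350.0)) == 54
assert 10.0 < denier_to_diameter_um(1.0) < 12.0   # "about 10 um"
assert 26.0 < denier_to_diameter_um(6.0) < 30.0   # "about 30 um"
```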
In this embodiment, the brush member11is constituted so as to permit the passing of the residual toner while scattering the residual toner deposited on the surface of the photosensitive drum1passed through the transfer portion P5. For that reason, in the case where the electroconductive threads11aare excessively thick, there is a possibility that the residual toner cannot be uniformly scattered and passes through the brush contact portion in a stripe shape and thus leads to stripe-shaped contamination of the charging roller2with the residual toner. Further, in the case where the electroconductive threads11aare excessively high in density, the residual toner is blocked by the brush contact portion, and thus not only constitutes an obstacle to collection of the residual toner by the developing roller41but also contaminates an inside of the image forming apparatus by being dropped or scattered from the photosensitive drum1. Further, the brush member11has a function of triboelectrically charging the residual toner in the brush contact portion. For that reason, in the case where the electroconductive threads11aare excessively thin, there is a possibility that the electroconductive threads11aare readily flexed when the residual toner contacts the electroconductive threads11aand escape from the residual toner, so that the toner particles are not rolled and the residual toner is not sufficiently triboelectrically charged. Further, in the case where the electroconductive threads11aare excessively low in density, a frequency of collision with the electroconductive threads11abecomes low, so that there is a possibility that the residual toner cannot be sufficiently triboelectrically charged.
In the above description, from the viewpoints of a function of scattering the residual toner and a function of triboelectrically charging the residual toner, preferred ranges of the thickness and the density of the electroconductive threads11awere described, but depending on functions required for the brush member11, details of the thickness, the density, the material, the bristle height, and the like can be appropriately changed. Incidentally, the brush member11in this embodiment may have a function of blocking a foreign matter (for example, paper powder) other than the residual toner in the brush contact portion. (Developer) In this embodiment, as the developer, the toner T which is a one-component developer of which normal polarity (normal charge polarity) is the negative polarity is used. For that reason, in the following description of this embodiment, the “negative polarity” is synonymous with the normal polarity of the toner T and the “positive polarity” is synonymous with the non-normal polarity of the toner T unless otherwise specified. The toner T contains a binder resin and a colorant and may further contain a parting agent, a charge control agent, and an external additive as desired. As the binder resin, styrene-acrylic resin and polyester resin, which are lower in rank of the charging series (negative polarity side) than nylon and rayon constituting the electroconductive threads11aof the brush member11, can be preferably used. That is, a main component (binder resin) of the toner T may desirably be positioned on the negative polarity side (lower rank) relative to the material of the bristle materials of the brush member11in the charging series. In this embodiment, as the binder resin of the toner, the styrene-acrylic resin is employed. As the colorant, a known colorant can be used. For example, a dye and a pigment are cited. As the parting agent, a known parting agent can be used. As the charge control agent, a known charge control agent can be used.
The charge control agent has an acid value or a hydroxy value and may preferably have negative chargeability equivalent to or stronger than that of the binder resin. As the external additive, a known external additive can be used. For example, silica, alumina, titania, titanium composite oxide, and the like are cited. The colorant and the parting agent may preferably be included in the binder resin so as not to influence the charge polarity of the toner particle surface. Further, as the toner, a polymerization toner formed by a polymerization method can be employed. The toner T with a particle size (volume-average particle size) of 4-10 μm, preferably 6-8 μm, may preferably be used. In this embodiment, spherical toner prepared by the polymerization method and with a particle size of 7 μm is used. Further, the toner T in this embodiment is so-called non-magnetic one-component developer which does not contain a magnetic component and which is carried on the developing roller41principally by an intermolecular force or an electrostatic force (mirror force). However, as the developer, a one-component developer consisting of toner containing the magnetic component may be used. Further, as the developer, a two-component developer constituted by non-magnetic toner and a magnetic carrier may be used. In the case where the magnetic developer is used, as a developer carrying member, for example, a cylindrical developing sleeve in which a magnet is provided is used. Further, to the developer, in addition to the toner and the carrier, an additive (for example, a wax or silica fine particles) for adjusting flowability, charging performance, and the like of the toner may be added. (Photosensitive Drum) The photosensitive drum1is prepared by successively laminating an undercoat layer, a charge generation layer, and a charge transport layer on a cylindrical electroconductive supporting member (core metal) as a lowermost layer.
The charge transport layer is formed by coating and drying a paint prepared by mixing principally a charge transporting material and a binder resin in a solvent. As the principally used charge transporting material, a known charge transporting material can be used. For example, various triarylamine compounds and hydrazone compounds are cited. Further, as the binder resin, for example, a polycarbonate resin, a polyarylate resin, and the like are cited. A portion predominantly contributing to the triboelectric charging with the toner is the charge transport layer as a surface layer (outermost layer), particularly the binder resin occupying most thereof. Here, the cited polycarbonate resin and polyarylate resin are positioned on the non-normal polarity side (upper rank) relative to the styrene-acrylic resin which is the binder resin of the toner T in the charging series. That is, the outermost layer of the image bearing member may preferably be formed of a material capable of triboelectrically charging the toner T to the normal polarity in the case where the outermost layer is rubbed with the resin which is the main component of the toner T. In this embodiment, the polycarbonate resin was selected as the binder resin of the outermost layer. Further, in this embodiment, as the photosensitive drum1, a cylindrical photosensitive drum with an outer diameter of 24 mm is used. Depending on the outer diameter of the photosensitive drum1, a manner of contact (pressing) of the brush member11(for example, the above-described penetration amount or an angle described later in a sixth embodiment) is appropriately changed. (Charging Roller) The charging roller2in this embodiment will be described. The charging roller2includes a core metal as an electroconductive supporting member, a 2 mm-thick elastic layer provided on an outer periphery thereof, and a 25 μm-thick resin layer as a surface layer provided on an outer periphery of the elastic layer.
A surface of the surface layer is a surface contacting the photosensitive drum1and causing electric discharge on the photosensitive drum1. The elastic layer is formed of an electron-conductive rubber material. The electron-conductive rubber material is, for example, a material in which carbon black is dispersed as electroconductive particles (electron-conductive agent) in a binder polymer which does not itself exhibit electroconductivity and in which an electric resistance is adjusted. As the binder polymer, a known binder polymer used in the electroconductive elastic layer of the charging roller for the electrophotographic apparatus can be used. For example, a hydrin rubber, a butadiene rubber, and the like are cited. In this embodiment, the hydrin rubber was selected. A kind of the carbon black mixed in the elastic layer is not particularly limited so long as the carbon black is electroconductive carbon black capable of imparting electroconductivity to the elastic layer. Further, to the elastic layer, as a compounding agent, general-purpose agents such as a filler, a processing aid, a cross-linking aid, a cross-linking retardant, a softening agent, a dispersing agent, a colorant, and the like may be added as desired. As a resin of the surface layer of the charging roller2, a resin material positioned on the non-normal polarity side (upper rank) relative to the main component (binder resin) of the toner T in the charging series is used. For example, the surface layer is formed by coating the outer periphery of the elastic layer with a resin material, for example, polycarbonate urethane, possessing electroconductivity. The residual toner can be triboelectrically charged to the normal polarity in the charging portion P2by forming the surface layers of the charging roller2and the photosensitive drum1with the above-described materials and by setting the peripheral speed difference between the charging roller2and the photosensitive drum1as described above.
Further, to the surface layer of the charging roller2, roughening particles with a polarity such that triboelectric chargeability is not impaired can be added. For example, there is also a method in which a polycarbonate urethane resin similar to the polycarbonate urethane resin of the surface layer is formed into particles and in which the particles are dispersed. That is, the charging roller2is not required to closely contact the surface of the photosensitive drum1in the charging portion P2, so that a constitution in which the charging roller2contacts the surface of the photosensitive drum1in a mountain portion of unevenness formed by the roughening particles can be employed. (Transfer Roller) The transfer roller5is a roller-type transfer member disposed opposed to the photosensitive drum1. The transfer roller5is pressed against the photosensitive drum1at a predetermined pressure. The transfer roller5of this embodiment is an elastic roller of 12 mm in outer diameter, in which a sponge rubber of an electroconductive nitrile-butadiene hydrin rubber type is formed around a core metal. (Contact Condition of Brush Member) In this embodiment, the brush member11imparts the electric charge to the residual toner through the triboelectric charge while scattering the residual toner on the photosensitive drum1. At this time, in order to impart the negative electric charge to the residual toner through the triboelectric charge, as the material of the bristle material (electroconductive threads11a) of the brush member11, a material positioned on the positive polarity side (upper rank) relative to the main component of the toner T in the charging series is used. Further, a contact pressure between the brush member11and the photosensitive drum1in the brush contact portion is ensured so that the electroconductive threads11arub the residual toner with a sufficient force. As regards the charging series, the main component of the toner T in this embodiment is the styrene-acrylic resin.
The bristle material of the brush member11may desirably be a material, such as nylon or rayon, relative to which the styrene-acrylic resin is positioned on the negative polarity side (lower rank) and a difference in charging series therebetween is large. In this embodiment, the nylon resin was selected as the main component of the electroconductive threads11aas described above. Polyester fibers and acrylic fibers are not desired as the material of the electroconductive threads11asince the styrene-acrylic resin is positioned on the positive polarity side relative thereto in the charging series and the difference in charging series is also small. However, in the case where the main component of the toner T is different, the polyester fibers or the acrylic fibers can be used as the material of the electroconductive threads11ain some instances. Incidentally, the surface layer of the photosensitive drum1can have an influence on the triboelectric charge of the toner T in the brush contact portion. For that reason, the main component of the surface layer of the photosensitive drum1may preferably be a material which is positioned on the positive polarity side relative to the main component of the toner T in the charging series. In this embodiment, as described above, the main component of the surface layer of the photosensitive drum1is the polycarbonate resin. A contact condition of the brush member11in the brush contact portion will be further described. In order to study physical properties (parameters) showing the contact condition of the brush member11, samples 1 to 4 of 4 levels different in thickness and density of the bristle material of the brush member11were prepared. The sample 1 is a brush member11of which bristle material is thick and is low in density. The sample 2 is a brush member11of which bristle material is thin and at a medium level in density. The sample 3 is a brush member11of which bristle material is thin and is high in density.
The sample 4 is a brush member11of which bristle material is at a medium level in thickness and is low in density. Then, the brush member11of each of the samples was contacted to the photosensitive drum1, and then a peak pressure and a maximum contact area ratio in the brush contact portion were calculated by the following methods. Incidentally, the peak pressure is a maximum value of an average contact pressure in a region of a width of 1 mm of the brush contact portion with respect to the short direction, and the maximum contact area ratio is a contact area ratio between the brush member11and the photosensitive drum1in the region of the width of 1 mm in which the peak pressure is obtained. A calculating method of the peak pressure is as follows. As shown in part (a) ofFIG.24, with use of a compression test jig for a compact table-top tester (“EZTest” manufactured by Shimadzu Corporation), normal reaction when a pressing plate71was pressed into the brush member11while adjusting a flow of bristles (fibers) of the brush member11placed horizontally was measured, and then a relationship between the penetration amount and the normal reaction was obtained. On the other hand, as shown in part (b) ofFIG.24, a glass plate72was press-contacted to the brush member11so as to uniformize the flow of the bristles of the brush member11while moving the glass plate in a horizontal direction D, and a contact width73of the brush member11with respect to the short direction SD was measured by observation of the brush contact portion through a microscope from a side opposite from the glass plate72. The horizontal direction D is one side of the short direction SD corresponding to the rotational direction R1of the photosensitive drum1. In the case where the brush member11is prepared so that the density and the thickness of the bristle material thereof are uniform, the peak pressure can be calculated by the above-described (formula 1) to (formula 3).
First, in a state in which an object is press-contacted to the brush member11with a predetermined penetration amount, an average (average pressure) of the contact pressure in the brush contact portion can be represented by the (formula 1). In the (formula 1), the normal reaction and the contact width are values measured in a state in which the pressing plate71or the glass plate72is press-contacted to the brush member11with the predetermined penetration amount. In an actual brush contact portion between the brush member11and the photosensitive drum1, the contact pressure becomes maximum in a portion where the penetration amount of the brush member11into the photosensitive drum1is largest. A maximum value of this contact pressure is referred to as the peak pressure. The peak pressure is calculated by the (formula 3) with use of the average penetration amount (formula 2) obtained from a maximum penetration amount and a minimum penetration amount of the brush member11. The above-described calculating method means in actuality that the contact pressure applied to the surface of the photosensitive drum1drawing an arc as shown inFIG.6is linearly approximated. Specifically, it is assumed that the brush member11of 4 mm in short(-side) width L3is contacted to the photosensitive drum1of 24 mm in diameter from immediately above the photosensitive drum1so that the penetration amount becomes 1.2 mm in a center portion with respect to the short direction SD. In this case, the maximum penetration amount is 1.2 mm, the minimum penetration amount is 1.03 mm, and the average penetration amount is 1.115 mm, so that the peak pressure can be calculated using the (formula 3).
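As a numerical check of the geometry described above, the following Python sketch reproduces the stated example (24 mm drum diameter, 4 mm brush short width, 1.2 mm maximum penetration at the center). It assumes, consistently with the stated 1.115 mm value, that (formula 2) is the arithmetic mean of the maximum and minimum penetration amounts; the function name and structure are illustrative only and are not part of the embodiment.

```python
import math

def penetration_amounts(drum_diameter_mm, brush_short_width_mm, max_penetration_mm):
    """Minimum and average penetration of a flat brush tip into the drum.

    The brush is assumed to contact the drum from immediately above, so the
    penetration is largest at the center of the short width and smallest at
    the brush edges, where the drum surface curves away by the sagitta of
    the arc (the geometrical relationship referred to with FIG. 6).
    """
    r = drum_diameter_mm / 2.0
    half_width = brush_short_width_mm / 2.0
    # Drop of the drum surface at the brush edge relative to the drum top.
    sagitta = r - math.sqrt(r**2 - half_width**2)
    min_penetration = max_penetration_mm - sagitta
    # (formula 2) is taken here as the arithmetic mean of max and min,
    # which reproduces the 1.115 mm average stated in the text.
    avg_penetration = (max_penetration_mm + min_penetration) / 2.0
    return min_penetration, avg_penetration

min_p, avg_p = penetration_amounts(24.0, 4.0, 1.2)
print(round(min_p, 2), round(avg_p, 3))  # 1.03 1.116 (text rounds to 1.115)
```

Running the sketch confirms the 1.03 mm minimum penetration quoted in the text for this geometry.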
Incidentally, in the case where the density, the thickness, and the like of the brush member11are not set uniformly with respect to the short direction SD, the brush member11is cut for each unit length (for example, 1 mm) thereof with respect to the short direction SD, and then the normal reaction is measured, so that a contact pressure in each unit length region (range) is obtained. Then, an average of acquired values of the contact pressure is taken as an average pressure, and a maximum value is taken as a peak pressure. In order to calculate the contact area ratio, discrimination was made by a color tint between a portion (contact portion) where the bristle material of the brush member11contacts a glass plate and a portion (non-contact portion) where the bristle material of the brush member11does not contact the glass plate when the brush member11is contacted to the glass plate as shown inFIG.25. Part (a) ofFIG.7is an actual photograph observed through a microscope, and part (b) ofFIG.7is an image obtained by subjecting the photograph of part (a) ofFIG.7to binarization so that the contact portion becomes white and the non-contact portion becomes black. The contact area ratio is a ratio of an area of the contact portion to an area of an observation object. In general, with respect to the short direction SD, in a position where the peak pressure is obtained, the contact area ratio becomes maximum (maximum contact area ratio).
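The binarization step above can be sketched as follows. This is a minimal illustration, not the embodiment's actual image processing: it assumes the microscope photograph is available as a 2-D grayscale array, and the threshold value of 128 is a hypothetical choice.

```python
import numpy as np

def contact_area_ratio(gray_image, threshold=128):
    """Binarize a grayscale image of the brush contact portion so that the
    contact portion becomes white (True) and the non-contact portion black
    (False), then return the ratio of the contact-portion area to the area
    of the observation object."""
    binary = np.asarray(gray_image) >= threshold  # True = contact portion
    return binary.sum() / binary.size

# Synthetic example: a 10x10 field in which 30 pixels are contact portions.
img = np.zeros((10, 10), dtype=np.uint8)
img.flat[:30] = 255
print(contact_area_ratio(img))  # 0.3
```

In practice the maximum contact area ratio would be obtained by evaluating this ratio within the 1 mm-wide region where the peak pressure occurs.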
In summary, as regards the image forming apparatus including the brush member11as in this embodiment, the peak pressure and the maximum contact area ratio can be checked in the following procedure.
1) The outer diameter of the photosensitive drum1, the bristle height L1, the short width L3, and the longitudinal width L4of the brush member11, and the shortest distance L2from the base cloth11bof the brush member11to the surface of the photosensitive drum1are measured, and the maximum penetration amount (L1-L2) is calculated (see parts (b) and (c) ofFIG.20).
2) From the outer diameter of the photosensitive drum1, the short width L3and the maximum penetration amount of the brush member11which are measured in the above-described 1), the minimum penetration amount is calculated on the basis of a geometrical relationship ofFIG.6.
3) From the maximum penetration amount and the minimum penetration amount which are acquired by the above-described 1) and 2), the average penetration amount is calculated on the basis of the (formula 2).
4) A compression test of the brush member11is conducted by the method of parts (a) and (b) ofFIG.24with use of the average penetration amount calculated in the above-described 3), so that the average pressure is acquired on the basis of the (formula 1).
5) By using the maximum penetration amount, the average penetration amount, and the average pressure which are acquired by the above-described 1), 3), and 4), the peak pressure is calculated on the basis of the (formula 3).
6) With the maximum penetration amount acquired in the above-described 1), the brush member11is press-contacted to the glass plate by the method ofFIG.25, and then the contact surface is observed, so that the maximum contact area ratio is calculated.
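Step 5) can be sketched as below. Note that (formula 3) itself is defined elsewhere in the description; the linear scaling used here is only one plausible reading of the earlier statement that the contact pressure along the arc-shaped drum surface is linearly approximated, and the 1.5 gf/mm2 average pressure in the usage example is a hypothetical measured value.

```python
def peak_pressure(avg_pressure_gf_mm2, max_penetration_mm, avg_penetration_mm):
    """Hypothetical reading of (formula 3): if the contact pressure is
    linearly approximated in the penetration amount, the peak pressure
    scales the measured average pressure by the ratio of the maximum to
    the average penetration amount."""
    return avg_pressure_gf_mm2 * max_penetration_mm / avg_penetration_mm

# With an assumed average pressure of 1.5 gf/mm2 measured at the 1.115 mm
# average penetration of the earlier example, and a 1.2 mm maximum penetration:
print(round(peak_pressure(1.5, 1.2, 1.115), 2))  # 1.61
```

The result would then be compared against the 0.7 to 3.5 gf/mm2 range given below.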
Further, in order that the peak pressure and the maximum contact area ratio of the brush member11are made desired values, parameters such as the outer diameter of the photosensitive drum1, the bristle height L1and the short width L3of the brush member11, and the above-described shortest distance L2are appropriately changed, and the resultant peak pressure and maximum contact area ratio may only be required to be checked in the above-described procedure. FIG.1is a graph in which, by the above-described method, the peak pressure and the maximum contact area ratio are calculated for each of the samples under a plurality of different conditions, and in which values of the peak pressure are plotted on the ordinate and values of the maximum contact area ratio are plotted on the abscissa. InFIG.1, points indicated by black marks represent that the image defect did not occur, and points indicated by white marks represent that the image defect occurred. As shown inFIG.1, in the case where the peak pressure is less than 0.7 gf/mm2and the maximum contact area ratio is less than 18%, the image defect occurred. This would be considered because in the case where the peak pressure is excessively low, only a part of the bristle material of the brush member11contacts the photosensitive drum1, and therefore, the action for triboelectrically charging the residual toner to the normal polarity becomes insufficient. Further, it would be considered because also in the case where the maximum contact area ratio is excessively low, as a result that a frequency of contact of the toner particles with the bristle material of the brush member becomes low, the action for triboelectrically charging the residual toner to the normal polarity becomes insufficient.
In either case, when the residual toner reaches the charging portion P2without being sufficiently supplied with the electric charges of the normal polarity in the brush contact portion, the residual toner charged to the non-normal polarity or the residual toner of which charge amount is close to zero is deposited on the charging roller2, so that contamination of the charging roller2progresses. Further, in the case where the peak pressure is higher than 3.5 gf/mm2and in the case where the maximum contact area ratio is higher than 74%, the image defect occurred. This would be considered because in either case of the excessively high peak pressure and the excessively high maximum contact area ratio, a part of the brush contact portion is in a state in which the residual toner cannot pass through that part of the brush contact portion, and thus the residual toner concentratedly passes through a portion (portion where the contact pressure or the density of the bristle material is relatively low) through which the residual toner is capable of passing. In this case, the surface of the photosensitive drum1passed through the brush contact portion is in a state in which the residual toner is deposited in a stripe shape (linear shape extending in the rotational direction), so that the charging roller2is contaminated with the residual toner in the stripe shape. Incidentally, in a region where the peak pressure or the maximum contact area ratio is particularly high, the residual toner is blocked by the brush member11, so that there is a possibility that not only collection of the residual toner by the developing roller41is obstructed but also the blocked toner is scattered and an inside of the image forming apparatus is contaminated with the scattered toner.
Accordingly, it is desired that the brush member11is constituted so that the peak pressure and the maximum contact area ratio in the brush contact portion fall within the following region enclosed by a broken line ofFIG.1.
(peak pressure): 0.7 gf/mm2or more and 3.5 gf/mm2or less
(maximum contact area ratio): 18% or more and 74% or less
By this, in the cleaner-less brush type in which the residual toner deposited on the surface of the photosensitive drum1passed through the transfer portion P5is scattered by the brush member11, an electric charge distribution of the residual toner can be stabilized at the normal polarity. In other words, the electric charge distribution of the residual toner after passing through the brush contact portion can be made a distribution which has a peak on the normal polarity side (negative polarity side) of the toner T and which is sharp compared with the electric charge distribution of the residual toner before entering the brush contact portion. Incidentally, in the region enclosed by the broken line, the above-described image defect occurs less readily in a central portion than in a peripheral portion. For that reason, it is preferable that the brush member11is constituted so that the peak pressure and/or the maximum contact area ratio further falls within the following ranges.
(peak pressure): 1.4 gf/mm2or more and 2.8 gf/mm2or less
(maximum contact area ratio): 32% or more and 60% or less
Here, 1 gf nearly equals 9.8 mN (milli-newton), so that “0.7 gf/mm2or more and 3.5 gf/mm2or less” can be said as “6.9 mN/mm2or more and 34 mN/mm2or less”. Similarly, “1.4 gf/mm2or more and 2.8 gf/mm2or less” can be said as “14 mN/mm2or more and 28 mN/mm2or less”. Further, in this embodiment, as an index indicating whether or not the brush member11uniformly contacts the photosensitive drum1, Clark-Evans index is introduced.
The Clark-Evans index represents a tendency as to whether, in the case where a plurality of points are distributed in a certain flat surface region, the points are distributed locally and concentratedly or are distributed at a distance from each other. A calculating method of the Clark-Evans index will be described. First, when a distance from a point i to a nearest adjacent point is di and the number of the points is n, an average value (average nearest adjacent distance) W of distances from each of the points to an associated nearest adjacent point is represented by the above-described formula (numerical formula 1). Here, as an evaluation criterion, the case where the points are randomly distributed on a flat surface with an area S (in accordance with uniform Poisson distribution) will be considered. In this case, an expected value E (W) of the average nearest adjacent distance W is represented by the above-described formula (numerical formula 2). In order to compare the cases where the numbers of the points and densities are different from each other, a value w obtained by standardizing the average nearest adjacent distance W with the expected value E (W) as represented by the above-described formula (numerical formula 3) is referred to as the Clark-Evans index. In order to acquire the Clark-Evans index of the brush member11, as shown in part (a) ofFIG.23, the brush member11is pressed against the glass plate surface, and the brush contact portion is observed through the glass plate surface from a side opposite from the side where the brush member11is pressed against the glass plate surface. In the brush contact portion, when the free ends of the bristle material in a certain area (100 mm2) are represented by points, a distribution of the points as shown in part (b) ofFIG.23is obtained. From this distribution of the points, the Clark-Evans index is calculated using the above-described formulas (numerical formulas 1 to 3).
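The calculation described above can be sketched in Python as follows. This is a plain implementation of the standard Clark-Evans statistic (observed mean nearest-neighbor distance divided by its Poisson expectation), ignoring edge-effect corrections; the regular-grid example in the usage demonstrates the w>1 behavior discussed below.

```python
import math

def clark_evans_index(points, area):
    """Clark-Evans index w = W / E(W), where W is the observed average
    nearest-adjacent distance (numerical formula 1) and
    E(W) = 1 / (2 * sqrt(n / S)) is its expected value for n points randomly
    (Poisson) distributed over an area S (numerical formula 2)."""
    n = len(points)
    nearest = []
    for i, (xi, yi) in enumerate(points):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i)
        nearest.append(d)
    W = sum(nearest) / n                          # (numerical formula 1)
    expected = 1.0 / (2.0 * math.sqrt(n / area))  # (numerical formula 2)
    return W / expected                           # (numerical formula 3)

# Regular (lattice-shaped) distribution: w > 1.  A 10x10 grid with 1 mm
# spacing in a 100 mm^2 field gives W = 1 and E(W) = 0.5, hence w = 2.
grid = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]
print(clark_evans_index(grid, 100.0))  # 2.0
```

A concentrated (bundled) distribution of the same number of points yields w well below 1, matching the interpretation given for the twisted-bundle sample.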
Incidentally, a part of the bristle material contacts the glass (plate) surface even at a portion closer to a base than the free ends are. As regards the action by which the toner T is triboelectrically charged by such a bristle material, it would be considered that the contribution of a portion (most pressing point) where the bristle material most strongly contacts the glass surface is large. However, the distribution of the most pressing point of the bristle material is mostly common to a distribution of the free ends of the bristle material, and property as the distribution is substantially unchanged. For that reason, in this embodiment, the Clark-Evans index is calculated from the free end distribution of each bristle material contacting the glass surface. As regards the Clark-Evans index, w=1 holds in the case of a random distribution, w<1 holds in the case of concentrated distribution, and w>1 holds in the case of regular distribution. An extreme example of the regular distribution is a lattice-shaped distribution over an entire area of an observation object. An extreme example of the concentrated distribution is a distribution such that the points are concentrated at a single portion or several portions in the area of the observation object. A result of acquisition of the Clark-Evans index for actual samples of the brush member11is as follows.
Sample 1: w=1.01
Sample 2: w=1.13
Sample 3: w=1.15
Sample 4: w=1.07
Sample obtained by intentionally twisting sample 2 in a bundle: w=0.7
From a property of the Clark-Evans index, when w>1 holds, it can be said that the free ends of the bristle materials of the brush member11are loosened without making a bundle. On the other hand, when w<1 holds, it is suggested that the bristle materials of the brush member11make a bundle (aggregation, lump) due to some cause.
In order to cause the brush member to normally function, it is preferable that the bristle materials of the brush member11contact the surface of the photosensitive drum1in a loosened state without making a bundle. In this embodiment, the brush member11is constituted so that the Clark-Evans index is 1 or more (w≥1). This condition can be said as a condition for ensuring that the bristle materials do not make a bundle due to some cause. Incidentally, depending on the constitution of the brush member11, it would be also considered that even in the brush contact portion, a value of the Clark-Evans index differs from place to place. In that case, the Clark-Evans index at a portion (place where the contact pressure is the peak pressure) where the penetration amount of the brush member11is largest is made 1 or more. This is because at the portion where the penetration amount is largest, the bundle of the bristle materials is liable to be formed by a force received by the bristle materials. (Arrangement of Brush Member) Next, arrangement of the brush member11relative to the photosensitive drum1will be described. In this embodiment, the brush member11is disposed in the inclined state so that a contact pressure of the brush member11to the photosensitive drum1is higher on a brush leading end (brush front end) side than on a brush trailing end (brush rear end) side. Here, the “brush leading end” refers to an end portion of the brush member11on an upstream side with respect to the rotational direction R1of the photosensitive drum1, and the “brush trailing end” refers to an end portion of the brush member11on a downstream side with respect to the rotational direction R1of the photosensitive drum1.
In the following, the arrangement of the brush member11will be specifically described usingFIG.18.FIG.18is a schematic view for illustrating the arrangement of the brush member11relative to the photosensitive drum1.FIG.18shows a state in which the brush member11and the photosensitive drum1are viewed in the rotational axis direction (the longitudinal direction LD of the brush member11) of the photosensitive drum1. Incidentally, the bristle materials (electroconductive threads11a) of the brush member11are shown virtually in a state (single body state) in which the bristle materials extend without interfering with the photosensitive drum1. InFIG.18, the rotational axis of the photosensitive drum1is represented by “O”. A rectilinear line (first rectilinear line) extending from the rotational axis O through a center position of the base cloth11bof the brush member11with respect to the short direction SD is represented by “m”. A rectilinear line perpendicular to the rectilinear line m is represented by “Lt”. The rectilinear line Lt is a rectilinear line parallel to a tangential line t of the surface of the photosensitive drum1in an intersection point p between the rectilinear line m and the surface of the photosensitive drum1. Further, a rectilinear line (third rectilinear line extending in the short direction SD) drawn along the base cloth11bis represented by “n”. At this time, in this embodiment, the brush member11is disposed so that the rectilinear line n drawn along the base cloth11bis inclined relative to the rectilinear line Lt, extending in a tangential direction of the surface of the photosensitive drum1, with respect to a direction in which the base cloth11bapproaches the surface of the photosensitive drum1toward the upstream side of the rotational direction R1of the photosensitive drum1. An angle β (°) between the rectilinear line n and the rectilinear line Lt represents an inclination angle of the brush member11. 
The inclination angle β may suitably be set in a range of, for example, 8° or more and 16° or less. When β is excessively large, a difference in penetration amount between the brush leading end and the brush trailing end becomes large, and therefore, there is a possibility that the penetration amount becomes excessive in the brush leading end and thus the toner is blocked and that the brush penetration amount becomes negative (non-contact). When β is excessively small, as described in the following, it becomes difficult to provide a proper difference in contact pressure and contact area ratio between the brush leading end and the brush trailing end. In the example of this embodiment, β=12° was set. In this case, the penetration amount of the brush member11into the photosensitive drum1becomes maximum in the brush leading end, and a value thereof (maximum penetration amount) is 1.58 mm. As described above, the brush member11is disposed in the inclined state, whereby the contact pressure and the contact area ratio in the brush leading end are higher than the contact pressure and the contact area ratio in the brush trailing end, respectively. That is, of the brush contact portion (contact portion between the brush member and the image bearing member), in a first position, the contact pressure is higher than the contact pressure in a second position downstream of the first position with respect to the rotational direction R1of the image bearing member. Further, the contact area ratio between the brush member and the image bearing member in the first position is higher than the contact area ratio between the brush member and the image bearing member in the second position. In the example of this embodiment, the contact pressure in the brush leading end is 2 gf/mm2, and the contact pressure in the brush trailing end is 1 gf/mm2. Further, the contact area ratio is 50% in the brush leading end and is 20% in the brush trailing end.
In this embodiment, a relationship between a short direction position and the contact pressure of the brush member11is shown in part (a) ofFIG.26, a relationship between the short direction position and the contact area ratio of the brush member11is shown in part (b) ofFIG.26, and a relationship between the contact pressure and the contact area ratio is shown in part (c) ofFIG.26. Here, the short direction position of the brush member11is a position with respect to the short direction SD in which the brush leading end is taken as a basis (0). The example shown in parts (a) and (b) ofFIG.26is an example of a constitution in which the contact pressure and the contact area ratio in the first position on the brush leading end side are higher than the contact pressure and the contact area ratio in the second position closer to the brush trailing end side than the first position is; the curves drawn to represent the contact pressure and the contact area ratio may be different from those shown. As in this example, it is preferable that the contact pressure and the contact area ratio become maximum in the brush leading end. Further, it is preferable that the contact pressure and the contact area ratio monotonously decrease from the brush leading end toward the brush trailing end, but the contact pressure and the contact area ratio at least in the brush leading end may only be required to be higher than the contact pressure and the contact area ratio in the brush trailing end. It is preferable that a difference between the contact pressure (maximum value) in the brush leading end and the contact pressure (maximum value) in the brush trailing end is 0.6 gf/mm2or more and 1.5 gf/mm2or less. Further, it is preferable that a difference between the contact area ratio (maximum value) in the brush leading end and the contact area ratio (maximum value) in the brush trailing end is 15% or more and 40% or less.
Further, in this embodiment as specifically described later, a penetration amount δ1 (mm) of the brush leading end into the photosensitive drum1and a penetration amount δ2 (mm) of the brush trailing end into the photosensitive drum1satisfy a relationship of: δ1>δ2>0. (Brush Application Bias) Further, to the brush member11, the brush power source E11(FIG.20) as the voltage applying means is connected. During the image formation, to the brush member11, a predetermined voltage (brush bias) is applied by the brush power source E11. In this embodiment, during the image formation, to the brush member11, a DC voltage of the negative polarity is applied as the brush voltage. In this embodiment, the brush voltage during the image formation is −350 V. On the other hand, the surface potential in a surface region of the photosensitive drum1passing through the transfer portion P5and moving toward the brush contact portion is 0 to −200 V. Accordingly, the brush voltage is set so that in the brush contact portion, the surface of the photosensitive drum1is on the normal polarity side of the toner and the brush member11is on the non-normal polarity side of the toner. By such a brush voltage setting, the toner charged to the normal polarity is attracted to the photosensitive drum1side, and the toner charged to the non-normal polarity is attracted to the brush member11side. (Behavior of Toner in Brush Contact Portion) The toner which is not transferred from the photosensitive drum1onto the sheet by the transfer roller5is sent to the brush contact portion by rotation of the photosensitive drum1. At this time, when the contact pressure of the brush contact portion is high, the toner particles are easily rolled by being rubbed with the brush member11and the surface of the photosensitive drum1, so that the electric charge of the normal polarity is easily imparted to the residual toner.
However, when the contact pressure is excessively high, in the brush contact portion, at a place where the contact pressure is higher than the contact pressure at a peripheral portion or at a place where the density of the bristle material is higher than the density of the bristle material at a peripheral portion, most of the toner cannot pass through the place, while the toner concentratedly passes through a place where the contact pressure or the density is low. As a result, as shown in part (a) ofFIG.27, a state in which the toner T passed through the brush contact portion is distributed in a stripe shape is formed. Part (a) ofFIG.27shows the state in which a periphery of the brush member11is viewed from an outer periphery side of the photosensitive drum1, in which the residual toner is represented by a gray portion (dot pattern), and a region where the residual toner is not deposited is represented by a non-tinted (color-free) portion. The state in which the toner T passed through the brush contact portion is distributed in the stripe shape will be described using part (b) ofFIG.27. Part (b) ofFIG.27is a schematic view showing a cross section, of the surface of the photosensitive drum1passed through the brush contact portion, cut along the longitudinal direction. In the toner T passed through the brush contact portion, the toner particles of the non-normal polarity (+) are contained. In addition, in a state in which the toner T passes through the brush contact portion in the stripe shape, at a place through which the toner T passes, there is a tendency that the brush member11does not contact a part of the toner particles and an amount of the toner particles passing through the brush contact portion while being kept in the non-normal polarity without being triboelectrically charged by the brush member11becomes large. In the charging portion P2, to the charging roller2, a charging voltage of the same polarity as the normal polarity of the toner T is applied.
For that reason, as shown in part (c) ofFIG.27, the toner T of the normal polarity is pressed against the surface of the photosensitive drum1and passes through the charging portion P2without being deposited on the charging roller2. On the other hand, for the above-described reason, when the toner of the non-normal polarity reaches the charging portion P2, the toner T is attracted to and deposited on the charging roller2by the charging voltage. When the toner deposited on the charging roller2is accumulated, there is a possibility that image defect due to improper charging occurs. For that reason, the toner passed through the brush member11in the stripe shape as described above causes stripe-shaped contamination of the charging roller2with the toner, and finally appears as a stripe-shaped image defect due to the improper charging in a certain position of the image with respect to a main scan direction. On the other hand, in this embodiment, the constitution in which the contact pressure and the contact area ratio on the brush leading end side of the brush contact portion are higher than the contact pressure and the contact area ratio on the brush trailing end side of the brush contact portion was employed. By this, it is possible to prevent the toner T from passing through the brush member in the stripe shape (to uniformly scatter the toner T) while imparting the electric charge of the normal polarity to the toner T. This will be described. Part (a) ofFIG.28is a conceptual diagram showing a contact state of the brush member11to the photosensitive drum1in this embodiment. Part (b) ofFIG.28is a sectional view of the surface of the photosensitive drum1in the brush leading end portion along the longitudinal direction of the photosensitive drum1. Part (c) ofFIG.28is a sectional view of the surface of the photosensitive drum1in a brush central portion along the longitudinal direction of the photosensitive drum1.
Part (d) of FIG. 28 is a sectional view of the surface of the photosensitive drum 1 on the brush trailing end side along the longitudinal direction of the photosensitive drum 1. As shown in part (a) of FIG. 28, the contact pressure and the contact area ratio on the brush leading end side are high, and therefore, many toner particles are rolled in the brush leading end portion, so that electric charges of the normal polarity are imparted to the toner particles by triboelectric charging. However, the toner passing through the brush leading end portion concentrates at a part of the portion with respect to the longitudinal direction. Thereafter, by the rotation of the photosensitive drum 1, as shown in parts (b) and (c) of FIG. 28, with movement of the toner through the brush contact portion toward the brush trailing end side, the bristle materials of the brush member 11 randomly contact the toner. Further, the contact pressure and the contact area ratio become lower toward the brush trailing end side, and therefore, the toner T easily moves freely in the longitudinal direction. The concentration of the toner T is alleviated, and the bristle materials of the brush member 11 uniformly contact the toner T, so that the polarity of a large amount of the toner T can be changed to the normal polarity. Further, the brush voltage is applied to the brush member 11, so that a part of the toner T of the non-normal polarity is attracted to the brush member 11. Thus, according to the constitution of this embodiment, while the electric charge of the normal polarity is imparted to the toner T by the brush member 11, the toner T is prevented from passing through the brush member 11 in the stripe shape and is uniformly scattered, so that an occurrence of the improper charging can be suppressed over a long term.
(Verification Experiment) In order to verify that the improper charging can be prevented by the constitution of this embodiment, an experiment for checking whether or not the improper charging occurs in a plurality of constitution examples different in the constitution of the brush member 11 and in the contact condition with the photosensitive drum 1 was conducted. A table 1 appearing hereinafter shows a principal contact condition and the occurrence or non-occurrence of the improper charging in each of the constitution examples, and a table 2 appearing hereinafter shows detailed constitutions of each of the constitution examples. The constitution described as the example of this embodiment is a constitution example 1-1. In a constitution example 1-2, the contact pressure and the contact area ratio are made substantially constant from the brush leading end side toward the brush trailing end side. In a constitution example 1-3, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side in a degree similar to the degree in the constitution example 1-1, and compared with the constitution example 1-1, the bristle material of the brush member 11 is thick and is low in density. In a constitution example 1-4, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side in a degree similar to the degree in the constitution example 1-1, and compared with the constitution example 1-1, the bristle material of the brush member 11 is high in density. In a constitution example 1-5, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side in a degree similar to the degree in the constitution example 1-1, and compared with the constitution example 1-1, the short width of the brush member 11 is narrow.
In a constitution example 1-6, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side, but compared with the constitution example 1-1, the change amounts of the contact pressure and the contact area ratio are small. In a constitution example 1-7, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side in a degree similar to the degree in the constitution example 1-1, and the short width of the brush member 11 is an intermediary value between the short widths in the constitution examples 1-1 and 1-5. In a constitution example 1-8, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side, and the change amounts thereof are intermediary values between the change amounts in the constitution examples 1-1 and 1-6. An experimental environment is a low-temperature and low-humidity environment (15° C./10% RH). In the experiment, an operation in which images each with a print ratio (coating ratio, image ratio) of 3% were intermittently outputted on two sheets was repeated until the images were outputted on 10000 sheets, and then a solid white sample image was outputted. At that time, whether or not a black spot (dot-like image defect) occurred was checked. In the table 1, the case where a black spot is observed is represented as occurrence of the improper charging ("YES"), and the case where the black spot is not observed is represented as non-occurrence of the improper charging ("NO"). Incidentally, the change amount of the contact pressure is a difference between the contact pressure at the brush leading end and the contact pressure at the brush trailing end. The change amount of the contact area ratio is a difference between the contact area ratio at the brush leading end and the contact area ratio at the brush trailing end.
Further, constitutions which are not specifically mentioned in the constitution examples ("CNS. EXS.") 1-2 to 1-8 are common to the constitution example ("CNS. EX.") 1-1.

TABLE 1

CNS. EX. | PP*1 (gf/mm2) | MCAR*2 (%) | CPCA*3 (gf/mm2) | CARCA*4 (%) | SW*5 (mm) | IC*6
1-1 | 2 | 50 | 1 | 30 | 4 | NO
1-2 | 2 | 50 | 0 | 0 | 4 | YES
1-3 | 4 | 20 | 1 | 30 | 4 | YES
1-4 | 2 | 70 | 1 | 30 | 4 | YES
1-5 | 2 | 50 | 1 | 30 | 2 | YES
1-6 | 2 | 50 | 0.3 | 10 | 4 | YES
1-7 | 2 | 50 | 1 | 30 | 3 | NO
1-8 | 2 | 50 | 0.6 | 18 | 4 | NO

*1 "PP" is the peak pressure. *2 "MCAR" is the maximum contact area ratio. *3 "CPCA" is the contact pressure change amount. *4 "CARCA" is the contact area ratio change amount. *5 "SW" is the short width. *6 "IC" is the improper charging.

TABLE 2

CNS. EX. | BT*1 (denier) | D*2 (kF/inch2) | SW*3 (mm) | MAX*4 (mm) | BLE*5 (mm) | MIN*6 (mm) | BTE*7 (mm) | CW*8 (mm)
1-1 | 2 | 240 | 4 | 1.58 | 1.58 | 0.43 | 0.43 | 1.15
1-2 | 2 | 240 | 4 | 1.2 | 1.2 | 1.03 | 1.2 | 0.17
1-3 | 6 | 180 | 4 | 1.58 | 1.58 | 0.43 | 0.43 | 1.15
1-4 | 2 | 400 | 4 | 1.58 | 1.58 | 0.43 | 0.43 | 1.15
1-5 | 2 | 240 | 2 | 1.58 | 1.58 | 0.43 | 0.43 | 1.15
1-6 | 2 | 240 | 4 | 1.2 | 1.15 | 0.98 | 0.98 | 0.22
1-7 | 2 | 240 | 3 | 1.58 | 1.58 | 0.53 | 0.53 | 1.05
1-8 | 2 | 240 | 4 | 1.42 | 1.42 | 0.61 | 0.61 | 0.81

*1 "BT" is the brush thickness. *2 "D" is the density. *3 "SW" is the short width. *4 "MAX" is the maximum penetration amount. *5 "BLE" is the brush leading end penetration amount. *6 "MIN" is the minimum penetration amount. *7 "BTE" is the brush trailing end penetration amount. *8 "CW" is the penetration amount change width.

As shown in the table 1, in the constitution examples 1-1, 1-7, and 1-8, the black spot did not occur, so that it was confirmed that the improper charging can be prevented. On the other hand, in the constitution examples 1-2 to 1-6, the improper charging occurred. The reason that the improper charging occurred in the constitution example 1-2, in which the contact pressure and the contact area ratio do not decrease from the brush leading end toward the brush trailing end, would be that the contact pressure and the contact area ratio are high even on the brush trailing end side and therefore the toner passes through the brush contact portion in the stripe shape.
From this, it is understood that, as in the constitution example 1-1, the constitution in which the contact pressure and the contact area ratio decrease from the brush leading end toward the brush trailing end is effective in prevention of the occurrence of the improper charging. Further, the improper charging occurred in the constitution example 1-6, in which the change amounts of the contact pressure and the contact area ratio are small, and the improper charging did not occur in the constitution example 1-8, in which the change amounts of the contact pressure and the contact area ratio are larger than those in the constitution example 1-6 and smaller than those in the constitution example 1-1. From this, it is understood that larger change amounts of the contact pressure and the contact area ratio are capable of effectively suppressing the occurrence of the improper charging. Specifically, it is preferable that the change amount of the contact pressure is 0.6 gf/mm2 or more and the change amount of the contact area ratio is 15% or more (preferably 18% or more, as in the constitution example 1-8), and these change amounts are further preferably 1.0 gf/mm2 or more and 30% or more, respectively. However, the timing when the improper charging occurred in the constitution example 1-6 is later than the timing in the constitution example 1-2, in which the contact pressure and the contact area ratio do not decrease, so that depending on a specific constitution (for example, lifetime setting of the charging roller 2) of the image forming apparatus, the occurrence of the improper charging can be suppressed even in the constitution example 1-6 in some cases. Specifically, a difference between the constitution example 1-2 and the constitution example 1-6 will be described.
The penetration amounts of the brush upstream end and the brush downstream end are the same in the constitution example 1-2, whereas the penetration amount of the brush upstream end is larger than the penetration amount of the brush downstream end in the constitution example 1-6. For that reason, in the constitution example 1-6, the contact pressure and the contact area ratio at the upstream end of the brush contact portion are higher than the contact pressure and the contact area ratio at least at the downstream end of the brush contact portion. Accordingly, the timing when the improper charging occurs in the constitution example 1-6 is later than the timing in the constitution example 1-2, in which the contact pressure and the contact area ratio do not decrease, and therefore, it can be said that in the constitution example 1-6, there is a certain effect in suppression of the improper charging. However, from the table 2, in both the constitution example 1-2 and the constitution example 1-6, the maximum penetration amount is obtained in the neighborhood of a central portion, not at the brush upstream end, so that a toner scattering effect from the brush upstream end toward the brush trailing end is not sufficiently obtained in some cases. The reason why the improper charging occurred in the constitution example 1-5 is that the brush member 11, which is extremely short in short width, cannot sufficiently scatter the toner on the brush trailing end side. On the other hand, the improper charging did not occur in the constitution example 1-7, in which the short width of the brush member 11 is 3 mm. Accordingly, it is understood that the short width of the brush member 11 may preferably be 3 mm or more.
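The conditions identified so far (the change amounts of the contact pressure and the contact area ratio, and the short width) can be collected into a single partial check. The sketch below is illustrative only: the function name is ours, the stricter of the quoted thresholds (0.6 gf/mm2 and 18%, as realized in the constitution example 1-8) are used, and further conditions on the brush trailing end are treated separately in the text:

```python
def change_conditions_met(dp, dcar, short_width_mm):
    """Partial check of the preferable conditions identified so far.

    dp: contact pressure change amount (gf/mm^2);
    dcar: contact area ratio change amount (%);
    short_width_mm: short width of the brush member (mm).
    """
    return dp >= 0.6 and dcar >= 18 and short_width_mm >= 3

# (CPCA, CARCA, SW) for some rows of table 1; True where no improper charging occurred
rows = {"1-1": (1, 30, 4), "1-2": (0, 0, 4), "1-5": (1, 30, 2),
        "1-6": (0.3, 10, 4), "1-8": (0.6, 18, 4)}
print({k: change_conditions_met(*v) for k, v in rows.items()})
# {'1-1': True, '1-2': False, '1-5': False, '1-6': False, '1-8': True}
```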
The reason why the improper charging occurred in the constitution example 1-3 would be that a minimum value (contact pressure at the brush trailing end portion) of the contact pressure of the brush member 11 is excessively high, and therefore, the toner passed through the brush member 11 in the stripe shape. For that reason, the brush trailing end contact pressure may preferably be 1.5 gf/mm2 or less (more preferably 1.4 gf/mm2 or less, as in the constitution example 1-8). The reason why the improper charging occurred in the constitution example 1-4 would be that the contact area ratio at the brush trailing end portion is excessively high, and therefore, the toner passes through the brush member 11 in the stripe shape. For that reason, it is preferable that the contact area ratio at the brush trailing end portion is made, for example, 40% or less (more preferably 32% or less, as in the constitution example 1-8). Further, the density of the bristle material of the brush member 11 may preferably be made 350 kF/inch2 or less as described above. As described above, in this embodiment, the constitution in which the contact pressure and the contact area ratio on the brush leading end side are higher than the contact pressure and the contact area ratio on the brush trailing end side is employed, so that the occurrence of the improper charging can be suppressed for a long term. Modified Embodiment In this embodiment, the brush member 11 with a constant bristle height was disposed in the inclined state relative to the tangential direction of the photosensitive drum 1, so that the constitution in which the contact pressure and the contact area ratio are higher on the brush leading end side than on the brush trailing end side was realized.
The present invention is not limited thereto, but a constitution in which the contact pressure and the contact area ratio are decreased by providing, for example, a stepped portion of the bristle height between the brush leading end and the brush trailing end may be employed. Fifth Embodiment In a fifth embodiment, a constitution in which a bristle material density of the brush member 11 is different depending on a place is employed in order to make the contact pressure and the contact area ratio higher on the brush leading end side than on the brush trailing end side. In the following, elements represented by the reference numerals or symbols common to the fourth and fifth embodiments have substantially the same constitutions and functions as those in the fourth embodiment, and a difference from the fourth embodiment will be principally described. In the brush member 11 in this embodiment, the bristle material density on the brush leading end side is made higher than the bristle material density on the brush trailing end side. In an example of this embodiment, electroconductive threads 11a with a thickness of 2 denier are used as the bristle materials, and are decreased in density at three levels of 240 kF/inch2, 200 kF/inch2, and 160 kF/inch2 for each 2 mm from the brush leading end side toward the brush trailing end side. The short width of the brush member 11 in this embodiment is 6 mm. A schematic view in the case where the brush member 11 is observed from the free end side of the bristle materials is shown in each of parts (a) and (b) of FIG. 29. Part (a) of FIG. 29 shows this embodiment, in which the density of the bristle materials (electroconductive threads 11a) decreases toward the brush trailing end side (downstream side of the rotational direction R1 of the photosensitive drum 1). On the other hand, in the fourth embodiment shown in part (b) of FIG. 29, the density of the bristle materials (electroconductive threads 11a) is constant.
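The stepped density profile described above (three 2 mm bands at 240, 200, and 160 kF/inch2 across the 6 mm short width) can be expressed as a simple lookup. A sketch, with the position measured in mm from the brush leading end (the function name is ours):

```python
def bristle_density(pos_mm):
    """Bristle material density (kF/inch^2) at pos_mm from the brush leading end.

    Three 2 mm bands over the 6 mm short width, as in the example of this embodiment.
    """
    if not 0 <= pos_mm <= 6:
        raise ValueError("outside the 6 mm short width")
    bands = [240, 200, 160]                 # leading end side -> trailing end side
    return bands[min(int(pos_mm // 2), 2)]  # pos_mm == 6 belongs to the last band

print(bristle_density(1), bristle_density(3), bristle_density(5))  # 240 200 160
```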
Thus, in this embodiment, a constitution in which the bristle material density in a first position on the brush leading end side is higher than the bristle material density in a second position on the brush trailing end side was employed. This embodiment is an example of a constitution in which the contact pressure and the contact area ratio in the first position on the brush leading end side are higher than the contact pressure and the contact area ratio in the second position on the brush trailing end side. Also in this embodiment, the Clark-Evans index w of the brush member 11 may desirably be w≥1. In this embodiment, different from the fourth embodiment, there is no need to dispose the brush member 11 in the inclined state relative to the photosensitive drum 1. In the example of this embodiment, β=0 holds in FIG. 1. In this example, the penetration amount (maximum penetration amount) of the brush member 11 into the photosensitive drum 1 was 1 mm. In the example of this embodiment, the contact pressure of the brush member 11 to the photosensitive drum 1 is 2 gf/mm2 at the brush leading end and is 1 gf/mm2 at the brush trailing end. Further, the contact area ratio is 50% at the brush leading end and is 20% at the brush trailing end. Also in this embodiment, the brush voltage may preferably be applied to the brush member 11. The brush voltage is set at −350 V similarly as in the fourth embodiment, for example. (Verification Experiment) In order to verify that the improper charging can be prevented by the constitution of this embodiment, an experiment for checking whether or not the improper charging occurs in a plurality of constitution examples different in the constitution of the brush member 11 and in the contact condition with the photosensitive drum 1 was conducted.
A table 3 appearing hereinafter shows a principal contact condition and the occurrence or non-occurrence of the improper charging in each of the constitution examples, and a table 4 appearing hereinafter shows detailed constitutions of each of the constitution examples. The experimental environment, the output image, the sample image, and the evaluation method of the improper charging are common to the fourth and fifth embodiments. The constitution described as the example of this embodiment is a constitution example 2-1. In a constitution example 2-2, the contact pressure and the contact area ratio are made substantially constant from the brush leading end side toward the brush trailing end side. In a constitution example 2-3, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side in a degree similar to the degree in the constitution example 2-1, and compared with the constitution example 2-1, the bristle material of the brush member 11 is thick and is low in density. In a constitution example 2-4, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side in a degree similar to the degree in the constitution example 2-1, and compared with the constitution example 2-1, the bristle material of the brush member 11 is high in density as a whole. In a constitution example 2-5, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side, but compared with the constitution example 2-1, the change amounts of the contact pressure and the contact area ratio are small. In a constitution example 2-6, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side, and the change amounts thereof are intermediary values between the change amounts in the constitution examples 2-1 and 2-5.
TABLE 3

CNS. EX. | PP*1 (gf/mm2) | MCAR*2 (%) | CPCA*3 (gf/mm2) | CARCA*4 (%) | SW*5 (mm) | IC*6
2-1 | 2 | 50 | 1 | 30 | 6 | NO
2-2 | 2 | 50 | 0 | 0 | 6 | YES
2-3 | 4 | 50 | 1 | 30 | 6 | YES
2-4 | 3 | 70 | 1 | 30 | 6 | YES
2-5 | 2 | 50 | 0.3 | 10 | 6 | YES
2-6 | 2 | 50 | 0.6 | 18 | 6 | NO

*1 "PP" is the peak pressure. *2 "MCAR" is the maximum contact area ratio. *3 "CPCA" is the contact pressure change amount. *4 "CARCA" is the contact area ratio change amount. *5 "SW" is the short width. *6 "IC" is the improper charging.

TABLE 4

CNS. EX. | BT*1 (denier) | DENSITY*2 (LE/CT/TE) (kF/inch2) | SW*3 (mm) | MAX*4 (mm) | MIN*5 (mm) | CW*6 (mm)
2-1 | 2 | 240/200/160 | 6 | 1 | 0.97 | 0.03
2-2 | 2 | 240/240/240 | 6 | 1 | 0.97 | 0.03
2-3 | 6 | 130/110/90 | 6 | 1 | 0.97 | 0.03
2-4 | 2 | 400/360/320 | 6 | 1 | 0.97 | 0.03
2-5 | 2 | 240/220/200 | 6 | 1 | 0.97 | 0.03
2-6 | 2 | 240/210/180 | 6 | 1 | 0.97 | 0.03

*1 "BT" is the brush thickness. *2 "DENSITY" is measured at the leading end ("LE"), the center ("CT"), and the trailing end ("TE"). *3 "SW" is the short width. *4 "MAX" is the maximum penetration amount. *5 "MIN" is the minimum penetration amount. *6 "CW" is the penetration amount change width.

As shown in the table 3, in the constitution examples 2-1 and 2-6, the black spot did not occur, so that it was confirmed that the improper charging can be prevented. On the other hand, in the constitution examples 2-2 to 2-5, the improper charging occurred. The reason that the improper charging occurred in the constitution example 2-2, in which the contact pressure and the contact area ratio do not decrease from the brush leading end toward the brush trailing end, would be that the contact pressure and the contact area ratio are high even on the brush trailing end side and therefore the toner passes through the brush contact portion in the stripe shape. From this, it is understood that, as in the constitution example 2-1, the constitution in which the contact pressure and the contact area ratio decrease from the brush leading end toward the brush trailing end is effective in prevention of the occurrence of the improper charging.
Further, the improper charging occurred in the constitution example 2-5, in which the change amounts of the contact pressure and the contact area ratio are small, and the improper charging did not occur in the constitution example 2-6, in which the change amounts of the contact pressure and the contact area ratio are larger than those in the constitution example 2-5 and smaller than those in the constitution example 2-1. From this, it is understood that larger change amounts of the contact pressure and the contact area ratio are capable of effectively suppressing the occurrence of the improper charging. Specifically, it is preferable that the change amount of the contact pressure is 0.6 gf/mm2 or more and the change amount of the contact area ratio is 15% or more, and these change amounts are more preferably 1.0 gf/mm2 or more and 30% or more, respectively. However, the timing when the improper charging occurred in the constitution example 2-5 is later than the timing in the constitution example 2-2, in which the contact pressure and the contact area ratio do not decrease, so that depending on a specific constitution (for example, lifetime setting of the charging roller 2) of the image forming apparatus, the occurrence of the improper charging can be suppressed even in the constitution example 2-5 in some cases. The reason why the improper charging occurred in the constitution example 2-3 would be that a minimum value (contact pressure at the brush trailing end portion) of the contact pressure of the brush member 11 is excessively high, and therefore, the toner passed through the brush member 11 in the stripe shape. For that reason, the brush trailing end contact pressure may preferably be 1.5 gf/mm2 or less, for example.
The reason why the improper charging occurred in the constitution example 2-4 would be that the contact area ratio at the brush trailing end portion is excessively high, and therefore, the toner passes through the brush member 11 in the stripe shape. For that reason, it is preferable that the contact area ratio at the brush trailing end portion is made, for example, 40% or less. Further, the density of the bristle material at the brush trailing end portion may preferably be made 200 kF/inch2 or less, more preferably 180 kF/inch2 or less. As described above, also by the constitution of this embodiment, the occurrence of the improper charging can be suppressed for a long term. Modified Embodiment In this embodiment, the bristle material density of the brush member 11 was changed at the three levels, but a constitution in which the contact pressure and the contact area ratio are decreased by continuously decreasing the bristle material density from the brush leading end side toward the brush trailing end side may be employed. Further, the bristle material density may be decreased at two levels or at four or more levels. Further, the brush member 11 in this embodiment may be disposed in the inclined state relative to the photosensitive drum 1 similarly as in the fourth embodiment. Sixth Embodiment In a sixth embodiment, a constitution in which a bristle material thickness of the brush member 11 is different depending on a place is employed in order to make the contact pressure and the contact area ratio higher on the brush leading end side than on the brush trailing end side. In the following, elements represented by the reference numerals or symbols common to the fourth and sixth embodiments have substantially the same constitutions and functions as those in the fourth embodiment, and a difference from the fourth embodiment will be principally described.
In the brush member 11 in this embodiment, the bristle material thickness on the brush leading end side is made larger than the bristle material thickness on the brush trailing end side. In an example of this embodiment, electroconductive threads 11a with a density of 240 kF/inch2 are used as the bristle materials. The electroconductive threads 11a are decreased in thickness at three levels of 2 denier, 1.5 denier, and 1 denier for each 2 mm from the brush leading end side toward the brush trailing end side. The short width of the brush member 11 in this embodiment is 6 mm. A schematic view in the case where the brush member 11 is observed from the free end side of the bristle materials is shown in each of parts (a) and (b) of FIG. 30. Part (a) of FIG. 30 shows this embodiment, in which the bristle materials (electroconductive threads 11a) become thin toward the brush trailing end side (downstream side of the rotational direction R1 of the photosensitive drum 1). On the other hand, in the fourth embodiment shown in part (b) of FIG. 30, the thickness of the bristle materials (electroconductive threads 11a) is constant. Thus, in this embodiment, a constitution in which the bristle material thickness in a first position on the brush leading end side is larger (thicker) than the bristle material thickness in a second position on the brush trailing end side was employed. This embodiment is an example of a constitution in which the contact pressure and the contact area ratio in the first position on the brush leading end side are higher than the contact pressure and the contact area ratio in the second position on the brush trailing end side. Also in this embodiment, the Clark-Evans index w of the brush member 11 may desirably be w≥1. In this embodiment, different from the fourth embodiment, there is no need to dispose the brush member 11 in the inclined state relative to the photosensitive drum 1. In the example of this embodiment, β=0 holds in FIG. 1.
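The Clark-Evans index w referred to here and in the preceding embodiments is the ratio of the observed mean nearest-neighbour distance between points (here, bristle tufts) to the mean expected for a random arrangement of the same density, 1/(2·sqrt(density)); w≥1 indicates an arrangement at least as uniform as a random one. A minimal sketch that ignores edge corrections (the function name and the sample grid are ours):

```python
import math

def clark_evans_index(points, area):
    """Clark-Evans index w for 2-D points (e.g. tuft positions) in a region of given area."""
    n = len(points)
    nearest = []
    for i, (xi, yi) in enumerate(points):
        nearest.append(min(math.hypot(xi - xj, yi - yj)
                           for j, (xj, yj) in enumerate(points) if j != i))
    observed = sum(nearest) / n                   # observed mean nearest-neighbour distance
    expected = 1.0 / (2.0 * math.sqrt(n / area))  # expectation for a random pattern
    return observed / expected

# A perfectly regular 4 x 4 grid of tufts in a 4 mm x 4 mm region
grid = [(x + 0.5, y + 0.5) for x in range(4) for y in range(4)]
print(clark_evans_index(grid, 16.0))  # 2.0, i.e. w >= 1 (more uniform than random)
```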
In this example, the penetration amount (maximum penetration amount) of the brush member 11 into the photosensitive drum 1 was 1 mm. In the example of this embodiment, the contact pressure of the brush member 11 to the photosensitive drum 1 is 2 gf/mm2 at the brush leading end and is 1 gf/mm2 at the brush trailing end. Further, the contact area ratio is 50% at the brush leading end and is 20% at the brush trailing end. Also in this embodiment, the brush voltage may preferably be applied to the brush member 11. The brush voltage is set at −350 V similarly as in the fourth embodiment, for example. (Verification Experiment) In order to verify that the improper charging can be prevented by the constitution of this embodiment, an experiment for checking whether or not the improper charging occurs in a plurality of constitution examples different in the constitution of the brush member 11 and in the contact condition with the photosensitive drum 1 was conducted. A table 5 appearing hereinafter shows a principal contact condition and the occurrence or non-occurrence of the improper charging in each of the constitution examples, and a table 6 appearing hereinafter shows detailed constitutions of each of the constitution examples. The experimental environment, the output image, the sample image, and the evaluation method of the improper charging are common to the fourth and sixth embodiments. The constitution described as the example of this embodiment is a constitution example 3-1. In a constitution example 3-2, the contact pressure and the contact area ratio are made substantially constant from the brush leading end side toward the brush trailing end side.
In a constitution example 3-3, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side in a degree similar to the degree in the constitution example 3-1, and compared with the constitution example 3-1, the bristle material of the brush member 11 is thick and is low in density as a whole. In a constitution example 3-4, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side in a degree similar to the degree in the constitution example 3-1, and compared with the constitution example 3-1, the bristle material of the brush member 11 is high in density as a whole. In a constitution example 3-5, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side, but compared with the constitution example 3-1, the change amounts of the contact pressure and the contact area ratio are small. In a constitution example 3-6, the contact pressure and the contact area ratio decrease from the brush leading end side toward the brush trailing end side, and the change amounts thereof are intermediary values between the change amounts in the constitution examples 3-1 and 3-5.

TABLE 5

CNS. EX. | PP*1 (gf/mm2) | MCAR*2 (%) | CPCA*3 (gf/mm2) | CARCA*4 (%) | SW*5 (mm) | IC*6
3-1 | 2 | 50 | 1 | 30 | 4 | NO
3-2 | 2 | 50 | 0 | 0 | 4 | YES
3-3 | 4 | 20 | 1 | 30 | 4 | YES
3-4 | 2 | 70 | 1 | 30 | 4 | YES
3-5 | 2 | 50 | 0.3 | 10 | 2 | YES
3-6 | 2 | 50 | 0.6 | 18 | 3 | NO

*1 "PP" is the peak pressure. *2 "MCAR" is the maximum contact area ratio. *3 "CPCA" is the contact pressure change amount. *4 "CARCA" is the contact area ratio change amount. *5 "SW" is the short width. *6 "IC" is the improper charging.
TABLE 6

CNS. EX. | BT*1 (LE/CT/TE) (denier) | DENSITY*2 (LE/CT/TE) (kF/inch2) | SW*3 (mm) | MAX*4 (mm) | MIN*5 (mm) | CW*6 (mm)
3-1 | 2/1.5/1 | 240/270/160 | 6 | 1 | 0.97 | 0.03
3-2 | 2 | 240 | 6 | 1 | 0.97 | 0.03
3-3 | 6/5/4 | 130/150/170 | 6 | 1 | 0.97 | 0.03
3-4 | 2/1.5/1 | 300/330/360 | 6 | 1 | 0.97 | 0.03
3-5 | 2/1.85/1.7 | 240/247/255 | 6 | 1 | 0.97 | 0.03
3-6 | 2/1.7/1.4 | 240/255/270 | 6 | 1 | 0.97 | 0.03

*1 "BT" is the brush thickness at the leading end ("LE"), the center ("CT"), and the trailing end ("TE"). *2 "DENSITY" is measured at the leading end ("LE"), the center ("CT"), and the trailing end ("TE"). *3 "SW" is the short width. *4 "MAX" is the maximum penetration amount. *5 "MIN" is the minimum penetration amount. *6 "CW" is the penetration amount change width.

As shown in the table 5, in the constitution examples 3-1 and 3-6, the black spot did not occur, so that it was confirmed that the improper charging can be prevented. On the other hand, in the constitution examples 3-2 to 3-5, the improper charging occurred. The reason that the improper charging occurred in the constitution example 3-2, in which the contact pressure and the contact area ratio do not decrease from the brush leading end toward the brush trailing end, would be that the contact pressure and the contact area ratio are high even on the brush trailing end side and therefore the toner passes through the brush contact portion in the stripe shape. From this, it is understood that, as in the constitution example 3-1, the constitution in which the contact pressure and the contact area ratio decrease from the brush leading end toward the brush trailing end is effective in prevention of the occurrence of the improper charging.
Further, the improper charging occurred in the constitution example 3-5, in which the change amounts of the contact pressure and the contact area ratio are small, and the improper charging did not occur in the constitution example 3-6, in which the change amounts of the contact pressure and the contact area ratio are larger than those in the constitution example 3-5 and smaller than those in the constitution example 3-1. From this, it is understood that larger change amounts of the contact pressure and the contact area ratio are capable of effectively suppressing the occurrence of the improper charging. Specifically, it is preferable that the change amount of the contact pressure is 0.6 gf/mm2 or more and the change amount of the contact area ratio is 15% or more, and these change amounts are more preferably 1.0 gf/mm2 or more and 30% or more, respectively. However, the timing when the improper charging occurred in the constitution example 3-5 is later than the timing in the constitution example 3-2, in which the contact pressure and the contact area ratio do not decrease, so that depending on a specific constitution (for example, lifetime setting of the charging roller 2) of the image forming apparatus, the occurrence of the improper charging can be suppressed even in the constitution example 3-5 in some cases. The reason why the improper charging occurred in the constitution example 3-3 would be that a minimum value (contact pressure at the brush trailing end portion) of the contact pressure of the brush member 11 is excessively high, and therefore, the toner passed through the brush member 11 in the stripe shape. For that reason, the brush trailing end contact pressure may preferably be 1.5 gf/mm2 or less, for example.
The reason why the improper charging occurred in the constitution example 3-4 would be that the contact area ratio at the brush trailing end portion is excessively high, and therefore, the toner passes through the brush member11in the stripe shape. For that reason, it is preferable that the contact area ratio at the brush trailing end portion is made, for example, 40% or less. As described above, the constitution of this embodiment can also suppress the occurrence of the improper charging for a long term.

Modified Embodiment

In this embodiment, the bristle material thickness of the brush member11was changed at the three levels, but a constitution in which the contact pressure and the contact area ratio are decreased by continuously decreasing the bristle material thickness from the brush leading end side toward the brush trailing end side may be employed. Further, the bristle material thickness may be decreased at two levels or at four or more levels. Further, the brush member11in this embodiment may be disposed in the inclined state relative to the photosensitive drum1similarly as in the fourth embodiment.

Seventh Embodiment

In a seventh embodiment, a constitution in which a penetration amount of the brush member11into the photosensitive drum1becomes small from the upstream side (brush leading end side) toward the downstream side (brush trailing end side) with respect to a rotational direction of the photosensitive drum1, and a detailed condition thereof, will be studied. In the following, elements represented by the reference numerals or symbols common to the fourth and seventh embodiments have substantially the same constitutions and functions as those in the fourth embodiment, and a difference from the fourth embodiment will be principally described.
A positional relationship between the brush member11and the photosensitive drum1when the brush member11is disposed in the inclined state relative to the photosensitive drum1is shown inFIG.31.FIG.31is a schematic view of the brush member11and the photosensitive drum1as viewed in the rotational axis direction of the photosensitive drum1. As the brush member11in this embodiment, the brush member11described in the example of the fourth embodiment was used. That is, the brush member11is constant in the bristle height L1, the density, and the thickness of the bristle materials as shown in part (a) ofFIG.22. Further, the brush member11is actually in a state in which the brush member11is flexed along the surface of the photosensitive drum1as shown in part (b) ofFIG.22, but inFIG.31, an interference thereof with the photosensitive drum1is disregarded and a state in which the bristle material enters the photosensitive drum1is illustrated. A relation of the penetration amount δ of the brush member11into the photosensitive drum1will be described. InFIG.31, a rotational axis O of the photosensitive drum1is the origin of coordinates. A coordinate axis extending in parallel to the short direction SD of the brush member11is an X-axis, and a coordinate axis (an axis extending in a direction parallel to a direction normal to the base cloth11b) perpendicular to the X-axis is a Y-axis. At a leading end (upstream end) of the brush member11with respect to the rotational direction R1of the photosensitive drum1, the penetration amount of the brush member11into a phantom circle Cl defining the surface of the photosensitive drum1is represented by δ1. At a trailing end (downstream end) of the brush member11with respect to the rotational direction R1, the penetration amount of the brush member11into the phantom circle Cl is represented by δ2.
At a center between the leading end and the trailing end of the brush member11with respect to the short direction SD, the penetration amount of the brush member11into the phantom circle Cl is represented by δ3. At this time, the penetration amounts δ1, δ2 and δ3 are represented by the following (formula 4) to (formula 6), respectively.

δ1 = r×sin(90−θ1) − P   (formula 4)
δ2 = r×sin(90−θ2) − P   (formula 5)
δ3 = r×sin(90−θ3) − P   (formula 6)

Here, r is a radius of the photosensitive drum1, and P is a distance from the rotational axis O of the photosensitive drum1to the free end of the bristle material of the brush member11with respect to a Y-axis direction. In other words, P is a length obtained by subtracting the bristle height L1of the brush member11from a distance L5from the rotational axis O to the base cloth of the brush member11with respect to the Y-axis direction. The first terms on the right sides of (formula 4) to (formula 6) represent the Y-coordinates of the intersection points A1, A2and A3between the phantom circle Cl and the associated bristle materials positioned at the leading end, the trailing end, and the center, respectively. Further, the angles θ1, θ2 and θ3 (°) which are formed by the rectilinear lines extending from the rotational axis O of the photosensitive drum1through the intersection points A1, A2and A3between the brush member11and the phantom circle Cl and by the associated rectilinear lines parallel to the Y-axis are represented by the following (formula 7) to (formula 9), respectively.

θ1 = 90 − ACOS(Q1/r)   (formula 7)
θ2 = 90 − ACOS(Q2/r)   (formula 8)
θ3 = 90 − ACOS(Q3/r)   (formula 9)

Here, Q1is a distance from the rotational axis O to the leading end of the brush member11in the X-axis direction. Q2is a distance from the rotational axis O to the trailing end of the brush member11in the X-axis direction. Q3is a distance from the rotational axis O to the center of the brush member11in the X-axis direction. That is, Q3=(Q1+Q2)/2 holds.
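The penetration geometry of (formula 4) to (formula 9) can be sketched in code. The following is a minimal sketch, not part of the disclosure: the function name and argument order are assumptions, and Q2 and Q3 are derived from Q1 and the short width L3 (Q2=Q1+L3, Q3=Q1+L3/2).

```python
import math

# Minimal sketch of (formula 4)-(formula 9); names are illustrative.
def penetration_amounts(r, P, Q1, L3):
    """Return (delta1, delta3, delta2): penetration amounts at the brush
    leading end, center, and trailing end for a drum of radius r, with P
    the Y-distance from the drum axis O to the bristle free ends and Q1
    the X-distance from O to the brush leading end."""
    def delta(Q):
        # (formula 7)-(formula 9): angle between the Y-axis and the line O-A
        theta = 90.0 - math.degrees(math.acos(Q / r))
        # (formula 4)-(formula 6): Y-coordinate of the intersection minus P
        return r * math.sin(math.radians(90.0 - theta)) - P

    Q2 = Q1 + L3          # trailing end
    Q3 = Q1 + L3 / 2.0    # center
    return delta(Q1), delta(Q3), delta(Q2)
```

With r = 12 mm and P and Q1 chosen so that θ3 = 16° and δ3 = 1.2 mm, as in the example of this embodiment, the sketch gives δ1 ≈ 1.59 mm and δ2 ≈ 0.43 mm, close to the stated δ1 = 1.6 mm and δ2 = 0.45 mm.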
Further, when the short(-side) width of the brush member11is L3, Q2=Q1+L3and Q3=Q1+L3/2 hold. Further, ACOS is the inverse trigonometric function of cosine (arccosine). The penetration amount of the brush member11in the example of this embodiment was δ1=1.6 mm at the brush leading end, δ3=1.2 mm at the center, and δ2=0.45 mm at the brush trailing end. That is, a constitution in which the penetration amount δ1 at the upstream end of the brush member11with respect to the rotational direction R1of the photosensitive drum1is larger than the penetration amount δ2 at the downstream end of the brush member11and in which δ2>0 holds was employed. Further, the radius r of the photosensitive drum1was 12 mm. Further, θ3 is a contact angle of the brush member11to the photosensitive drum1. In the example of this embodiment, setting was made so that the contact angle θ3 is 16°. Also in this embodiment, the brush voltage may preferably be applied to the brush member11. The brush voltage is set at −350 V similarly as in the fourth embodiment, for example. Further, also in this embodiment, the Clark-Evans index w of the brush member11may desirably satisfy w≥1.

(Verification Experiment)

In order to verify that the improper charging can be prevented by the constitution of this embodiment, an experiment for checking whether or not the improper charging occurs in a plurality of constitution examples different in constitution of the brush member11and in contact condition with the photosensitive drum1was conducted. A table 7 appearing hereinafter shows a contact condition and the occurrence or non-occurrence of the improper charging in each of the constitution examples. The experimental environment, the output image, the sample image, and the evaluation method of the improper charging are common to the fourth and seventh embodiments. The constitution described as the example of this embodiment corresponds to the constitution example 4-1.
In the constitution example 4-2, the penetration amount is constant from the leading end to the trailing end of the brush member11(δ1=δ3=δ2). In the constitution example 4-3, the penetration amount increases from the leading end to the trailing end of the brush member11(δ1<δ3<δ2). In the constitution example 4-4, the penetration amount decreases from the leading end to the trailing end of the brush member11, but compared with the constitution example 4-1, a degree of the decrease is moderate. A difference in the penetration amount between the respective constitution examples is, for example, set by setting the bristle height L1at a different value for each of three equal regions divided from the short width L3of the brush member11with respect to the short direction SD (in which the bristle height is changed at three levels). For example, in the constitution example 4-3, the bristle height increases from the brush leading end side toward the brush trailing end side.

TABLE 7

CNS.   PA*1 (mm)                    RATIO                     SW*2
EX.    LE(δ1)   CT(δ3)   TE(δ2)     δ3/δ1   δ2/δ3   δ2/δ1    (mm)   IC*3
4-1    1.6      1.2      0.45       0.75    0.38    0.28     4      NO
4-2    1.2      1.2      1.2        1.00    1.00    1.00     4      YES
4-3    0.4      1.2      1.6        3.00    1.33    4.00     4      YES
4-4    1.3      1.2      0.9        0.92    0.75    0.69     4      SL

*1 "PA" is the penetration amount at the leading end ("LE"), the center ("CT"), and the trailing end ("TE").
*2 "SW" is the short width.
*3 "IC" is the improper charging. "YES" represents that the improper charging occurred. "NO" represents that the improper charging did not occur. "SL" represents that the improper charging slightly occurred.

As shown in the table 7, in the constitution example 4-1, black stripes did not occur, so that it was confirmed that the improper charging can be prevented. On the other hand, in the constitution examples 4-2 and 4-3, the improper charging occurred. In the constitution example 4-4, the slight improper charging occurred. In the constitution examples 4-2 and 4-3, the penetration amount δ2 at the brush trailing end is equal to or larger than δ1.
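The RATIO columns of the table 7 follow directly from the PA columns. A small sketch (the dictionary layout and the two-decimal rounding are assumptions for illustration):

```python
# PA columns of the table 7, transcribed as (delta1, delta3, delta2) in mm.
ROWS = {
    "4-1": (1.6, 1.2, 0.45),
    "4-2": (1.2, 1.2, 1.2),
    "4-3": (0.4, 1.2, 1.6),
    "4-4": (1.3, 1.2, 0.9),
}

def ratios(d1, d3, d2):
    """Return (delta3/delta1, delta2/delta3, delta2/delta1), rounded to
    two decimals as the table prints them."""
    return round(d3 / d1, 2), round(d2 / d3, 2), round(d2 / d1, 2)
```

Recomputing, for example, the constitution example 4-1 reproduces the published 0.75, 0.38 and 0.28.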
The reason why the improper charging occurred in the constitution examples 4-2 and 4-3 would be that the contact pressure or the contact area ratio is excessively high in the brush trailing end portion and therefore the toner passes through the brush member11in the stripe shape. On the other hand, the reason why the improper charging did not occur or only slightly occurred in the constitution examples 4-1 and 4-4, in which the penetration amount δ2 at the brush trailing end is smaller than the penetration amount δ1 at the brush leading end, would be that the contact pressure and the contact area ratio in the brush trailing end portion are low and thus the toner can be scattered. Further, in the constitution example 4-4, which is larger in the ratio δ2/δ1 of the penetration amounts δ1 and δ2 at the brush leading end and the brush trailing end than the constitution example 4-1, the slight improper charging occurred. From this, it is preferable that the ratio δ2/δ1 between the penetration amounts δ1 and δ2 at the brush leading end and the brush trailing end is small. For example, δ2/δ1<0.69 is preferred. Further, since the slight improper charging occurred in the constitution example 4-4, which is smaller in the difference (δ1−δ2) between the penetration amounts δ1 and δ2 at the brush leading end and the brush trailing end than the constitution example 4-1, the difference (δ1−δ2) between δ1 and δ2 may preferably be large. For example, (δ1−δ2)>0.4 is preferred. In the above-described table 7, the ratios between pairs of the penetration amounts δ1 to δ3 at the leading end, the center, and the trailing end of the brush member11are shown. When each ratio is less than 1, the ratio shows that the penetration amount of the brush member11decreases from the upstream side toward the downstream side with respect to the rotational direction R1of the photosensitive drum1.
In this case, as the ratio is closer to 1, the degree of the decrease in penetration amount is more moderate (the decrease rate is small), and as the ratio is closer to 0, the degree of the decrease in penetration amount is more abrupt (the decrease rate is large). In the constitution examples 4-1 and 4-4, the improper charging is suppressed, and therefore, a relationship of 1>δ3/δ1>δ2/δ3 may preferably hold. This relationship means that the decrease rate of the penetration amount from the brush leading end to the brush center is relatively small and that the decrease rate of the penetration amount from the brush center to the brush trailing end is relatively large. By this constitution, in a portion on a side upstream of the center of the brush member11, which is a portion large in penetration amount, the toner is triboelectrically charged properly, so that the polarity of the toner can be changed to the normal polarity. Further, the penetration amount becomes small in a portion downstream of the center of the brush member11, so that the toner is scattered and can be prevented from passing through the brush member11in the stripe shape. As described above, the constitution of this embodiment can also suppress the occurrence of the improper charging for a long term.

Eighth Embodiment

In an eighth embodiment, a condition in which the brush member11contacts the photosensitive drum1with a proper penetration amount even in the case where an outer diameter of the photosensitive drum1is changed will be studied. In the following, elements represented by the reference numerals or symbols common to the first, fourth, seventh and eighth embodiments have substantially the same constitutions and functions as those in the first and fourth embodiments, and a difference from the first and fourth embodiments will be principally described. As the brush member11in this embodiment, the brush member11described in the examples of the fourth and seventh embodiments was used.
That is, the brush member11is constant in the bristle height L1, the density, and the thickness of the bristle materials as shown in part (a) ofFIG.22. Definition of the penetration amounts δ1 to δ3 of the brush member11into the photosensitive drum1and definition of the contact angle θ3 of the brush member11are the same as those described in the seventh embodiment. In this embodiment, the penetration amount of the brush member11is fixed at δ1=1.6 mm at the brush leading end and at δ3=1.2 mm at the brush center, and the penetration amount δ2 at the brush trailing end is controlled by adjusting the contact angle θ3 to the photosensitive drum1with a different outer diameter. The radius r of the photosensitive drum1studied ranges from 6 mm to 24 mm. Also in this embodiment, the brush voltage may preferably be applied to the brush member11. The brush voltage is set at −350 V similarly as in the fourth embodiment, for example. Further, also in this embodiment, the Clark-Evans index w of the brush member11may desirably satisfy w≥1.

(Verification Experiment)

In order to verify that the improper charging can be prevented by the constitution of this embodiment, an experiment for checking whether or not the improper charging occurs in a plurality of constitution examples different in outer diameter of the photosensitive drum1was conducted. A table 8 appearing hereinafter shows a contact condition and the occurrence or non-occurrence of the improper charging in the case where the penetration amount δ3 at the brush center is fixed at 1.2 mm and the contact angle θ3 is set at 16°, and a table 9 appearing hereinafter shows a contact condition and the occurrence or non-occurrence of the improper charging in the case where δ1=1.6 mm and δ3=1.2 mm are set by adjusting the contact angle θ3. The experimental environment, the output image, the sample image, and the evaluation method of the improper charging are common to the fourth and eighth embodiments.
TABLE 8

PDR*1   BA*2   PA*3 (mm)                    RATIO                     SW*4
(mm)    (°)    LE(δ1)   CT(δ3)   TE(δ2)     δ3/δ1   δ2/δ3   δ2/δ1    (mm)   IC*5
6       16     1.42     1.20     0.21       0.85    0.18    0.15     4      YES
8       16     1.50     1.20     0.32       0.80    0.27    0.21     4      YES
10      16     1.55     1.20     0.39       0.77    0.33    0.25     4      SL
12      16     1.60     1.20     0.45       0.75    0.38    0.28     4      NO
14      16     1.61     1.20     0.47       0.75    0.39    0.29     4      NO
16      16     1.63     1.20     0.49       0.74    0.41    0.30     4      NO
18      16     1.65     1.20     0.51       0.73    0.43    0.31     4      NO
20      16     1.66     1.20     0.52       0.73    0.43    0.32     4      NO
22      16     1.66     1.20     0.52       0.72    0.43    0.31     4      NO
24      16     1.69     1.20     0.52       0.71    0.43    0.31     4      NO

*1 "PDR" is the photosensitive drum radius.
*2 "BA" is the brush angle.
*3 "PA" is the penetration amount at the leading end ("LE"), the center ("CT"), and the trailing end ("TE").
*4 "SW" is the short width.
*5 "IC" is the improper charging. "YES" represents that the improper charging occurred. "NO" represents that the improper charging did not occur. "SL" represents that the improper charging slightly occurred.

TABLE 9

PDR*1   BA*2   PA*3 (mm)                    RATIO                     SW*4
(mm)    (°)    LE(δ1)   CT(δ3)   TE(δ2)     δ3/δ1   δ2/δ3   δ2/δ1    (mm)   IC*5
6       25.4   1.60     1.20     −0.11      0.75    −0.09   −0.07    4      YES
8       18.0   1.60     1.20     0.22       0.75    0.18    0.14     4      NO
10      16.6   1.60     1.20     0.36       0.75    0.30    0.23     4      NO
12      16.0   1.60     1.20     0.45       0.75    0.38    0.28     4      NO
14      14.9   1.60     1.20     0.50       0.75    0.42    0.31     4      NO
16      14.4   1.60     1.20     0.54       0.75    0.45    0.34     4      NO
18      14.0   1.60     1.20     0.58       0.75    0.48    0.36     4      NO
20      13.4   1.60     1.20     0.61       0.75    0.51    0.38     4      NO
22      13.4   1.60     1.20     0.58       0.75    0.48    0.36     4      NO
24      13.2   1.60     1.20     0.61       0.75    0.51    0.38     4      NO

*1 "PDR" is the photosensitive drum radius.
*2 "BA" is the brush angle.
*3 "PA" is the penetration amount at the leading end ("LE"), the center ("CT"), and the trailing end ("TE").
*4 "SW" is the short width.
*5 "IC" is the improper charging. "YES" represents that the improper charging occurred. "NO" represents that the improper charging did not occur. "SL" represents that the improper charging slightly occurred.

As shown in the table 8, in the case where the angle θ3 of the brush member11is fixed at 16°, the improper charging occurred when the radius r of the photosensitive drum1is less than 10 mm, and the improper charging slightly occurred when the radius r is 10 mm. On the other hand, the improper charging did not occur when the radius r is larger than 10 mm.
With a smaller radius r of the photosensitive drum1, the penetration amount δ1 at the brush leading end when the brush member11is contacted to the photosensitive drum1in a condition of δ3=1.2 mm and θ3=16° becomes smaller. For that reason, with the smaller radius r of the photosensitive drum1, the ratio (δ3/δ1) of the penetration amount δ3 at the brush center to the penetration amount δ1 at the brush leading end becomes larger. That is, with the smaller radius r of the photosensitive drum1, δ3/δ1 approaches 1, so that the contact state (contact pressure, contact area ratio) of the brush central portion becomes closer to the contact state of the brush leading end portion. In the case where the radius r of the photosensitive drum1is small, it would be considered that a state in which the toner is localized at the brush leading end portion, where the penetration amount is large, is formed, and then the localization of the toner is not readily eliminated even at the brush central portion. Further, the toner cannot be sufficiently scattered only by the brush trailing end portion, so that the toner passes through the brush member11in the stripe shape. As a result, it would be considered that the improper charging occurred in the case where the radius r of the photosensitive drum1is less than 10 mm. On the other hand, it would be considered that in the case where the radius r of the photosensitive drum1is larger than 10 mm, δ3/δ1 is relatively small, and therefore, the brush central portion contributes to scattering of the toner, and thus the toner does not readily pass through the brush member11in the stripe shape and the improper charging is suppressed. From this, as regards the ratio (δ3/δ1) of the penetration amount δ3 at the brush center to the penetration amount δ1 at the brush leading end, it would be considered that δ3/δ1≤0.77 may preferably hold, and more preferably, δ3/δ1≤0.75 holds.
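The approach of the table 9, adjusting the contact angle θ3 per drum radius so that δ3/δ1 = 0.75, can be sketched as a root search over the geometry of (formula 4) to (formula 9). The bisection search and the function name below are illustrative assumptions, with δ3 = 1.2 mm and short width L3 = 4 mm as in the embodiment:

```python
import math

# Sketch: find the contact angle theta3 (deg) at which delta3/delta1
# equals `ratio` for a drum of radius r, holding delta3 fixed.
def contact_angle_for_ratio(r, d3=1.2, ratio=0.75, L3=4.0):
    target_d1 = d3 / ratio                     # e.g. 1.2 / 0.75 = 1.6 mm

    def d1(theta_deg):
        t = math.radians(theta_deg)
        P = r * math.cos(t) - d3               # from delta3 = r*cos(theta3) - P
        Q1 = r * math.sin(t) - L3 / 2.0        # X-position of the leading end
        return math.sqrt(r * r - Q1 * Q1) - P  # (formula 4), rewritten

    lo, hi = 0.0, 45.0                         # d1 grows with theta in this range
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if d1(mid) < target_d1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For r = 12 mm this returns roughly 16°, consistent with the 16.0° listed in the table 9 (which prints rounded penetration values), and the solved angle increases as the radius shrinks, matching the trend of the table.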
Therefore, as shown in the table 9, when the contact angle θ3 is set so as to satisfy δ3/δ1=0.75 for each of the photosensitive drums1different in radius r, the improper charging did not occur even in the cases where the radius r is 8 mm and 10 mm. Incidentally, as regards the brush member11of 4 mm in short width L3used in verification, when the relationship of δ3/δ1≤0.75 is intended to be satisfied in the case where the radius r of the photosensitive drum1is 6 mm, the brush trailing end floats from the surface of the photosensitive drum1. As a result, in the case of r=6 mm, the improper charging occurred. Further, from a result of the table 9, the ratio (δ2/δ1) of the penetration amount δ2 at the brush trailing end to the penetration amount δ1 at the brush leading end may preferably be in a range of 0.14≤δ2/δ1≤0.38. By making the penetration amount at the brush trailing end smaller than the penetration amount at the brush leading end so as to fall within this range, the toner is uniformly scattered on the trailing end side of the brush member11while being triboelectrically charged properly on the leading end side of the brush member11.

Other Embodiments

In the above-described embodiments, the constitution of the direct transfer type in which the toner image is directly transferred from the photosensitive drum1(image bearing member) onto the sheet (recording material) as the toner image receiving member was described, but the present invention may be applied to an image forming apparatus of an intermediary transfer type. In the case of the intermediary transfer type, the transfer member refers to, for example, a transfer roller (primary transfer roller) for primary-transferring the toner image from the photosensitive drum1as the image bearing member onto the intermediary transfer member as the toner image receiving member. As the intermediary transfer member, an endless belt member stretched by a plurality of rollers can be used.
The toner image primary-transferred on the intermediary transfer member is secondary-transferred from the intermediary transfer member onto the sheet (recording material) by a secondary transfer means such as a secondary transfer roller for forming a secondary transfer nip between itself and the intermediary transfer member. Even in a constitution of such an intermediary transfer type, effects similar to the effects of the above-described embodiments can be obtained by replacing the transfer roller in each of the above-described embodiments with the primary transfer roller. Further, in the above-described embodiments, principally, electric charge impartment by the triboelectric charge due to the friction between the brush member and the toner was described, but a method of imparting the electric charge is not limited thereto. For example, a constitution in which injection charging for injecting the electric charge into the toner through the brush member is carried out may be employed. That is, irrespective of the charge imparting method, the brush member may only be required to be capable of localizing the electric charge distribution of the residual toner, after passing through the brush contact portion and before reaching the charging portion, on the normal polarity side compared with the electric charge distribution of the residual toner before reaching the brush contact portion while being carried on the image bearing member. According to the present invention, in the constitution in which the brush member contacts the image bearing member, the occurrence of the improper charging can be suppressed. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Applications Nos. 2021-204782 filed on Dec. 17, 2021, and 2021-204783 filed on Dec. 17, 2021, which are hereby incorporated by reference herein in their entirety. | 200,125 |
11860567 | DETAILED DESCRIPTION In the above-explained related art, there is no member that a user can grip between the first drum side plate and the second drum side plate in the drum cartridge. For this reason, it is difficult for the user to carry the drum cartridge. Therefore, illustrative aspects of the present disclosure provide a drum cartridge that a user can easily carry, and an image forming apparatus including the drum cartridge. 1. Image Forming Apparatus1 An outline of an image forming apparatus1is described with reference toFIGS.1and2. As shown inFIG.1, the image forming apparatus1includes a main body housing2, a sheet cassette3, a drawer4, four drum cartridges5Y,5M,5C and5K, four toner cartridges6Y,6M,6C and6K, an exposure device7, a belt unit8, a transfer roller9, and a fixing device10. 1.1 Main Body Housing2 In the main body housing2, the sheet cassette3, the drawer4, the four drum cartridges5Y,5M,5C and5K, the four toner cartridges6Y,6M,6C and6K, the exposure device7, the belt unit8, the transfer roller9, and the fixing device10are accommodated. As shown inFIG.2, the main body housing2has a first end portion E1, and a second end portion E2in a moving direction of the drawer4. The second end portion E2is located distant from the first end portion E1in the moving direction of the drawer4. The main body housing2has an opening2A. The main body housing2includes a cover2B. The opening2A is located at the first end portion E1of the main body housing2. In other words, the first end portion E1has the opening2A. The cover2B can move between a cover-closed position (refer toFIG.1) and a cover-opened position (refer toFIG.2). When the cover2B is located in the cover-closed position, the cover2B closes the opening2A. When the cover2B is located in the cover-opened position, the opening2A is opened. 1.2 Sheet Cassette3 As shown inFIG.1, the sheet cassette3can accommodate sheets S. The sheet S in the sheet cassette3is conveyed toward the transfer roller9. 
1.3 Drawer4 As shown inFIGS.1and2, in a state where the cover2B is located in the cover-opened position, the drawer4can move in the moving direction between an inner position (refer toFIG.1) and an outer position (refer toFIG.2) through the opening2A. As shown inFIG.1, in a state where the drawer4is located in the inner position, the drawer4is entirely located inside of the main body housing2. As shown inFIG.2, in a state where the drawer4is located in the outer position, at least a part of the drawer4is located outside of the main body housing2. 1.4 Drum Cartridge5Y The drum cartridge5Y can be mounted to the drawer4. Specifically, in the state where the drawer4is located in the outer position, the drum cartridge5Y can be mounted to the drawer4. In a state where the drum cartridge5Y is mounted to the drawer4, the drawer4supports the drum cartridge5Y. The drum cartridge5Y mounted to the drawer4can be removed from the drawer4in the state where the drawer4is located in the outer position. As shown inFIG.1, the drum cartridge5Y includes a photosensitive drum11Y, a charging device12Y, and a developing device13Y. 1.4.1 Photosensitive Drum11Y The photosensitive drum11Y extends in an axis direction. In the state where the drum cartridge5Y is mounted to the drawer4, the axis direction intersects with the moving direction of the drawer4. Preferably, in the state where the drum cartridge5Y is mounted to the drawer4, the axis direction is orthogonal to the moving direction of the drawer4. The photosensitive drum11Y has a cylindrical shape. The photosensitive drum11Y can rotate about a drum axis A1. The drum axis A1extends in the axis direction. 1.4.2 Charging Device12Y The charging device12Y is configured to charge a circumferential surface of the photosensitive drum11Y. In the present illustrative embodiment, the charging device12Y is a charging roller. The charging device12Y may also be a scorotron-type charger. 
1.4.3 Developing Device13Y The developing device13Y can supply toner to the photosensitive drum11Y. Specifically, the developing device13Y includes a developing housing131Y, and a developing roller132Y. 1.4.3.1 Developing Housing131Y The developing housing131Y can accommodate the toner supplied from the toner cartridge6Y. The developing housing131Y has a developing opening133(refer toFIG.9). The toner supplied from the toner cartridge6Y enters the developing housing131Y through the developing opening133. The developing housing131Y is configured to support the developing roller132Y. 1.4.3.2 Developing Roller132Y The developing roller132Y can supply the toner in the developing housing131Y to the photosensitive drum11Y. The developing roller132Y is in contact with the photosensitive drum11Y. Note that, the developing roller132Y may be configured to be separable from the photosensitive drum11Y. The developing roller132Y extends in the axis direction. The developing roller132Y can rotate about a developing axis A11. The developing axis A11extends in the axis direction in the state where the drum cartridge5Y is mounted to the drawer4. 1.5 Drum Cartridges5M,5C and5K Each of the drum cartridges5M,5C and5K is described in a similar manner to the drum cartridge5Y. Specifically, each of the drum cartridges5M,5C and5K can be mounted to the drawer4. In a state where the drum cartridges5M,5C and5K are mounted to the drawer4, the drawer4supports the drum cartridges5M,5C and5K. Each of the drum cartridges5M,5C and5K mounted to the drawer4can be removed from the drawer4. Each of the drum cartridges5M,5C and5K is in alignment with the drum cartridge5Y in the moving direction of the drawer4. The drum cartridge5M includes a photosensitive drum11M, a charging device12M, and a developing device13M. The photosensitive drum11M can rotate about a drum axis A2. The drum axis A2extends in the axis direction. 
The charging device12M is configured to charge a circumferential surface of the photosensitive drum11M. The developing device13M can supply the toner to the photosensitive drum11M. The developing device13M includes a developing housing131M. The drum cartridge5C includes a photosensitive drum11C, a charging device12C, and a developing device13C. The photosensitive drum11C can rotate about a drum axis A3. The drum axis A3extends in the axis direction. The charging device12C is configured to charge a circumferential surface of the photosensitive drum11C. The developing device13C can supply the toner to the photosensitive drum11C. The developing device13C includes a developing housing131C and a developing roller132C. The drum cartridge5K includes a photosensitive drum11K, a charging device12K, and a developing device13K. The photosensitive drum11K can rotate about a drum axis A4. The drum axis A4extends in the axis direction. The charging device12K is configured to charge a circumferential surface of the photosensitive drum11K. The developing device13K can supply the toner to the photosensitive drum11K. The developing device13K includes a developing housing131K and a developing roller132K. 1.6 Toner Cartridges6Y,6M,6C and6K Each of the toner cartridges6Y,6M,6C and6K is configured to accommodate the toner. As shown inFIG.2, the toner cartridge6Y can be mounted to the drum cartridge5Y. In a state where the toner cartridge6Y is mounted to the drum cartridge5Y, the drum cartridge5Y supports the toner cartridge6Y. The toner cartridge6Y mounted to the drum cartridge5Y can be removed from the drum cartridge5Y. In the state where the toner cartridge6Y is mounted to the drum cartridge5Y, the toner cartridge6Y can supply the toner to the developing device13Y. The toner cartridge6M can be mounted to the drum cartridge5M. In a state where the toner cartridge6M is mounted to the drum cartridge5M, the drum cartridge5M supports the toner cartridge6M. 
The toner cartridge6M mounted to the drum cartridge5M can be removed from the drum cartridge5M. In the state where the toner cartridge6M is mounted to the drum cartridge5M, the toner cartridge6M can supply the toner to the developing device13M. The toner cartridge6C can be mounted to the drum cartridge5C. In a state where the toner cartridge6C is mounted to the drum cartridge5C, the drum cartridge5C supports the toner cartridge6C. The toner cartridge6C mounted to the drum cartridge5C can be removed from the drum cartridge5C. In the state where the toner cartridge6C is mounted to the drum cartridge5C, the toner cartridge6C can supply the toner to the developing device13C. The toner cartridge6K can be mounted to the drum cartridge5K. In a state where the toner cartridge6K is mounted to the drum cartridge5K, the drum cartridge5K supports the toner cartridge6K. The toner cartridge6K mounted to the drum cartridge5K can be removed from the drum cartridge5K. In the state where the toner cartridge6K is mounted to the drum cartridge5K, the toner cartridge6K can supply the toner to the developing device13K. 1.7 Exposure Device7 As shown inFIG.1, in a state where the drum cartridges5Y,5M,5C and5K are mounted to the drawer4and the drawer4is located in the inner position, the exposure device7can expose the circumferential surfaces of the photosensitive drums11Y,11M,11C and11K. In the present illustrative embodiment, the exposure device7is a laser scan unit. 1.8 Belt Unit8 In the state where the drum cartridges5Y,5M,5C and5K are mounted to the drawer4and the drawer4is located in the inner position, the belt unit8is located below the drum cartridges5Y,5M,5C and5K. The belt unit8includes an intermediate transfer belt81, and transfer rollers82Y,82M,82C and82K. In the state where the drum cartridges5Y,5M,5C and5K are mounted to the drawer4and the drawer4is located in the inner position, the intermediate transfer belt81is in contact with the photosensitive drums11Y,11M,11C and11K. 
The transfer roller82Y is configured to transfer the toner on the photosensitive drum11Y to the intermediate transfer belt81. The transfer roller82M is configured to transfer the toner on the photosensitive drum11M to the intermediate transfer belt81. The transfer roller82C is configured to transfer the toner on the photosensitive drum11C to the intermediate transfer belt81. The transfer roller82K is configured to transfer the toner on the photosensitive drum11K to the intermediate transfer belt81. 1.9 Transfer Roller9 The transfer roller9is configured to transfer the toner on the intermediate transfer belt81to the sheet S. Specifically, the sheet S conveyed from the sheet cassette3toward the transfer roller9passes between the transfer roller9and the intermediate transfer belt81and is then conveyed to the fixing device10. At this time, the transfer roller9is configured to transfer the toner on the intermediate transfer belt81to the sheet S. 1.10 Fixing Device10 The fixing device10is configured to heat and pressurize the sheet S having the toner transferred thereon, thereby fixing the toner on the sheet S. The sheet S that passes through the fixing device10is discharged onto an upper surface of the main body housing2. 2. Details of Drum Cartridge5Y Subsequently, the drum cartridge5Y is described in detail with reference toFIGS.3to9. Note that, the drum cartridges5M,5C and5K are described in a similar manner to the drum cartridge5Y. For this reason, the descriptions of the drum cartridges5M,5C and5K are omitted. As shown inFIG.3, the drum cartridge5Y has a first end portion E11, and a second end portion E12in an intersection direction intersecting with the axis direction. Note that, the intersection direction intersects with the axis direction. Preferably, the intersection direction is orthogonal to the axis direction.
In the state where the drum cartridge5Y is mounted to the drawer4, the intersection direction also intersects with the moving direction of the drawer4. Preferably, in the state where the drum cartridge5Y is mounted to the drawer4, the intersection direction is orthogonal to the moving direction of the drawer4. The photosensitive drum11Y is located at the first end portion E11. The second end portion E12is located distant from the first end portion E11in the intersection direction. The drum cartridge5Y includes drum side plates51A and51B, toner guides52A and52B, a receiving part53, a shutter54, a seal member55, a drum cleaner56, a conveyor device57, a pipe58, and a lock member59, in addition to the photosensitive drum11Y, the charging device12Y and the developing device13Y. 2.1 Drum Side Plate51A The drum side plate51A is located at one end portion of the drum cartridge5Y in the axis direction. The drum side plate51A extends in the intersection direction. The drum side plate51A has a surface S1and a surface S2in the axis direction. The surface S2is located between the surface S1and the drum side plate51B in the axis direction. The drum side plate51A has a hole H1, a hole H2, and a hole H3. The hole H1is located at the first end portion E11of the drum cartridge5Y. In the hole H1, one end portion of the photosensitive drum11Y in the axis direction is rotatably fitted. Thereby, the drum side plate51A supports one end portion of the photosensitive drum11Y in the axis direction. The hole H1has a circular shape. As shown inFIG.4, the hole H2is located distant from the hole H1. The hole H2is located between the hole H1and the receiving part53in the intersection direction. The hole H2is a long hole. In the hole H2, a projection134of the developing device13Y is fitted. In a state where the projection134is fitted in the hole H2, the projection134can move with respect to the drum side plate51A in an extension direction of the hole H2. 
Note that, the projection134is located at one end portion of the developing housing131Y (refer toFIG.3) in the axis direction. The projection134extends in the axis direction. The projection134has a circular column shape. The projection134may also be a shaft of the developing roller132Y (refer toFIG.3). The hole H3is located distant from the hole H2. The hole H3is located between the hole H2and the receiving part53in the intersection direction. The hole H3is a long hole. The hole H3extends in the same direction as the hole H2. In the hole H3, a projection135of the developing device13Y is fitted. In a state where the projection135is fitted in the hole H3, the projection135can move with respect to the drum side plate51A in an extension direction of the hole H3. Note that, the projection135is located at one end portion of the developing housing131Y (refer toFIG.3) in the axis direction. The projection135extends in the axis direction. The projection135has a circular column shape. The projection134is fitted in the hole H2and the projection135is fitted in the hole H3, so that the drum side plate51A supports one end portion of the developing device13Y in the axis direction. In the state where the projection134is fitted in the hole H2and the projection135is fitted in the hole H3, the developing device13Y can move with respect to the drum side plate51A in the extension direction of the hole H2and the hole H3. 2.2 Drum Side Plate51B As shown inFIG.3, the drum side plate51B is located at the other end portion of the drum cartridge5Y in the axis direction. The drum side plate51B is located distant from the drum side plate51A in the axis direction. The drum side plate51B has a surface S3and a surface S4in the axis direction. The surface S4is located between the surface S3and the drum side plate51A in the axis direction. The drum side plate51B is configured to support the other end portion of the photosensitive drum11Y in the axis direction. 
The drum side plate51B is configured to support the other end portion of the developing device13Y in the axis direction. The developing device13Y is located between the drum side plate51A and the drum side plate51B in the axis direction. The developing device13Y is supported so as to be movable with respect to the photosensitive drum11Y by the drum side plate51A and the drum side plate51B. The drum side plate51B is described in a similar manner to the drum side plate51A. For this reason, the descriptions of the drum side plate51B are omitted. 2.3 Toner Guide52A The toner guide52A is located on the surface S2of the drum side plate51A. The drum side plate51A has the toner guide52A. The toner guide52A is located on an opposite side to the developing device13Y with respect to the receiving part53in the intersection direction. As shown inFIG.4, the toner guide52A extends in a guide direction. The guide direction intersects with the axis direction. Specifically, the guide direction intersects with the intersection direction, and is orthogonal to the axis direction. In the state where the toner cartridge6Y is mounted to the drum cartridge5Y and the drum cartridge5Y is mounted to the drawer4, the guide direction intersects with the moving direction of the drawer4. As shown inFIG.3, the toner guide52A is a concave groove. When the toner cartridge6Y is supported to the drum cartridge5Y, the toner guide52A guides one end portion of the toner cartridge6Y in the axis direction. Specifically, when the toner cartridge6Y is supported to the drum cartridge5Y, the toner guide52A guides one end portion of a second accommodation unit612(refer toFIG.10) in the axis direction. The second accommodation unit612will be described later. More specifically, when the toner cartridge6Y is supported to the drum cartridge5Y, the toner guide52A guides a projection62A (refer toFIG.10) of the toner cartridge6Y. The projection62A will be described later. 
2.4 Toner Guide52B The toner guide52B is located on the surface S4of the drum side plate51B. The drum side plate51B has the toner guide52B. The toner guide52B is described in a similar manner to the toner guide52A. That is, the toner guide52B is located on an opposite side to the developing device13Y with respect to the receiving part53in the intersection direction. The toner guide52B extends in the guide direction. When the toner cartridge6Y is supported to the drum cartridge5Y, the toner guide52B guides the other end portion of the toner cartridge6Y in the axis direction. Specifically, when the toner cartridge6Y is supported to the drum cartridge5Y, the toner guide52B guides the other end portion of the second accommodation unit612(refer toFIG.10) in the axis direction. More specifically, when the toner cartridge6Y is supported to the drum cartridge5Y, the toner guide52B guides a projection62B (refer toFIG.10) of the toner cartridge6Y. The projection62B will be described later. 2.5 Receiving Part53 In the state where the toner cartridge6Y is supported to the drum cartridge5Y, the receiving part53receives the toner cartridge6Y. Specifically, in the state where the toner cartridge6Y is supported to the drum cartridge5Y, the receiving part53receives the second accommodation unit612(refer toFIG.10) of the toner cartridge6Y. The receiving part53extends in the axis direction. One end portion of the receiving part53in the axis direction connects to the drum side plate51A. The other end portion of the receiving part53in the axis direction connects to the drum side plate51B. The receiving part53has a semi-cylindrical shape. The receiving part53has a toner receiving hole53A, and two holes53B and53C. The toner receiving hole53A is located at a central part of the receiving part53in the axis direction. The toner receiving hole53A communicates with a developing opening133(refer toFIG.6) of the developing housing131Y.
The hole53B is located between the toner receiving hole53A and the drum side plate51A in the axis direction. The hole53B is located distant from the toner receiving hole53A in the axis direction. The hole53C is located between the toner receiving hole53A and the drum side plate51B in the axis direction. The hole53C is located on an opposite side to the hole53B with respect to the toner receiving hole53A in the axis direction. The hole53C is located distant from the toner receiving hole53A in the axis direction. 2.6 Shutter54 The shutter54is attached to the receiving part53. The shutter54is configured to close the toner receiving hole53A. As shown inFIGS.3and5, the shutter54can move between a closed position (refer toFIG.3) and an opened position (refer toFIG.5). In a state where the shutter54is located in the closed position, the shutter54closes the toner receiving hole53A. In a state where the shutter54is located in the opened position, the toner receiving hole53A is opened. The shutter54has a through-hole541, two through-holes542A and542B, and two through-holes543A and543B. The through-hole541extends in the axis direction. As shown inFIG.5, in the state where the shutter54is located in the opened position, at least a part of the through-hole541communicates with the toner receiving hole53A. Thereby, in the state where the shutter54is located in the opened position, the toner receiving hole53A is opened. On the other hand, as shown inFIG.3, in the state where the shutter54is located in the closed position, the through-hole541is located distant from the toner receiving hole53A. The through-hole542A is located between the through-hole541and the drum side plate51A in the axis direction. The through-hole542A is located distant from the through-hole541in the axis direction. The through-hole542A extends in the moving direction of the shutter54. 
As shown inFIGS.3and5, both in the state where the shutter54is located in the closed position and in the state where the shutter54is located in the opened position, the through-hole542A communicates with the hole53B. As shown inFIG.3, the through-hole542B is located between the through-hole541and the drum side plate51B in the axis direction. The through-hole542B is located on an opposite side to the through-hole542A with respect to the through-hole541in the axis direction. The through-hole542B is located distant from the through-hole541in the axis direction. The through-hole542B extends in the moving direction of the shutter54. As shown inFIGS.3and5, both in the state where the shutter54is located in the closed position and in the state where the shutter54is located in the opened position, the through-hole542B communicates with the hole53C. The through-hole543A is located between the through-hole542A and the drum side plate51A in the axis direction. The through-hole543A is located on an opposite side to the through-hole541with respect to the through-hole542A in the axis direction. The through-hole543B is located between the through-hole542B and the drum side plate51B in the axis direction. The through-hole543B is located on an opposite side to the through-hole541with respect to the through-hole542B in the axis direction. 2.7 Seal Member55 As shown inFIGS.3and6, the seal member55is located between the receiving part53and the developing housing131Y. The seal member55is configured to seal the receiving part53and the developing housing131Y therebetween. The seal member55surrounds the toner receiving hole53A and the developing opening133. The seal member55is made of a sponge, for example. 2.8 Drum Cleaner56 As shown inFIG.3, the drum cleaner56is located at the first end portion E11of the drum cartridge5Y. The drum cleaner56is located between the drum side plate51A and the drum side plate51B in the axis direction. The drum cleaner56extends in the axis direction. 
One end portion of the drum cleaner56in the axis direction connects to the drum side plate51A. The other end portion of the drum cleaner56in the axis direction connects to the drum side plate51B. The drum cleaner56is configured to collect transfer residual toner. The transfer residual toner is toner that remains on the surface of the photosensitive drum11Y without being transferred to the intermediate transfer belt81. As shown inFIG.6, the drum cleaner56includes a cleaning housing56A, a cleaning member56B, and a screw56C. 2.8.1 Cleaning Housing56A As shown inFIG.3, the cleaning housing56A is located between the drum side plate51A and the drum side plate51B in the axis direction. The cleaning housing56A extends in the axis direction. One end portion of the cleaning housing56A in the axis direction connects to the drum side plate51A. The other end portion of the cleaning housing56A in the axis direction connects to the drum side plate51B. As shown inFIG.6, the cleaning housing56A has an opening56D. The cleaning housing56A is configured to accommodate the transfer residual toner. 2.8.2 Cleaning Member56B The cleaning member56B is attached to the cleaning housing56A. The cleaning member56B extends in the axis direction. The cleaning member56B has a plate shape. The cleaning member56B is configured to clean the circumferential surface of the photosensitive drum11Y. Specifically, an edge of the cleaning member56B is in contact with the surface of the photosensitive drum11Y. When the photosensitive drum11Y rotates, the transfer residual toner on the surface of the photosensitive drum11Y contacts the edge of the cleaning member56B and is thus removed from the surface of the photosensitive drum11Y. In other words, the transfer residual toner is removed from the circumferential surface of the photosensitive drum11Y by the cleaning member56B. The removed transfer residual toner is accommodated into the cleaning housing56A through the opening56D.
2.8.3 Screw56C The screw56C is located inside of the cleaning housing56A. The screw56C extends in the axis direction. The screw56C is configured to convey the transfer residual toner in the cleaning housing56A toward the conveyor device57(refer toFIG.3) in the axis direction. 2.9 Conveyor Device57 As shown inFIG.3, the conveyor device57is attached to the drum side plate51A. The conveyor device57is located on an opposite side to the drum side plate51B with respect to the drum side plate51A in the axis direction. The conveyor device57extends in the intersection direction. The conveyor device57is configured to convey the transfer residual toner from the cleaning housing56A to the pipe58. Specifically, as shown inFIG.4, the conveyor device57includes a conveyor housing57A, a belt conveyor57B, and a screw57C. 2.9.1 Conveyor Housing57A The conveyor housing57A is attached to the drum side plate51A. The conveyor housing57A is located on an opposite side to the drum side plate51B (refer toFIG.3) with respect to the drum side plate51A in the axis direction. The conveyor housing57A extends in the intersection direction. In the conveyor housing57A, the transfer residual toner conveyed by the screw56C (refer toFIG.6) of the drum cleaner56is accommodated. 2.9.2 Belt Conveyor57B The belt conveyor57B is located inside of the conveyor housing57A. The belt conveyor57B is configured to convey the transfer residual toner in the conveyor housing57A toward the pipe58in the intersection direction. 2.9.3 Screw57C The screw57C is located at the second end portion E12of the drum cartridge5Y. The screw57C is configured to convey the transfer residual toner conveyed by the belt conveyor57B into the pipe58. The screw57C extends in the axis direction. One end portion of the screw57C in the axis direction is located inside of the conveyor housing57A. The other end portion of the screw57C in the axis direction is located inside of the pipe58. 
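The hand-off chain described in sections 2.8 and 2.9 can be summarized as a linear pipeline: transfer residual toner leaves the cleaning housing axially, crosses the drum cartridge in the intersection direction, and enters the pipe. The sketch below is an illustrative model only, not part of the claimed apparatus; the component names mirror the reference numerals, and the ordering of hand-offs is its only substance.

```python
# Illustrative model of the transfer-residual-toner path (sections 2.8-2.9).
# Each entry is (component, role); the list order encodes the hand-offs.
TONER_PATH = [
    ("cleaning member 56B", "removes toner from the drum 11Y surface"),
    ("cleaning housing 56A", "accommodates toner through opening 56D"),
    ("screw 56C", "conveys toner in the axis direction"),
    ("conveyor housing 57A", "receives toner from screw 56C"),
    ("belt conveyor 57B", "conveys toner in the intersection direction"),
    ("screw 57C", "conveys toner into pipe 58"),
    ("pipe 58", "discharges toner through hole 581"),
]

def trace_path(path):
    """Return the hand-off chain as a single arrow-separated string."""
    return " -> ".join(name for name, _ in path)

print(trace_path(TONER_PATH))
```

Tracing the list reproduces the order in which the description hands the toner from one component to the next.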
2.10 Pipe58 As shown inFIG.3, the pipe58is located at the second end portion E12of the drum cartridge5Y. The pipe58is located distant from the photosensitive drum11Y in the intersection direction. The pipe58is located on an opposite side to the photosensitive drum11Y with respect to the developing device13Y in the intersection direction. The pipe58is located on an opposite side to the photosensitive drum11Y with respect to the receiving part53in the intersection direction. The pipe58is located distant from the receiving part53in the intersection direction. The pipe58is located between the drum side plate51A and the drum side plate51B in the axis direction. The pipe58extends in the axis direction. One end portion of the pipe58in the axis direction connects to the drum side plate51A. The other end portion of the pipe58in the axis direction connects to the drum side plate51B. The pipe58has a cylindrical shape. A user can grip the pipe58when handling the drum cartridge5Y. That is, the pipe58is a grip. As shown inFIG.6, the pipe58has a transfer residual toner discharge hole581. The transfer residual toner discharge hole581is located at a central part of the pipe58in the axis direction. The transfer residual toner discharge hole581can discharge the transfer residual toner. The transfer residual toner conveyed by the conveyor device57(refer toFIG.3) is discharged from the transfer residual toner discharge hole581through the pipe58. 2.11 Lock Member59 As shown inFIG.3, the lock member59is attached to the pipe58. The lock member59is located at a central part of the pipe58in the axis direction. The lock member59is configured to lock the toner cartridge6Y to the drum cartridge5Y. Specifically, as shown inFIGS.7and8, in a state where the toner cartridge6Y is supported to the drum cartridge5Y, a toner housing61of the toner cartridge6Y can move between a first position (refer toFIG.7) and a second position (refer toFIG.8) with respect to the drum cartridge5Y.
The toner housing61will be described later. The lock member59can move between a lock position (refer toFIG.9) and a lock release position (refer toFIG.8). As shown inFIG.9, in a state where the toner housing61is located in the second position and the lock member59is located in the lock position, the lock member59locks the toner housing61to the second position. As shown inFIG.8, in a state where the lock member59is located in the lock release position, the lock member59releases a locked state of the toner housing61. As shown inFIG.3, the lock member59includes a shutter59A, a projection59B, and a lock lever59C. In other words, the drum cartridge5Y includes the shutter59A. 2.11.1 Shutter59A The shutter59A is a main body part of the lock member59. The shutter59A extends in the axis direction. The shutter59A has a cylindrical shape. The shutter59A is located on the circumferential surface of the pipe58. The shutter59A can rotate with respect to the pipe58. Thereby, the lock member59can rotate with respect to the pipe58. As shown inFIGS.8and9, the shutter59A can move between a shutter-closed position (refer toFIG.8) and a shutter-opened position (refer toFIG.9) with respect to the transfer residual toner discharge hole581. As shown inFIG.8, in a state where the lock member59is located in the lock release position, the shutter59A is located in the shutter-closed position. In the state where the shutter59A is located in the shutter-closed position, the shutter59A closes the transfer residual toner discharge hole581. On the other hand, as shown inFIG.9, in a state where the lock member59is located in the lock position, the shutter59A is located in the shutter-opened position. In the state where the shutter59A is located in the shutter-opened position, the transfer residual toner discharge hole581is opened. Specifically, the shutter59A has a shutter opening591. 
In the state where the shutter59A is located in the shutter-opened position, at least a part of the shutter opening591communicates with the transfer residual toner discharge hole581. Thereby, the transfer residual toner discharge hole581is opened. On the other hand, as shown inFIG.8, in the state where the shutter59A is located in the shutter-closed position, the shutter opening591is located distant from the transfer residual toner discharge hole581. 2.11.2 Projection59B As shown inFIG.3, the projection59B is located on a circumferential surface of the shutter59A. The projection59B protrudes from the circumferential surface of the shutter59A. The projection59B extends in a diametrical direction of the shutter59A. Note that, the projection59B may also be attached to the shutter59A. 2.11.3 Lock Lever59C The lock lever59C is located on the circumferential surface of the shutter59A. The lock lever59C is located distant from the projection59B. The lock lever59C extends from the circumferential surface of the shutter59A. Note that, the lock lever59C may also be attached to the shutter59A. 3. Details of Toner Cartridge6Y Subsequently, the toner cartridge6Y is described in detail with reference toFIGS.7to12. Note that, the toner cartridges6M,6C and6K are described in a similar manner to the toner cartridge6Y. For this reason, the descriptions of the toner cartridges6M,6C and6K are omitted. In descriptions of the toner cartridge6Y, “the axis direction” and “the intersection direction” are “the axis direction” and “the intersection direction” “in the state where the toner cartridge6Y is supported to the drum cartridge5Y and is located in the second position”. As shown inFIG.10, the toner cartridge6Y extends in the axis direction. 
The toner cartridge6Y includes a toner housing61, two projections62A and62B, two projections63A and63B, a shutter64, two projections65A and65B, a toner conveying member66(refer toFIGS.11A and11B), a shutter67(refer toFIG.11A), and a lock pin68(refer toFIG.11A). 3.1 Toner Housing61 The toner housing61extends in the axis direction. The toner housing61has a first end portion E21and a second end portion E22in the intersection direction. The second end portion E22is located distant from the first end portion E21in the intersection direction. The toner housing61includes a first accommodation unit611, a second accommodation unit612, and two connection parts613A and613B. 3.1.1 First Accommodation Unit611 The first accommodation unit611is located at the second end portion E22of the toner housing61in the intersection direction. The first accommodation unit611extends in the axis direction. The first accommodation unit611has a tubular shape. Specifically, the first accommodation unit611has a square tube shape. A volume of the first accommodation unit611is greater than a volume of the second accommodation unit612. As shown inFIG.11A, the first accommodation unit611includes a concave part611A, a toner accommodation part611B, and a transfer residual toner accommodation part611C. 3.1.1.1 Concave Part611A The concave part611A is located at the second end portion E22of the toner housing61. The concave part611A is located distant from the second accommodation unit612in the intersection direction. The concave part611A has a circular arc shape. The concave part611A has a transfer residual toner receiving hole611D. In other words, the toner housing61has the transfer residual toner receiving hole611D. The toner cartridge6Y has the transfer residual toner receiving hole611D. The transfer residual toner receiving hole611D communicates with an internal space of the transfer residual toner accommodation part611C. 
3.1.1.2 Toner Accommodation Part611B The toner accommodation part611B can accommodate the toner. As shown inFIG.9, in the state where the toner cartridge6Y is supported to the drum cartridge5Y and the toner housing61is located in the second position, the toner in the toner accommodation part611B is supplied to the developing device13Y. 3.1.1.3 Transfer Residual Toner Accommodation Part611C As shown inFIG.11A, the transfer residual toner accommodation part611C is located between the concave part611A and the second accommodation unit612in the intersection direction. The transfer residual toner accommodation part611C can accommodate the transfer residual toner. The internal space of the transfer residual toner accommodation part611C does not communicate with an internal space of the toner accommodation part611B. 3.1.2 Second Accommodation Unit612 As shown inFIG.10, the second accommodation unit612is located at the first end portion E21of the toner housing61in the intersection direction. The second accommodation unit612is located distant from the first accommodation unit611in the intersection direction. The second accommodation unit612extends in the axis direction. The second accommodation unit612has a cylindrical shape. The second accommodation unit612has a toner discharge hole614. In other words, the toner cartridge6Y has a toner discharge hole614. The toner discharge hole614is located at a central part of the second accommodation unit612in the axis direction. The toner discharge hole614is located between the projection63A and the projection63B in the axis direction. The toner discharge hole614extends in the axis direction. The toner discharge hole614can discharge the toner. 3.1.3 Connection Parts613A and613B The connection part613A is located between one end portion of the first accommodation unit611in the axis direction and one end portion of the second accommodation unit612in the axis direction. 
The connection part613A interconnects one end portion of the first accommodation unit611in the axis direction and one end portion of the second accommodation unit612in the axis direction. The connection part613B is located between the other end portion of the first accommodation unit611in the axis direction and the other end portion of the second accommodation unit612in the axis direction. The connection part613B is located at an interval from the connection part613A in the axis direction. The connection part613B interconnects the other end portion of the first accommodation unit611in the axis direction and the other end portion of the second accommodation unit612in the axis direction. As shown inFIG.11B, the second accommodation unit612communicates with the toner accommodation part611B via the connection part613A. Similarly, the second accommodation unit612communicates with the toner accommodation part611B via the connection part613B (refer toFIG.10). The toner in the first accommodation unit611enters the second accommodation unit612through the connection parts613A and613B. 3.2 Projections62A and62B As shown inFIG.10, the projection62A is located at one end portion of the toner cartridge6Y in the axis direction. The projection62A is located at one end portion of the second accommodation unit612in the axis direction. The projection62A protrudes from one end portion of the second accommodation unit612in the axis direction. The projection62A protrudes in the axis direction. The projection62B is located at the other end portion of the toner cartridge6Y in the axis direction. The projection62B is located at the other end portion of the second accommodation unit612in the axis direction. The projection62B protrudes from the other end portion of the second accommodation unit612in the axis direction. The projection62B protrudes in the axis direction. 
3.3 Projections63A and63B The projection63A is located between the toner discharge hole614and the projection62A in the axis direction. The projection63A is located between the shutter64and the projection62A in the axis direction. The projection63A is located on an opposite side to the first accommodation unit611with respect to the second accommodation unit612in the intersection direction. The projection63A extends from a circumferential surface of the second accommodation unit612. The projection63B is located distant from the projection63A in the axis direction. The projection63B is located on an opposite side to the projection63A with respect to the toner discharge hole614in the axis direction. The projection63B is located between the toner discharge hole614and the projection62B in the axis direction. The projection63B is located on an opposite side to the projection63A with respect to the shutter64in the axis direction. The projection63B is located between the shutter64and the projection62B in the axis direction. The projection63B is located on an opposite side to the first accommodation unit611with respect to the second accommodation unit612in the intersection direction. The projection63B extends from the circumferential surface of the second accommodation unit612. 3.4 Shutter64 The shutter64is located between the connection part613A and the connection part613B in the axis direction. The shutter64is located on the circumferential surface of the second accommodation unit612. The shutter64can move between a closed position (refer toFIG.10) and an opened position (refer toFIG.12) with respect to the toner discharge hole614. In the state where the shutter64is located in the closed position, the shutter64closes the toner discharge hole614. As shown inFIG.12, in the state where the shutter64is located in the opened position, the toner discharge hole614is opened. The shutter64extends in the axis direction. The shutter64has a cylindrical shape. 
The shutter64has an opening641. The opening641extends in the axis direction. In the state where the shutter64is located in the opened position, at least a part of the opening641communicates with the toner discharge hole614. As shown inFIG.10, in the state where the shutter64is located in the closed position, the opening641is located distant from the toner discharge hole614. 3.5 Projections65A and65B The projection65A is located on a circumferential surface of the shutter64. The projection65A extends from the circumferential surface of the shutter64. The projection65A is located between the opening641and the projection63A in the axis direction. The projection65B is located distant from the projection65A in the axis direction. The projection65B is located on an opposite side to the projection65A with respect to the opening641in the axis direction. The projection65B is located on the circumferential surface of the shutter64. The projection65B extends from the circumferential surface of the shutter64. The projection65B is located between the opening641and the projection63B in the axis direction. 3.6 Toner Conveying Member66 As shown inFIGS.11A and11B, the toner conveying member66is located inside of the second accommodation unit612. The toner conveying member66can rotate about an axis A21. The axis A21extends in the axis direction. The toner conveying member66extends along the axis A21. The toner conveying member66is configured to convey the toner that enters the second accommodation unit612through the connection part613A toward the toner discharge hole614and to convey the toner that enters the second accommodation unit612through the connection part613B toward the toner discharge hole614. 3.7 Shutter67 As shown inFIG.11A, the shutter67is located at the second end portion E22of the toner housing61in the intersection direction. The shutter67is located at the concave part611A. The shutter67is a curved plate. The shutter67extends along the concave part611A. 
The shutter 67 has a circular arc shape. The shutter 67 can move between a closed position (refer to FIG. 8) and an opened position (refer to FIG. 9). As shown in FIG. 8, in a state where the shutter 67 is located in the closed position, the shutter 67 closes the transfer residual toner receiving hole 611D. As shown in FIG. 9, in a state where the shutter 67 is located in the opened position, the transfer residual toner receiving hole 611D is opened. As shown in FIG. 11A, the shutter 67 has a through-hole 671.

3.8 Lock Pin 68

The lock pin 68 is located at the second end portion E22 of the toner housing 61 in the intersection direction. The lock pin 68 is located at the concave part 611A. The lock pin 68 can move between a lock position (refer to FIG. 11A) and a lock release position (refer to FIG. 8) with respect to the shutter 67 located in the closed position. As shown in FIG. 11A, in a state where the shutter 67 is located in the closed position and the lock pin 68 is located in the lock position, the lock pin 68 is fitted in the through-hole 671 of the shutter 67. Thereby, in the state where the shutter 67 is located in the closed position and the lock pin 68 is located in the lock position, the lock pin 68 locks the shutter 67 to the closed position. In other words, the lock pin 68 stops movement of the shutter 67 from the closed position to the opened position. On the other hand, as shown in FIG. 8, in a state where the shutter 67 is located in the closed position and the lock pin 68 is located in the lock release position, the lock pin 68 separates from the through-hole 671 of the shutter 67. Thereby, in the state where the shutter 67 is located in the closed position and the lock pin 68 is located in the lock release position, the lock pin 68 releases a locked state of the shutter 67. In other words, the lock pin 68 allows the shutter 67 to move from the closed position to the opened position.
3.9 Mounting of Toner Cartridge 6Y to Main Body Housing 2

When mounting the toner cartridge 6Y to the main body housing 2, an operator first mounts the toner cartridge 6Y to the drum cartridge 5Y. Note that the drum cartridge 5Y may be removed from the drawer 4 or may be mounted to the drawer 4 located in the outer position. When mounting the toner cartridge 6Y to the drum cartridge 5Y, the operator first fits the projection 62A (refer to FIG. 10) to the toner guide 52A (refer to FIG. 3) and fits the projection 62B (refer to FIG. 10) to the toner guide 52B (refer to FIG. 3). Thereby, the second accommodation unit 612 of the toner cartridge 6Y is guided toward the receiving part 53 by the toner guide 52A and the toner guide 52B. The second accommodation unit 612 is fitted in the receiving part 53, as shown in FIG. 7. Then, the toner cartridge 6Y is supported to the drum cartridge 5Y. At this time, the toner housing 61 is located in the first position. In the state where the toner cartridge 6Y is supported to the drum cartridge 5Y and the toner housing 61 is located in the first position, the projection 65A (refer to FIG. 10) is fitted in the hole 53B (refer to FIG. 3) of the receiving part 53 through the through-hole 542A (refer to FIG. 3) of the shutter 54, and the projection 65B (refer to FIG. 10) is fitted in the hole 53C (refer to FIG. 3) of the receiving part 53 through the through-hole 542B (refer to FIG. 3) of the shutter 54. The projection 65A is fitted in the hole 53B of the receiving part 53 and the projection 65B is fitted in the hole 53C of the receiving part 53, so that the shutter 64 is fixed to the receiving part 53. Also, the projection 63A (refer to FIG. 10) is fitted in the through-hole 543A (refer to FIG. 3) of the shutter 54 and the projection 63B (refer to FIG. 10) is fitted in the through-hole 543B (refer to FIG. 3) of the shutter 54.
The projection 63A is fitted in the through-hole 543A of the shutter 54 and the projection 63B is fitted in the through-hole 543B of the shutter 54, so that the shutter 54 can move together with the toner housing 61. Then, as shown in FIGS. 7 and 8, the operator moves the toner housing 61 from the first position to the second position. At this time, the toner housing 61 is pivoted with respect to the second accommodation unit 612. Then, the movement of the toner housing 61 from the first position to the second position moves the shutter 64 from the closed position to the opened position with respect to the toner housing 61. Also, as the toner housing 61 moves from the first position to the second position, the shutter 54 moves from the closed position to the opened position. As shown in FIG. 8, when the toner housing 61 is located in the second position, the shutter 64 is located in the opened position and the shutter 54 is located in the opened position. Then, in the state where the toner cartridge 6Y is supported to the drum cartridge 5Y and the toner housing 61 is located in the second position, the toner discharge hole 614 communicates with the toner receiving hole 53A. Thereby, the toner receiving hole 53A can receive the toner discharged from the toner discharge hole 614. Also, when the toner housing 61 is located in the second position, the shutter 67 of the toner cartridge 6Y is contacted to the lock member 59 of the drum cartridge 5Y. Specifically, in the state where the toner housing 61 is located in the second position, the shutter 67 located in the closed position is contacted to the lock member 59 located in the lock release position. Then, the projection 59B of the lock member 59 is fitted in the through-hole 671 of the shutter 67 while moving the lock pin 68 from the lock position (refer to FIG. 7) to the lock release position (refer to FIG. 8). Thereby, the shutter 67 can move from the closed position to the opened position, together with the lock member 59.
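The shutter positions described in section 3.9 are a direct function of the toner housing position. As a rough illustration only (this model and all identifiers in it are hypothetical and not part of the patent), the coupling between the toner housing and the two shutters can be sketched as a simple mapping:

```python
def shutter_states(housing_position: str) -> dict:
    """Illustrative model of section 3.9: moving the toner housing 61
    between its first and second positions drives the shutter 64 and
    the shutter 54 between their closed and opened positions."""
    states = {
        "first": {"shutter_64": "closed", "shutter_54": "closed"},
        "second": {"shutter_64": "opened", "shutter_54": "opened"},
    }
    if housing_position not in states:
        raise ValueError(f"unknown housing position: {housing_position}")
    return states[housing_position]
```

Under this sketch, the removal sequence of section 3.10 simply traverses the same mapping in reverse: returning the housing to the first position returns both shutters to the closed state.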
Then, as shown in FIGS. 8 and 9, the operator moves the lock member 59 from the lock release position to the lock position. As the lock member 59 is moved, the shutter 59A moves from the shutter-closed position to the shutter-opened position and the shutter 67 moves from the closed position to the opened position. As shown in FIG. 9, when the lock member 59 is located in the lock position, the toner housing 61 is locked in the second position. Specifically, in the state where the lock member 59 is located in the lock position, the projection 59B of the lock member 59 extends in a direction of intersecting with the moving direction of the toner housing 61 from the second position to the first position. In this state, since the projection 59B of the lock member 59 is fitted in the through-hole 671 of the shutter 67, the toner housing 61 cannot move from the second position to the first position. Thereby, the toner cartridge 6Y cannot be removed from the drum cartridge 5Y. That is, in the state where the toner housing 61 is located in the second position, the toner cartridge 6Y cannot be removed from the drum cartridge 5Y. Also, when the lock member 59 is located in the lock position, the shutter 59A is located in the shutter-opened position and the shutter 67 is located in the opened position, so that the transfer residual toner receiving hole 611D communicates with the transfer residual toner discharge hole 581. That is, in a state where the toner cartridge 6Y is supported to the drum cartridge 5Y, the toner housing 61 is located in the second position, the shutter 59A is located in the shutter-opened position and the shutter 67 is located in the opened position, the transfer residual toner receiving hole 611D communicates with the transfer residual toner discharge hole 581. Thereby, the transfer residual toner receiving hole 611D can receive the transfer residual toner discharged from the transfer residual toner discharge hole 581. By the above, the mounting of the toner cartridge 6Y to the drum cartridge 5Y is completed.
The “state where the toner cartridge 6Y is mounted to the drum cartridge 5Y” means the state where the toner cartridge 6Y is supported to the drum cartridge 5Y, the toner housing 61 is located in the second position and the lock member 59 is located in the lock position. In a case where the drum cartridge 5Y is removed from the drawer 4, the operator locates the drawer 4 in the outer position to support the drum cartridge 5Y having the toner cartridge 6Y mounted thereto by the drawer 4. Then, the operator moves the drawer 4 from the outer position to the inner position. Also, in a case where the drum cartridge 5Y is supported to the drawer 4 located in the outer position, the operator moves the drawer 4 from the outer position to the inner position after the mounting of the toner cartridge 6Y to the drum cartridge 5Y is completed. By the above, the mounting of the toner cartridge 6Y to the main body housing 2 is completed.

3.10 Removal of Toner Cartridge 6Y from Main Body Housing 2

In order to remove the toner cartridge 6Y from the main body housing 2, the operator first moves the drawer 4 from the inner position to the outer position. Then, as shown in FIGS. 9 and 8, the operator moves the lock member 59 from the lock position (refer to FIG. 9) to the lock release position (refer to FIG. 8). Then, as shown in FIG. 8, in the state where the lock member 59 is located in the lock release position, the shutter 59A is located in the shutter-closed position and the shutter 67 is located in the closed position. Then, as shown in FIGS. 8 and 7, the operator moves the toner housing 61 from the second position (refer to FIG. 8) to the first position (refer to FIG. 7). Then, as shown in FIG. 7, in the state where the toner housing 61 is located in the first position, the shutter 64 is located in the closed position and the shutter 54 is located in the closed position.
Also, in the state where the toner housing 61 is located in the first position, the transfer residual toner receiving hole 611D is located distant from the transfer residual toner discharge hole 581. Also, in the state where the toner housing 61 is located in the first position, the shutter 67 is located distant from the lock member 59. Also, in the state where the toner housing 61 is located in the first position, the toner housing 61 is located distant from the pipe 58. Thereby, in the state where the toner housing 61 is located in the first position, the toner cartridge 6Y can be removed from the drum cartridge 5Y. Also, as shown in FIG. 2, in the state where the drawer 4 is located in the outer position and the toner housing 61 is located in the first position, the transfer residual toner receiving hole 611D (refer to FIG. 7) is located downstream of the transfer residual toner discharge hole 581 (refer to FIG. 7) with respect to the moving direction of the drawer 4 from the inner position to the outer position. In other words, in the state where the drawer 4 is located in the outer position and the toner housing 61 is located in the first position, the transfer residual toner discharge hole 581 (refer to FIG. 7) is located between the transfer residual toner receiving hole 611D (refer to FIG. 7) and the second end portion E2 of the main body housing 2 in the moving direction of the drawer 4. Then, the operator removes the toner cartridge 6Y from the drum cartridge 5Y. By the above, the removal of the toner cartridge 6Y from the main body housing 2 is completed.

4. Operational Effects

(1) According to the drum cartridge 5Y, as shown in FIG. 3, the pipe 58 extending in the axis direction is provided between the drum side plate 51A and the drum side plate 51B. For this reason, the user can carry the drum cartridge 5Y while gripping the pipe 58. As a result, the user can easily carry the drum cartridge 5Y.
(2) According to the drum cartridge 5Y, as shown in FIG. 6, the transfer residual toner discharge hole 581 is located at the central part of the pipe 58 in the axis direction. For this reason, it is possible to discharge the transfer residual toner from the central part in the axis direction. As a result, it is possible to accommodate the transfer residual toner uniformly in the axis direction in the transfer residual toner accommodation part 611C configured to accommodate the discharged transfer residual toner.

(3) According to the drum cartridge 5Y, as shown in FIG. 6, the seal member 55 is located between the developing housing 131Y and the receiving part 53. The seal member 55 surrounds the developing opening 133 and the toner receiving hole 53A. For this reason, the seal member 55 can suppress the toner, which enters the toner receiving hole 53A, from leaking between the receiving part 53 and the developing housing 131Y.

(4) According to the drum cartridge 5Y, as shown in FIG. 9, in the state where the toner housing 61 is located in the second position, the transfer residual toner receiving hole 611D communicates with the transfer residual toner discharge hole 581. In the state where the toner housing 61 is located in the second position, the toner cartridge 6Y cannot be removed from the drum cartridge 5Y. For this reason, in the state where the transfer residual toner discharge hole 581 and the transfer residual toner receiving hole 611D communicate with each other, the toner cartridge 6Y can be prevented from being removed from the drum cartridge 5Y. As a result, it is possible to suppress the transfer residual toner from leaking.

(5) According to the drum cartridge 5Y, as shown in FIG. 9, in the state where the toner housing 61 is located in the second position and the lock member 59 is located in the lock position, the lock member 59 locks the toner housing 61 to the second position.
For this reason, in the state where the toner housing 61 is located in the second position and the transfer residual toner discharge hole 581 and the transfer residual toner receiving hole 611D communicate with each other, it is possible to lock the toner housing 61 by the lock member 59. Thereby, in the state where the transfer residual toner discharge hole 581 and the transfer residual toner receiving hole 611D communicate with each other, it is possible to securely prevent the toner cartridge 6Y from being removed from the drum cartridge 5Y. As a result, it is possible to further suppress the transfer residual toner from leaking. Also, as shown in FIG. 8, in the state where the lock member 59 is located in the lock release position, the shutter 59A is located in the shutter-closed position. Thereby, in the state where the lock member 59 is located in the lock release position, it is possible to close the transfer residual toner discharge hole 581 by the shutter 59A. As a result, it is possible to further suppress the transfer residual toner from leaking.

(6) According to the image forming apparatus 1, as shown in FIG. 2, in the state where the drawer 4 is located in the outer position and the toner housing 61 is located in the first position, the transfer residual toner receiving hole 611D (refer to FIG. 7) may be located downstream of the transfer residual toner discharge hole 581 (refer to FIG. 7) with respect to the direction in which the drawer 4 moves from the inner position to the outer position. For this reason, after moving the drawer 4 from the inner position to the outer position, the user can move the toner housing 61 downstream in the direction in which the drawer 4 moves from the inner position to the outer position, thereby positioning the toner housing 61 in the first position.
Thereby, it is possible to implement the movement of the drawer 4 to the outer position and the movement of the toner housing 61 to the first position from the same side (the downstream side of the direction in which the drawer 4 moves from the inner position to the outer position). As a result, the user can easily operate the drawer 4 and the toner cartridge 6Y.

5. Modified Illustrative Embodiments

The toner cartridge 6Y may also be a developing cartridge having the developing roller 132M. In this case, the drum cartridge 5Y is not provided with the developing device 13Y and the receiving part 53. As discussed above, the disclosure may provide at least the following illustrative, non-limiting aspects.

(1) A drum cartridge of the present disclosure includes a photosensitive drum, a drum cleaner, a first drum side plate, a second drum side plate, a pipe, and a conveyor device. The photosensitive drum can rotate about a drum axis. The drum axis extends in an axis direction. The drum cleaner includes a cleaning member and a cleaning housing. The cleaning member is configured to clean a circumferential surface of the photosensitive drum. The cleaning housing is configured to accommodate transfer residual toner. The transfer residual toner is removed from the circumferential surface of the photosensitive drum by the cleaning member. The first drum side plate is configured to support one end portion of the photosensitive drum in the axis direction. The second drum side plate is located distant from the first drum side plate in the axis direction. The second drum side plate is configured to support the other end portion of the photosensitive drum in the axis direction. The pipe is located between the first drum side plate and the second drum side plate in the axis direction. The pipe extends in the axis direction. One end portion of the pipe in the axis direction connects to the first drum side plate.
The other end portion of the pipe in the axis direction connects to the second drum side plate. The conveyor device is configured to convey the transfer residual toner from the cleaning housing to the pipe. According to the above configuration, the pipe extending in the axis direction is provided between the first drum side plate and the second drum side plate. For this reason, a user can carry the drum cartridge while gripping the pipe. As a result, the user can easily carry the drum cartridge.

(2) The drum cartridge may be configured to support a toner cartridge. The toner cartridge includes a toner housing. The toner housing may have: a toner discharge hole for discharging toner; and a transfer residual toner receiving hole for receiving the transfer residual toner. The pipe may have a transfer residual toner discharge hole. The transfer residual toner discharge hole is able to discharge the transfer residual toner. The transfer residual toner discharge hole may communicate with the transfer residual toner receiving hole in a state the toner cartridge is supported to the drum cartridge.

(3) The transfer residual toner discharge hole may be located at a central part of the pipe in the axis direction. According to the above configuration, it is possible to discharge the transfer residual toner from the central part in the axis direction. For this reason, it is possible to accommodate the transfer residual toner uniformly in the axis direction in the transfer residual toner accommodation part configured to accommodate the discharged transfer residual toner.

(4) The drum cartridge may further include a developing device. The developing device includes a developing roller and a developing housing. The developing roller is rotatable about a developing axis extending in the axis direction. The developing housing is configured to support the developing roller. The developing device may be located between the first drum side plate and the second drum side plate in the axis direction.
(5) The developing device is supported to be movable with respect to the photosensitive drum by the first drum side plate and the second drum side plate. The developing housing has a developing opening. The drum cartridge may further include a receiving part and a seal member. The receiving part is configured to receive the toner cartridge in a state the toner cartridge is supported to the drum cartridge. The receiving part has a toner receiving hole. The toner receiving hole is formed to communicate with the toner discharge hole in the state the toner cartridge is supported to the drum cartridge. The toner receiving hole communicates with the developing opening of the developing housing. The seal member is located between the developing housing and the receiving part. The seal member is configured to surround the developing opening and the toner receiving hole. According to the above configuration, the seal member can suppress the toner, which enters the toner receiving hole, from leaking between the receiving part and the developing housing.

(6) The first drum side plate may include a first toner guide. The first toner guide guides one end portion of the toner cartridge in the axis direction at the time when the toner cartridge is supported to the drum cartridge. The first toner guide extends in a guide direction intersecting with the axis direction. The second drum side plate may have a second toner guide. The second toner guide guides the other end portion of the toner cartridge in the axis direction at the time when the toner cartridge is supported to the drum cartridge. The second toner guide extends in the guide direction.

(7) In a state the toner cartridge is supported to the drum cartridge, the toner housing may be movable between a first position and a second position.
In a state the toner housing is located in the first position, the transfer residual toner receiving hole is located distant from the transfer residual toner discharge hole. In a state the toner housing is located in the second position, the transfer residual toner receiving hole communicates with the transfer residual toner discharge hole. In the state the toner housing is located in the first position, the toner cartridge is removable from the drum cartridge. In the state the toner housing is located in the second position, the toner cartridge is not removable from the drum cartridge. According to the above configuration, it is possible to prevent the toner cartridge from being removed from the drum cartridge in the state the transfer residual toner discharge hole and the transfer residual toner receiving hole communicate with each other. As a result, it is possible to suppress the transfer residual toner from leaking.

(8) The drum cartridge may further include a lock member and a shutter. The lock member is movable between a lock position and a lock release position. In a state the toner housing is located in the second position and the lock member is located at the lock position, the lock member locks the toner housing in the second position. In the state the lock member is located at the lock release position, the lock member releases the locked state of the toner housing. The shutter is movable between a shutter-closed position and a shutter-opened position. In a state the shutter is located at the shutter-closed position, the shutter closes the transfer residual toner discharge hole. In a state the shutter is located at the shutter-opened position, the transfer residual toner discharge hole is opened. In a state the lock member is located in the lock release position, the shutter is located in the shutter-closed position. In a state the lock member is located in the lock position, the shutter is located in the shutter-opened position.
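Aspect (8) ties the lock member's position to both the shutter state and whether the toner housing stays locked in the second position. A minimal sketch of that coupling (illustrative only; the class and attribute names are hypothetical, not taken from the patent):

```python
from enum import Enum

class LockPosition(Enum):
    LOCK = "lock"
    RELEASE = "lock release"

class LockShutterInterlock:
    """Illustrative model of aspect (8): the shutter is in the
    shutter-opened position exactly when the lock member is in the lock
    position, and the toner housing is locked at the same time."""

    def __init__(self) -> None:
        # The lock member starts in the lock release position.
        self.lock = LockPosition.RELEASE

    @property
    def shutter_open(self) -> bool:
        # Shutter-opened state follows the lock position directly.
        return self.lock is LockPosition.LOCK

    @property
    def housing_locked(self) -> bool:
        # The housing is held in the second position only while locked.
        return self.lock is LockPosition.LOCK
```

The design point this captures is that the two holes can only communicate (shutter open) while removal is mechanically prevented (housing locked), which is why leakage of transfer residual toner is suppressed.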
According to the above configuration, it is possible to lock the toner housing by the lock member in the state the toner housing is located in the second position and the transfer residual toner discharge hole and the transfer residual toner receiving hole communicate with each other. For this reason, it is possible to securely prevent the toner cartridge from being removed from the drum cartridge in the state the transfer residual toner discharge hole and the transfer residual toner receiving hole communicate with each other. As a result, it is possible to further suppress the transfer residual toner from leaking. Also, it is possible to close the transfer residual toner discharge hole by the shutter in the state the lock member is located in the lock release position. Thereby, it is possible to further suppress the transfer residual toner from leaking.

(9) An image forming apparatus of the present disclosure includes: the drum cartridge; a main body housing; and a drawer. The main body housing has an opening. The main body housing includes a cover. The cover is movable between a cover-closed position and a cover-opened position. In a state the cover is located at the cover-closed position, the cover closes the opening. In a state the cover is located at the cover-opened position, the opening is opened. The drawer is movable between an inner position and an outer position in a moving direction through the opening, in a state the cover is located in the cover-opened position. The moving direction of the drawer intersects with the axis direction. In a state the drawer is located at the inner position, the drawer is located inside of the main body housing. In a state the drawer is located at the outer position, the drawer is located outside of the main body housing. The drawer is configured to support the drum cartridge.

(10) An image forming apparatus according to the disclosure may further include a toner cartridge.
The drum cartridge may be configured to support the toner cartridge.

(11) In a state the drawer is located in the outer position and the toner housing is located in the first position, the transfer residual toner receiving hole may be located downstream of the transfer residual toner discharge hole with respect to a direction in which the drawer moves from the inner position to the outer position. According to the above configuration, after moving the drawer from the inner position to the outer position, the user can move the toner housing downstream in the direction in which the drawer moves from the inner position to the outer position, thereby positioning the toner housing in the first position. For this reason, it is possible to implement the movement of the drawer to the outer position and the movement of the toner housing to the first position from the same side (the downstream side of the direction in which the drawer moves from the inner position to the outer position). As a result, the user can easily operate the drawer and the toner cartridge.

(12) The main body housing may have a first end portion and a second end portion in a moving direction of the drawer. The first end portion has the opening. The second end portion is distant from the first end portion. In a state the drawer is located in the outer position and the toner housing is located in the first position, the transfer residual toner discharge hole may be located between the transfer residual toner receiving hole and the second end portion in the moving direction of the drawer.
11860568

DETAILED DESCRIPTION OF EMBODIMENTS

A preferred embodiment of the present invention will be described with reference to drawings. In the following embodiment, a direction which crosses an electric contact surface of an IC chip (in the following embodiment, more preferably, a direction which is perpendicular to the electric contact surface of the IC chip) will be referred to as a “first direction”, and a moving direction of a casing in a separating operation will be referred to as a “second direction”. An extending direction of a rotation axis of a developing roller will be referred to as a “third direction”.

1. Overall Structure of Developing Cartridge

FIGS. 1 to 5 are perspective views of a developing cartridge 1. The developing cartridge 1 is used for an electro-photographic type image forming apparatus (for example, a laser printer or an LED printer), and is a unit for supplying developer (toner, for example) to a photosensitive drum. As shown in FIG. 1, the developing cartridge 1 is attached to a drawer unit 90 of the image forming apparatus. When the developing cartridge 1 is replaced, the drawer unit 90 is drawn out from a front surface of the image forming apparatus. The drawer unit 90 includes four cartridge holding portions 91, and a developing cartridge 1 is attached to each of the four cartridge holding portions 91. Each of the four cartridge holding portions 91 includes a photosensitive drum. In the present embodiment, four developing cartridges 1 are attached to one drawer unit 90. Each of the four developing cartridges 1 is configured to accommodate developer therein, and the color of the developer differs among the four developing cartridges (cyan, magenta, yellow, and black, for example). However, the number of developing cartridges 1 that can be attached to the drawer unit 90 may be 1 to 3, or may be greater than or equal to 5.
As shown in FIGS. 2 to 5, each developing cartridge 1 according to the present embodiment includes a casing 10, an agitator 20, a developing roller 30, a first gear portion 40, a second gear portion 50, and an IC (Integrated Circuit) chip assembly 60. The developing roller 30 is rotatable about a rotation axis extending in the third direction. The developing roller 30 according to the present embodiment includes a roller body 31 and a roller shaft 32. The roller body 31 is a cylinder-shaped member extending in the third direction. The roller body 31 is made of an elastic rubber, for example. The roller shaft 32 is a cylindrical member penetrating through the roller body 31 in the third direction. The roller shaft 32 is made of metal or conductive resin. The roller shaft 32 may not penetrate through the roller body 31 in the third direction. For example, each of a pair of roller shafts 32 may extend from each end of the roller body 31 in the third direction. The agitator 20 includes an agitator shaft 21 and an agitation blade 22. The agitator shaft 21 extends along the rotation axis extending in the third direction. The agitation blade 22 expands outward from the agitator shaft 21 in a radial direction. The agitation blade 22 is positioned inside a developing chamber 13 of the casing 10. A first agitator gear 44 and a second agitator gear 51 described later are mounted to both end portions in the third direction of the agitator shaft 21, respectively. Accordingly, the agitator shaft 21 and the agitation blade 22 are rotatable with the first agitator gear 44 and the second agitator gear 51. The developer which is accommodated in the developing chamber 13 is agitated by rotation of the agitation blade 22. Instead of the agitation blade 22, the agitator may include an agitation film. The casing 10 is a case configured to accommodate therein developer (toner, for example) for electro-photographic printing. The casing 10 includes a first outer surface 11 and a second outer surface 12.
The first outer surface 11 and the second outer surface 12 are separated from each other in the third direction. The first gear portion 40 and the IC chip assembly 60 are positioned at the first outer surface 11. The second gear portion 50 is positioned at the second outer surface 12. The casing 10 extends in the third direction from the first outer surface 11 to the second outer surface 12. The developing chamber 13 for accommodating the developer is provided in the casing 10. The casing 10 has an opening 14. The opening 14 communicates between the developing chamber 13 and an exterior of the developing chamber 13. The opening 14 is positioned at one end portion in the second direction of the casing 10. The developing roller 30 is positioned at the opening 14. That is, the developing roller 30 is positioned closer to one side of the casing 10 than to the center of the casing 10 in the second direction. The roller body 31 is fixed to the roller shaft 32 so as to be incapable of rotating relative to the roller shaft 32. One end portion of the roller shaft 32 in the third direction is mounted to a developing gear 42 described later so as to be incapable of rotating relative to the developing gear 42. When the developing gear 42 rotates, the roller shaft 32 rotates with the developing gear 42 and then the roller body 31 rotates together with the roller shaft 32. When the developing cartridge 1 receives a driving force, the developer is supplied from the developing chamber 13 in the casing 10 onto an outer peripheral surface of the developing roller 30 via a supply roller (omitted in the figure). At this time, the developer is triboelectrically charged between the supply roller and the developing roller 30. On the other hand, bias voltage is applied to the roller shaft 32 of the developing roller 30. Accordingly, static electricity between the roller shaft 32 and the developer moves the developer toward the outer peripheral surface of the roller body 31.
The developing cartridge1further includes a layer thickness regulation blade which is omitted in the figure. The layer thickness regulation blade regulates the developer supplied onto the outer peripheral surface of the roller body31so that the developer forms a thin layer of constant thickness. Then, the developer on the outer peripheral surface of the roller body31is supplied to the photosensitive drum of the drawer unit90. At this time, the developer moves from the roller body31to the photosensitive drum on the basis of an electrostatic latent image formed on the outer peripheral surface of the photosensitive drum. Accordingly, the electrostatic latent image is visualized on the outer peripheral surface of the photosensitive drum. The first gear portion40is positioned at one end portion in the third direction of the casing10. That is, the first gear portion40is positioned at the first outer surface11.FIG.4is a perspective view of the developing cartridge1in a state in which the first gear portion40is disassembled. As shown inFIG.4, the first gear portion40includes a coupling41, a developing gear42, an idle gear43, a first agitator gear44, and a first cover45. The gear teeth of each gear are not illustrated inFIG.4. The coupling41is a gear for initially receiving the driving force applied from the image forming apparatus. The coupling41is rotatable about a rotation axis extending in the third direction. The coupling41includes a coupling portion411and a coupling gear412. The coupling portion411and the coupling gear412are integral with each other and made of a resin, for example. The coupling portion411has a coupling hole413depressed in the third direction. The coupling gear412includes a plurality of gear teeth. The gear teeth are provided on the entire outer peripheral surface of the coupling gear412at equal intervals.
When the drawer unit90to which the developing cartridge1is attached is accommodated in the image forming apparatus, a drive shaft of the image forming apparatus is inserted into the coupling hole413of the coupling portion411. With this configuration, the drive shaft and the coupling portion411are connected so as to be incapable of rotating relative to each other. Accordingly, the coupling portion411rotates when the drive shaft rotates, and the coupling gear412rotates together with the coupling portion411. The developing gear42is a gear for rotating the developing roller30. The developing gear42is rotatable about a rotation axis extending in the third direction. The developing gear42includes a plurality of gear teeth. The gear teeth are provided on the entire outer peripheral surface of the developing gear42at equal intervals. At least a portion of the plurality of gear teeth of the coupling gear412meshes with at least a portion of the plurality of gear teeth of the developing gear42. Further, the developing gear42is mounted to the end portion of the roller shaft32in the third direction so as to be incapable of rotating relative to the roller shaft32. With this construction, when the coupling gear412rotates, the developing gear42rotates with the coupling gear412and the developing roller30also rotates with the developing gear42. The idle gear43is a gear for transmitting rotational driving force of the coupling gear412to the first agitator gear44. The idle gear43is rotatable about a rotation axis extending in the third direction. The idle gear43includes a large diameter gear portion431and a small diameter gear portion432. The large diameter gear portion431and the small diameter gear portion432are arranged in the third direction. The small diameter gear portion432is positioned between the large diameter gear portion431and the first outer surface11of the casing10. 
In other words, the large diameter gear portion431is farther away from the first outer surface11than the small diameter gear portion432is. A diameter of the small diameter gear portion432is smaller than a diameter of the large diameter gear portion431. In other words, a diameter of an addendum circle of the small diameter gear portion432is smaller than a diameter of an addendum circle of the large diameter gear portion431. The large diameter gear portion431and the small diameter gear portion432are integral with each other and are made of a resin. The large diameter gear portion431includes a plurality of gear teeth, and the plurality of gear teeth are provided on the entire outer peripheral surface of the large diameter gear portion431at equal intervals. The small diameter gear portion432includes a plurality of gear teeth, and the plurality of gear teeth are provided on the entire outer peripheral surface of the small diameter gear portion432at equal intervals. The number of gear teeth of the small diameter gear portion432is less than the number of gear teeth of the large diameter gear portion431. At least a portion of the plurality of gear teeth of the coupling gear412meshes with at least a portion of the plurality of gear teeth of the large diameter gear portion431. Further, at least a portion of the plurality of gear teeth of the small diameter gear portion432meshes with at least a portion of the plurality of gear teeth of the first agitator gear44. When the coupling gear412rotates, the large diameter gear portion431rotates together with the coupling gear412and the small diameter gear portion432rotates together with the large diameter gear portion431. Also, the first agitator gear44rotates with the rotation of the small diameter gear portion432. The first agitator gear44is a gear for rotating the agitator20in the developing chamber13. The first agitator gear44is rotatable about a rotation axis extending in the third direction. 
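The speed relationships in the gear train described above follow the usual tooth-count ratios: the coupling gear drives the large diameter gear portion, the small diameter gear portion rotates at the same speed as the large diameter gear portion, and the small diameter gear portion in turn drives the first agitator gear. The following sketch illustrates these relationships; all tooth counts and the input speed are illustrative assumptions, not values given in the embodiment.

```python
# Hypothetical sketch of the rotational-speed relationships in the first
# gear portion. Tooth counts and input rpm are illustrative assumptions.

def output_speed(input_rpm: float, driver_teeth: int, driven_teeth: int) -> float:
    """Speed of a driven gear meshing with a driver gear."""
    return input_rpm * driver_teeth / driven_teeth

# Coupling gear 412 -> large diameter gear portion 431 of the idle gear 43.
coupling_rpm = 120.0
large_rpm = output_speed(coupling_rpm, driver_teeth=40, driven_teeth=50)

# The small diameter gear portion 432 rotates together with the large
# diameter gear portion 431, so it turns at the same rpm, but it has fewer
# teeth and drives the first agitator gear 44 at a reduced speed.
agitator_rpm = output_speed(large_rpm, driver_teeth=16, driven_teeth=48)

print(large_rpm)    # 96.0
print(agitator_rpm) # 32.0
```

Because the number of gear teeth of the small diameter gear portion is less than that of the large diameter gear portion, the agitator is geared down relative to the coupling, which matches the general arrangement in the text.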
The first agitator gear44includes a plurality of gear teeth, and the plurality of gear teeth are provided on the entire outer peripheral surface of the first agitator gear44at equal intervals. As described above, at least a portion of the plurality of gear teeth of the small diameter gear portion432meshes with at least a portion of the plurality of gear teeth of the first agitator gear44. Further, the first agitator gear44is mounted to one end portion of the agitator shaft21in the third direction so as to be incapable of rotating relative to the agitator shaft21. With this configuration, when the rotational driving force is transmitted from the coupling41to the first agitator gear44via the idle gear43, the first agitator gear44rotates and the agitator20rotates together with the first agitator gear44. The first cover45is fixed to the first outer surface11of the casing10by screws, for example. The coupling gear412, the developing gear42, the idle gear43, and the first agitator gear44are accommodated in a space between the first outer surface11and the first cover45. The coupling hole413of the coupling portion411is exposed to an outside of the first cover45. The first cover45according to the present embodiment also serves as a holder cover for holding the holder62of the IC chip assembly60described later. A structure of the first cover45as the holder cover will be described later in detail. The second gear portion50is positioned at the other end portion of the casing10in the third direction. In other words, the second gear portion50is positioned at the second outer surface12.FIG.5is a perspective view of the developing cartridge1in a state in which the second gear portion50is disassembled. As illustrated inFIG.5, the second gear portion50includes a second agitator gear51, a detection gear52, an electrically conductive member53, and a second cover54. Note that, inFIG.5, the gear teeth of the second agitator gear51and the detection gear52are not illustrated.
The second agitator gear51is a gear for transmitting rotational driving force of the agitator shaft21to the detection gear52. The second agitator gear51is rotatable about a rotation axis extending in the third direction. The second agitator gear51includes a plurality of gear teeth, and the plurality of gear teeth are provided on the entire outer peripheral surface of the second agitator gear51at equal intervals. At least a portion of the plurality of gear teeth of the second agitator gear51meshes with at least a portion of a plurality of gear teeth of the detection gear52. The second agitator gear51is mounted to the other end portion of the agitator shaft21in the third direction so as to be incapable of rotating relative to the agitator shaft21. With this configuration, the second agitator gear51rotates with rotation of the agitator shaft21. The detection gear52is a gear for providing information on the developing cartridge1for the image forming apparatus. The information on the developing cartridge1includes, for example, information as to whether the developing cartridge1is a new (unused) cartridge or a used cartridge. The information on the developing cartridge1also includes, for example, a product specification of the developing cartridge1. The product specification of the developing cartridge1includes, for example, the number of sheets that can be printed with the developer accommodated in the developing cartridge1(i.e., the sheet-yield number). The detection gear52is rotatable about a rotation axis extending in the third direction. The detection gear52includes a plurality of gear teeth. The gear teeth are provided on a portion of an outer peripheral surface of the detection gear52. When the drawer unit90to which an unused developing cartridge1is attached is accommodated in the image forming apparatus, the detection gear52can rotate by meshing with the second agitator gear51.
When the detection gear52is disengaged from the second agitator gear51, rotation of the detection gear52is stopped. When the drawer unit90to which a used developing cartridge1is attached is accommodated in the image forming apparatus, the detection gear52does not mesh with the second agitator gear51. Thus, the detection gear52cannot rotate. A gear may be provided between the second agitator gear51and the detection gear52. For example, the second gear portion50may further include a second idle gear meshing with both the second agitator gear51and the detection gear52. In this case, rotational driving force of the second agitator gear51may be transmitted to the detection gear52via the second idle gear. As illustrated inFIG.5, the detection gear52includes a detecting protrusion521. The detecting protrusion521protrudes in the third direction. The detecting protrusion521has a circular arc shape extending along a portion of an addendum circle of the detection gear52about the rotation axis of the detection gear52. The electrically conductive member53is electrically conductive. The electrically conductive member53is formed of a material such as electrically conductive metal or electrically conductive resin. The electrically conductive member53is positioned at the second outer surface12of the casing10. The electrically conductive member53includes a gear shaft531protruding in the third direction. The detection gear52rotates about the gear shaft531in a state where the detection gear52is supported by the gear shaft531. The electrically conductive member53further includes a bearing portion532. The bearing portion532is in contact with the roller shaft32of the developing roller30. The drawer unit90includes an electrically conductive lever (not illustrated) that is in contact with the gear shaft531in a state where the developing cartridge1is attached to the drawer unit90. Instead of the drawer unit90, the image forming apparatus may include the electrically conductive lever.
When the lever contacts the gear shaft531, electrical connection between the lever and the electrically conductive member53is established and electrical connection between the electrically conductive member53and the roller shaft32is also established. When the image forming apparatus is in operation, electric power is supplied to the roller shaft32through the lever, and the roller shaft32can keep a prescribed bias voltage. Note that the detecting protrusion521covers a portion of an outer peripheral surface of the gear shaft531. Hence, when the detection gear52rotates after a new developing cartridge1is attached to the drawer unit90, the contact state between the lever and the gear shaft531changes according to the shape of the detection gear52. More specifically, the contact state between the lever and the gear shaft531changes according to the shape of the detecting protrusion521because the detecting protrusion521passes between the lever and the gear shaft531as the detection gear52rotates. Alternatively, the contact state between the lever and the gear shaft531changes according to the number of detecting protrusions521provided on the detection gear52because one or more detecting protrusions521pass between the lever and the gear shaft531as the detection gear52rotates. The image forming apparatus recognizes the change in the contact state between the lever and the gear shaft531to identify whether the attached developing cartridge1is new or used and/or the product specification of the attached developing cartridge1. However, the method for detecting the information on the developing cartridge1using the detection gear52is not limited to detection of electrical conduction. For example, movement of the lever may be optically detected. Further, the detecting protrusion521may be formed to have a different circumferential position and length from those in the present embodiment.
Further, the detection gear52may have a plurality of detecting protrusions521. The shape of the detection gear52may vary according to the product specification of the developing cartridge1such as the number of printable sheets. More specifically, the number of detecting protrusions521may be differentiated among a plurality of types of developing cartridges, and the product specification of each type of developing cartridge may be identified based on the number of detecting protrusions521. When a plurality of types of developing cartridges include the same number of detecting protrusions521, circumferential intervals between the plurality of detecting protrusions521may be differentiated among the types of developing cartridges. In the above-described case, a circumferential length of each detecting protrusion521and/or a radial length of each detecting protrusion521may be differentiated based on the product specification of each type of developing cartridge. In this way, variations in the number of detecting protrusions521and/or circumferential positions of the detecting protrusions521enable the image forming apparatus to identify the product specification of each type of developing cartridge. The detection gear52may include a plurality of components. For example, the detecting protrusion521and the detection gear52may be different components. Further, the detection gear52may include a detection gear body and a supplemental member that shifts its position relative to the detection gear body in accordance with rotation of the detection gear body. In this case, the supplemental member changes between a first position in which the supplemental member is in contact with the lever and a second position in which the supplemental member is not in contact with the lever as the supplemental member shifts relative to the detection gear body.
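The identification scheme described above amounts to counting (and possibly timing) the contact-state changes the detecting protrusions produce at the lever as the detection gear rotates. The sketch below shows one hypothetical way an image forming apparatus could decode such a signal; the sampled signal, pulse-to-specification mapping, and sheet-yield numbers are all illustrative assumptions, not part of the embodiment.

```python
# Illustrative sketch (not from the embodiment) of decoding the number of
# detecting protrusions from a sampled lever-contact signal.

def count_protrusion_pulses(contact_samples):
    """Count rising edges in a sampled lever-contact signal.
    Each detecting protrusion passing the lever yields one pulse."""
    pulses = 0
    for prev, cur in zip(contact_samples, contact_samples[1:]):
        if not prev and cur:
            pulses += 1
    return pulses

# Hypothetical mapping: pulse count -> printable-sheet yield.
SPEC_BY_PULSES = {1: 1500, 2: 3000, 3: 6000}

samples = [False, True, True, False, False, True, False]  # two protrusions pass
pulses = count_protrusion_pulses(samples)
print(pulses)                       # 2
print(SPEC_BY_PULSES.get(pulses))   # 3000
```

A used cartridge whose detection gear no longer meshes with the second agitator gear would produce a flat signal (zero pulses), which is how the apparatus could distinguish new from used cartridges under this scheme.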
As a result, the supplemental member may change the position of the lever. Further, the detection gear52may be a movable gear that can move in the third direction. The movable gear need not be a partially toothless gear. In other words, the movable gear may include a plurality of gear teeth provided on an outer peripheral surface of the movable gear along the circumference of the movable gear. In this case, the movable gear moves in the third direction in accordance with rotation of the movable gear, whereby the movable gear is disengaged from the second agitator gear51. The movable gear may be moved in the third direction away from the second outer surface12or toward the second outer surface12. Further, the detection gear52may include a cam, and the cam may contact the detecting protrusion521. In this case, the cam rotates together with rotation of the detection gear52, and the rotating cam contacts the detecting protrusion521. This causes the detecting protrusion521to move relative to the detection gear52. The detecting protrusion521may be rotatably attached to a shaft provided at the second outer surface12or the second cover54. Alternatively, the detecting protrusion521may have a shaft, and the shaft of the detecting protrusion521may be inserted into a hole formed in the second outer surface12or the second cover54so that the detecting protrusion521is rotatably supported by the second outer surface12or the second cover54. Further, in the present embodiment, the gear shaft531extends in the third direction from the second outer surface12. However, the gear shaft531does not need to be in direct contact with the second outer surface12. For example, the casing10may have a through-hole penetrating the second outer surface12and a cap fitted in the through-hole, and a gear shaft may extend from the cap in the third direction.
In this case, the cap includes the gear shaft protruding in the third direction toward the detection gear52, and the detection gear52rotates about the gear shaft531in a state where the detection gear52is supported by the gear shaft531. The second cover54is fixed to the second outer surface12of the casing10by a screw, for example. The second agitator gear51, the detection gear52, and the electrically conductive member53are accommodated in a space between the second outer surface12and the second cover54. The second cover54has an opening541. A portion of the detection gear52and a portion of the gear shaft531are exposed to an outside through the opening541. The electrically conductive lever of the drawer unit90contacts the detection gear52and the gear shaft531through the opening541.

2. IC Chip Assembly

The IC chip assembly60is positioned at the first outer surface11of the casing10.FIG.6is an exploded perspective view of the IC chip assembly60.FIG.7is a cross-sectional view of the IC chip assembly60taken along a plane perpendicular to the third direction. As shown inFIGS.2through7, the IC chip assembly60includes an IC (Integrated Circuit) chip61as a storage medium and a holder62for holding the IC chip61. The holder62is held by the first cover45at one end of the casing10in the third direction. The IC chip61stores various information on the developing cartridge1. The IC chip61includes an electric contact surface611. The electric contact surface611is made of electrically conductive metal. The IC chip61is fixed to an outer surface of the holder62in the third direction. The drawer unit90includes an electric connector. The electric connector is made of metal, for example. The electric connector of the drawer unit90contacts the electric contact surface611when the developing cartridge1is attached to the drawer unit90. At this time, the image forming apparatus can perform at least one of reading information from the IC chip61and writing information in the IC chip61.
At least a portion of the holder62is covered by the first cover45. The holder62includes a boss621a, a boss621b, and a boss621c. Each of the boss621aand boss621bextends in the third direction toward the first cover45from a surface of the holder62opposite to a surface thereof facing the casing10. The boss621aand boss621bare aligned in the second direction. As shown inFIGS.2and4, the first cover45has a through-hole451aand a through-hole451b. The through-hole451aand through-hole451bpenetrate the first cover45in the third direction, respectively. The through-hole451aand through-hole451bare aligned in the second direction. The boss621ais inserted into the through-hole451a. The boss621bis inserted into the through-hole451b. The boss621cextends in the third direction toward the casing10from the surface of the holder62facing the casing10. On the other hand, the casing10includes a recessed portion15. The recessed portion15is recessed in the third direction on the first outer surface11of the casing10. The boss621cis inserted into the recessed portion15. The bosses621a,621band621cmay have a circular columnar shape or a rectangular columnar shape, respectively. The through-hole451ahas a dimension (inner dimension) in the second direction larger than a dimension (outside dimension) of the boss621ain the second direction. The through-hole451bhas a dimension (inner dimension) in the second direction larger than a dimension (outside dimension) of the boss621bin the second direction. Further, the recessed portion15has a dimension (inner dimension) in the second direction larger than a dimension (outer dimension) of the boss621cin the second direction. Hence, the holder62can move with the bosses621a,621band621cin the second direction relative to the casing10and the first cover45. As the holder62moves in the second direction, the IC chip61having the electric contact surface611also moves in the second direction together with the holder62. 
The through-hole451ahas a dimension (inner dimension) in the first direction larger than a dimension (outer dimension) of the boss621ain the first direction. The through-hole451bhas a dimension (inner dimension) in the first direction larger than a dimension (outer dimension) of the boss621bin the first direction. Further, the recessed portion15has a dimension (inner dimension) in the first direction larger than a dimension (outer dimension) of the boss621cin the first direction. Hence, the holder62can move with the bosses621a,621band621cin the first direction relative to the casing10and the first cover45. As the holder62moves in the first direction, the IC chip61having the electric contact surface611also moves in the first direction together with the holder62. The holder62may be movable in the third direction between the first cover45and the first outer surface11. Alternatively, the holder62may include a single boss, or three or more bosses. Likewise, the first cover45may have a single through-hole, or three or more through-holes. Alternatively, instead of the through-holes451aand451b, the first cover45may include one or more recesses into which the bosses621aand/or621bare inserted. As shown inFIGS.6and7, the holder62includes a first end portion710and a second end portion720. The first end portion710is one end portion of the holder62in the first direction. The second end portion720is another end portion of the holder62in the first direction. The first end portion710is movable relative to the second end portion720in the first direction. More specifically, the holder62of the present embodiment includes a first holder member71, a second holder member72, and a coil spring73positioned between the first holder member71and the second holder member72. The first holder member71is made of resin, for example. The second holder member72is made of resin, for example. The first holder member71includes the first end portion710.
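The movement described above is possible because each hole or recess is dimensioned larger than the boss it receives, leaving clearance in the direction of movement; the holder can shift by at most the smallest of those clearances. The sketch below illustrates that relationship with hypothetical dimensions (none of the numeric values come from the embodiment).

```python
# Hypothetical sketch of the boss/hole clearance condition that lets the
# holder 62 move relative to the casing 10 and first cover 45.

def clearance(inner_dim_mm: float, outer_dim_mm: float) -> float:
    """Play available between a boss and its hole in one direction."""
    return inner_dim_mm - outer_dim_mm

# Illustrative inner/outer dimensions in one direction of movement.
clearances = {
    "boss 621a / through-hole 451a": clearance(6.0, 4.0),
    "boss 621b / through-hole 451b": clearance(6.0, 4.0),
    "boss 621c / recessed portion 15": clearance(5.0, 3.5),
}

# The holder (and the IC chip fixed to it) can shift by at most the
# smallest clearance among the boss/hole pairs.
travel_mm = min(clearances.values())
print(travel_mm)  # 1.5
```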
An outer surface of the first holder member71includes a holding surface620. The IC chip61is fixed to the holding surface620. The second holder member72includes the second end portion720. After assembling the first holder member71, the second holder member72and the coil spring73as the holder62, the first end portion710and the second end portion720are separated from each other in the first direction. The coil spring73is an elastic member extending in the first direction. The coil spring73is positioned between the first end portion710and the second end portion720in the first direction. The coil spring73can be stretched or compressed in the first direction at least between a first state and a second state more compressed than the first state. The coil spring73in the first state has a length in the first direction longer than a length of the coil spring73in the second state in the first direction. Therefore, a distance between the first end portion710and the second end portion720in the first direction in the first state is longer than a distance between the first end portion710and the second end portion720in the first direction in the second state. At least, the coil spring73in the second state has a length in the first direction shorter than a natural length of the coil spring73. As shown inFIGS.6and7, the first holder member71includes a pawl714aand a pawl714b. The pawl714aand the pawl714brespectively protrude from the first holder member71in a direction crossing the first direction. The second holder member72has an opening721aand an opening721b. The pawl714ais inserted into the opening721a. The pawl714bis inserted into the opening721b. In the first state, the pawl714ais in contact with the second holder member72at a periphery of the opening721aon a side of the first end portion710in the first direction. 
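The first and second states of the coil spring can be summarized with Hooke's law: the more the spring is compressed below its natural length, the larger the restoring force pushing the first and second end portions apart. The following sketch uses assumed lengths and an assumed spring constant purely for illustration; only the ordering of the lengths (natural length > first state > second state) is taken from the description.

```python
# Hedged sketch of the coil spring 73's two states using Hooke's law.
# Natural length, state lengths, and spring constant are assumptions.

def spring_force(natural_len_mm: float, current_len_mm: float, k_n_per_mm: float) -> float:
    """Restoring force of a compressed coil spring (0 at or above natural length)."""
    compression = max(0.0, natural_len_mm - current_len_mm)
    return k_n_per_mm * compression

NATURAL = 12.0
FIRST_STATE = 10.0   # longer than the second state
SECOND_STATE = 8.0   # more compressed than the first state

assert FIRST_STATE > SECOND_STATE   # matches the description
assert SECOND_STATE < NATURAL       # second state is shorter than natural length

print(spring_force(NATURAL, FIRST_STATE, 0.5))   # 1.0
print(spring_force(NATURAL, SECOND_STATE, 0.5))  # 2.0
```

The larger force in the second state is what presses the first and second end portions against the guide plates during attachment, as described later.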
Also, in the first state, the pawl714bis in contact with the second holder member72at a periphery of the opening721bon a side of the first end portion710in the first direction. With this structure, the length of the coil spring73in the first direction is prevented from becoming longer than the length of the coil spring73in the first state. Further, the first holder member71cannot easily be detached from the second holder member72. On the other hand, in the second state, the pawl714ais separated from the periphery of the opening721aon the side of the first end portion710in the first direction, and the pawl714bis separated from the periphery of the opening721bon the side of the first end portion710in the first direction. Instead of the opening721aand the opening721b, one or more recesses or one or more steps which are capable of contacting the pawl714aand the pawl714b, respectively, may be provided. Alternatively, the first holder member71may have one or more openings, one or more recesses, or one or more steps, whereas the second holder member72may include one or more pawls. Due to the dimensional differences between the through-holes and the bosses described above and the stretching and compression of the coil spring73, the holding surface620of the holder62can move in the first direction relative to the casing10. Hereinafter, the position of the holding surface620in the first direction relative to the casing10will be referred to as an “initial position.” Before attaching the developing cartridge1to the drawer unit90, the holding surface620is in the initial position.
Further, the position of the holding surface620in the first direction relative to the casing10at a moment when the coil spring73is most compressed during attachment of the developing cartridge1to the drawer unit90will be referred to as an “intermediate position.” Further, the position of the holding surface620in the first direction relative to the casing10when the electric contact surface611makes contact with an electric connector913described later will be referred to as a “contact position.” Further, the position of the holding surface620in the first direction relative to the casing10after attachment of the developing cartridge1to the drawer unit90has been completed will be referred to as a “final position.” The outer surface of the first end portion710further includes a first guide surface711(an example of a first surface), a second guide surface712(an example of a second surface), and third guide surfaces713aand713b(an example of a third surface), in addition to the holding surface620described above. The first guide surface711is positioned at one side of the holding surface620in the second direction which is closer to the developing roller30than another side of the holding surface620in the second direction. The first guide surface711is inclined relative to the electric contact surface611of the IC chip61held by the holding surface620. Specifically, the first guide surface711is inclined at an acute angle relative to the electric contact surface611. Here, one end of the first end portion710in the second direction will be defined as a first outer end position711a(third position). One end of the holding surface620in the second direction is defined as a first inner end position711b(fourth position). As illustrated inFIG.7, the first guide surface711extends from the first outer end position711ato the first inner end position711btoward the electric contact surface611.
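The named positions of the holding surface form a fixed sequence during attachment: initial, then intermediate, then contact, then final. Modeling them as an explicit ordered sequence, as sketched below, is purely an aid to reading; only the names and their order come from the description.

```python
# Illustrative model of the holding surface 620's named positions during
# attachment. The enum and transition helper are assumptions for clarity.

from enum import Enum

class HoldingSurfacePosition(Enum):
    INITIAL = "before attachment"
    INTERMEDIATE = "coil spring most compressed during attachment"
    CONTACT = "electric contact surface touches the electric connector"
    FINAL = "attachment completed"

SEQUENCE = [
    HoldingSurfacePosition.INITIAL,
    HoldingSurfacePosition.INTERMEDIATE,
    HoldingSurfacePosition.CONTACT,
    HoldingSurfacePosition.FINAL,
]

def next_position(current: HoldingSurfacePosition) -> HoldingSurfacePosition:
    """Advance to the next named position; stays at FINAL once reached."""
    i = SEQUENCE.index(current)
    return SEQUENCE[min(i + 1, len(SEQUENCE) - 1)]

print(next_position(HoldingSurfacePosition.INITIAL).name)  # INTERMEDIATE
```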
The first outer end position711ais farther away from the electric contact surface611than the first inner end position711bboth in the first direction and the second direction. In addition, as illustrated inFIG.7, the distance d1between the first outer end position711aand first inner end position711bin the first direction is greater than the distance d2between the electric contact surface611and first inner end position711bin the first direction. The second guide surface712is positioned at one side of the holding surface620in the second direction which is farther from the developing roller30than another side of the holding surface620in the second direction. The second guide surface712is inclined relative to the electric contact surface611of the IC chip61held by the holding surface620. Specifically, the second guide surface712is inclined at an acute angle relative to the electric contact surface611. Here, another end of the first end portion710in the second direction will be defined as a second outer end position712a(fifth position). Another end of the holding surface620in the second direction is defined as a second inner end position712b(sixth position). As illustrated inFIG.7, the second guide surface712extends from the second outer end position712ato the second inner end position712btoward the electric contact surface611. The second outer end position712ais farther away from the electric contact surface611than the second inner end position712bboth in the first direction and the second direction. In addition, as illustrated inFIG.7, the distance d3between the second outer end position712aand second inner end position712bin the first direction is greater than the distance d4between the electric contact surface611and second inner end position712bin the first direction. The third guide surface713ais positioned at one side of the electric contact surface611in the third direction. 
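The stated relationships (d1 > d2 for the first guide surface and d3 > d4 for the second) mean each inclined guide surface rises, in the first direction, by more than the electric contact surface stands above the adjacent inner end position, while the incline angles stay acute. The sketch below checks those constraints with made-up values; the distances and second-direction runs are illustrative assumptions, not dimensions from the embodiment.

```python
# Illustrative check of the guide-surface dimensional relationships.
# All numeric values are assumptions chosen to satisfy the stated constraints.

import math

d1, d2 = 3.0, 1.0   # first guide surface 711: d1 > d2 (first-direction distances)
d3, d4 = 2.5, 0.8   # second guide surface 712: d3 > d4

# Hypothetical second-direction runs of the two inclined surfaces.
run1, run2 = 4.0, 4.0
angle1 = math.degrees(math.atan2(d1, run1))
angle2 = math.degrees(math.atan2(d3, run2))

assert d1 > d2 and d3 > d4                   # as stated in the description
assert 0 < angle1 < 90 and 0 < angle2 < 90   # acute inclination, as stated

print(round(angle1, 1), round(angle2, 1))  # 36.9 32.0
```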
The third guide surface713bis positioned at another side of the electric contact surface611in the third direction. The third guide surfaces713a,713bextend in the second direction respectively. Each of the third guide surfaces713a,713bis farther away from the coil spring73than the electric contact surface611in the first direction. Therefore, the electric contact surface611is positioned at a recessed area which is recessed toward the coil spring73side relative to the third guide surfaces713a,713b. Each of the first guide surface711, second guide surface712, and third guide surfaces713a,713bmay be planar or curved. However, it is preferable that each of the first guide surface711, second guide surface712, and third guide surfaces713a,713bis a smooth surface without steps so that each of the first guide surface711, second guide surface712, and third guide surfaces713a,713bdoes not catch on a portion of the drawer unit90when the developing cartridge1is attached to the drawer unit90.

3. Attaching Operation

Subsequently, operation when each developing cartridge1is attached to the drawer unit90will be described.FIGS.8through14respectively illustrate how the developing cartridge1is attached to one of the cartridge holding portions91of the drawer unit90. When the developing cartridge1is attached to the cartridge holding portion91, as illustrated inFIG.8, the developing roller30of the developing cartridge1first faces an insertion opening910of the cartridge holding portion91. At this time, the first end portion710and the second end portion720of the holder62are not in contact with the drawer unit90. Thus, the coil spring73is in the first state described above. The position of the holding surface620with respect to the casing10in the first direction is the initial position described above. The developing cartridge1is inserted into the cartridge holding portion91in the second direction, as shown by a dashed arrow illustrated inFIG.8.
The cartridge holding portion91includes a first guide plate911and a second guide plate912. The first guide plate911is spaced apart from the second guide plate912in the first direction, and the first guide plate911and the second guide plate912face each other. Each of the first guide plate911and second guide plate912extends along both the second direction and the third direction. The first guide plate911includes an electric connector913made of metal. The electric connector913is contactable with the electric contact surface611of the IC chip61. The electric connector913protrudes from the surface of the first guide plate911toward the second guide plate912in the first direction. When the developing cartridge1is inserted into the cartridge holding portion91, the first guide surface711of the holder62contacts the end of the first guide plate911in the second direction, as illustrated inFIG.9. Then, the first guide plate911presses the first guide surface711, so that the holder62moves in the first direction. At this time, the movement of the holder62is relative movement with respect to the casing10. As a result, the holder62is positioned between the first guide plate911and second guide plate912in the first direction, as illustrated inFIG.10. The first end portion710of the first holder member71then contacts the first guide plate911. The second end portion720of the second holder member72also contacts the second guide plate912. The coil spring73is more compressed in the first direction than in the first state. As illustrated inFIG.11, the first guide plate911includes a guide protrusion914protruding toward the second guide plate912. The guide protrusion914is positioned closer to the insertion opening910than the electric connector913. The guide protrusion914includes a first inclined surface915. The second guide plate912also includes a second inclined surface916.
The distance between the first inclined surface915and second inclined surface916in the first direction becomes gradually smaller in the inserting direction of the developing cartridge1. When the developing cartridge1is further inserted in the second direction, the first holder member71contacts the first inclined surface915and the second holder member72contacts the second inclined surface916. As a result, the first holder member71and second holder member72become closer to each other in the first direction and the length of the coil spring73in the first direction becomes shorter gradually. When each of the third guide surfaces713a,713bof the first holder member71contacts the top portion of the guide protrusion914, the length of the coil spring73in the first direction becomes shortest. That is, the coil spring73enters a shortest state, and the length of the coil spring73in the shortest state is shorter than the length of the coil spring73in the second state described above. The position of the holding surface620relative to the casing10in the first direction is the intermediate position described above. As described above, the IC chip assembly60can change the position of the holding surface620in the first direction when the developing cartridge1is inserted into the drawer unit90. As a result, the developing cartridge1can be inserted into the drawer unit90by changing the position of the holding surface620in the first direction along the guide protrusion914. Therefore, the developing cartridge1can be inserted into the drawer unit90while suppressing friction of the electric contact surface611of the IC chip61. In addition, as illustrated inFIGS.10,11, and12, the electric contact surface611directly contacts the electric connector913after the first guide surface711moves over the guide protrusion914. As a result, friction of the electric connector913can be reduced.
In particular, in the developing cartridge1according to the present embodiment, the electric contact surface611of the IC chip61is positioned at a recessed area which is recessed relative to the third guide surfaces713a,713b. As a result, the top portion of the guide protrusion914contacts only the third guide surfaces713a,713bbut does not contact the electric contact surface611in the state illustrated inFIG.11. Therefore, friction of the guide protrusion914against the electric contact surface611can be prevented. When the developing cartridge1is further inserted in the second direction, the third guide surfaces713a,713bpass the guide protrusion914. The second guide surface712then contacts the guide protrusion914as illustrated inFIG.12. With such contact, the coil spring73stretches again from the shortest state to the second state described above. As a result, the electric contact surface611of the IC chip61contacts the electric connector913as illustrated inFIG.13. The length in the first direction of the coil spring73in the second state is shorter than the length of the coil spring73in the first state and longer than the length of the coil spring73in the shortest state. In addition, the length in the first direction of the coil spring73in the second state is shorter than the natural length of the coil spring73. The relative position of the holding surface620with respect to the casing10in the first direction corresponds to the contact position described above. Consequently, the IC chip assembly60is fixed in a state where the IC chip assembly60is nipped between the electric connector913and second guide plate912. In the present embodiment, the casing10is then inclined in the first direction as shown by a dashed arrow illustrated inFIG.14. As a result, the developing roller30contacts the photosensitive drum92in the drawer unit90.
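The orderings among the coil-spring lengths described above can be collected into a small sketch. The numeric lengths below are hypothetical; only the orderings (first state longer than second state, second state longer than the shortest state, and second state shorter than the natural length) are taken from the description.

```python
# Hypothetical lengths (mm) of the coil spring 73 in the first direction.
# Values are illustrative only; the orderings come from the description above.
natural_length = 12.0  # free (natural) length of the coil spring 73
first_state    = 11.0  # cartridge not yet in contact with the drawer unit 90
second_state   = 9.0   # electric contact surface 611 pressed against connector 913
shortest_state = 7.0   # guide surfaces 713a/713b riding over guide protrusion 914

# Orderings stated in the description:
assert first_state > second_state > shortest_state
# The spring remains compressed while the contact position is maintained,
# so its second-state length stays below the natural length.
assert second_state < natural_length
print("coil-spring state ordering consistent")
```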
At this time, the position of the holding surface620relative to the casing10in the first direction changes from the contact position to the final position described above. The boss621amoves inside of the through-hole451ain the first direction and the boss621bmoves inside of the through-hole451bin the first direction. As a result, the boss621ais not in contact with the edge of the through-hole451aof the first cover45, and the boss621bis not in contact with the edge of the through-hole451bof the first cover45. Thus, the IC chip assembly60and first cover45are not in contact with each other. Accordingly, vibration of the drive unit, such as the first gear portion40, is less likely to be transmitted to the IC chip assembly60when the image forming apparatus executes the print process. Therefore, the contact state of the electric contact surface611and electric connector913can be sufficiently maintained. 4. Separating Operation After the developing cartridge1is attached to the drawer unit90, the drawer unit90can perform a "separating operation" in which the developing roller30is temporarily separated from the photosensitive drum92. As illustrated inFIG.2, the first cover45of the developing cartridge1includes a first columnar protrusion46extending in the third direction. As illustrated inFIG.3, the second cover54of the developing cartridge1includes a second columnar protrusion55extending in the third direction. As illustrated inFIG.1, the drawer unit90includes a pressure member93. The pressure member93is positioned at one side portion of the cartridge holding portion91in the third direction, and another pressure member (not shown inFIG.1) is positioned at another side portion of the cartridge holding portion91in the third direction. The other pressure member has the same structure and the same functions as the pressure member93. Each of the four cartridge holding portions91includes the pressure member93and the other pressure member.
In the motion indicated by the dashed arrow inFIG.14, the pressure member93presses the first columnar protrusion46, the other pressure member presses the second columnar protrusion55in the same manner, and the casing10is thus inclined in the first direction. Accordingly, the position of the holding surface620in the first direction relative to the casing10is changed from the contact position to the final position, described above. FIG.15illustrates the developing cartridge1in the separating operation. During the separating operation, the driving force from the image forming apparatus changes the positions of the first columnar protrusion46and the second columnar protrusion55. Specifically, the lever of the drawer unit90(not illustrated) presses each of the first columnar protrusion46and the second columnar protrusion55, and each of the first columnar protrusion46and the second columnar protrusion55thus moves against the pressing force of the pressure member93. Consequently, as shown by a dashed arrow illustrated inFIG.15, the casing10and the developing roller30of the developing cartridge1move in the second direction so as to move away from the photosensitive drum92. Meanwhile, the IC chip assembly60is fixed in a state where the IC chip assembly60is nipped between the electric connector913and the second guide plate912. Accordingly, the position of the IC chip assembly60is not changed relative to the drawer unit90when the casing10and the developing roller30move in the second direction so that the developing roller30is separated from the photosensitive drum92. Further, the state of the coil spring73does not change from the second state. As a result, the position of the holder62relative to the casing10in the second direction changes from a standard position (first position) to a separation position (second position).
The boss621athen moves inside of the through-hole451ain the second direction and the boss621bthen moves inside of the through-hole451bin the second direction. As described above, the developing cartridge1can change the position of the casing10relative to the drawer unit90in the second direction, without changing the position of the electric contact surface611in the second direction relative to the drawer unit90. Accordingly, the developing cartridge1can maintain the contacting state between the electric contact surface611and the electric connector913during the separating operation. The contacting state between the electric contact surface611and the electric connector913can also be maintained during the shipment of the image forming apparatus in which the developing cartridge1is attached to the drawer unit90. Accordingly, abrasion or wear of the electric contact surface611can be suppressed. 5. Modifications While the description has been made in detail with reference to the specific embodiment thereof, it would be apparent to those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the above-described embodiment. In the following description, differences between the above embodiment and the modifications are mainly explained. 5-1. First Modification In the following, a first modification of the main embodiment is discussed. Due to the many similarities between the first modification and the main embodiment, only differences between the main embodiment and the first modification will be discussed. With regard to all other features, reference is made to the discussion of the main embodiment above.FIG.16is a partial exploded perspective view of the developing cartridge1A according to a first modification. In the first modification, at least a portion of the holder62A holding the IC chip61A is covered by the first cover45A, as illustrated inFIG.16.
As illustrated inFIG.16, the first cover45A includes a boss451aA and a boss451bA. The boss451aA and the boss451bA are arrayed in the second direction. Each of the boss451aA and the boss451bA extends from the first cover45A toward the casing10A in the third direction. The holder62A has a through-hole621A that penetrates the holder62A in the third direction. Both of the boss451aA and the boss451bA are inserted in the through-hole621A. The boss451aA includes one edge of the boss451aA and another edge of the boss451aA in the second direction, and the boss451bA includes one edge of the boss451bA facing the other edge of the boss451aA in the second direction and another edge of the boss451bA in the second direction. The through-hole621A has a dimension in the second direction greater than the distance between the one edge of the boss451aA and the other edge of the boss451bA in the second direction. Specifically, the distance between the one edge of the boss451aA and the other edge of the boss451bA in the second direction is the longest distance between the boss451aA and the boss451bA in the second direction, and the dimension of the through-hole621A in the second direction is greater than the longest distance. The holder62A can move together with the through-hole621A in the second direction relative to both the casing10A and the first cover45A. When the holder62A moves in the second direction, the IC chip61A having the electric contact surface611A moves in the second direction together with the holder62A. The dimension of the through-hole621A in the first direction is greater than each dimension of the boss451aA and the boss451bA in the first direction. Accordingly, the holder62A can move together with the through-hole621A in the first direction relative to both the casing10A and the first cover45A. When the holder62A moves in the first direction, the IC chip61A having the electric contact surface611A moves in the first direction together with the holder62A.
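The clearance conditions above (through-hole621A larger than the boss pair in the second direction and larger than each boss in the first direction) determine how far the holder62A can move. The sketch below illustrates this with hypothetical dimensions; none of the values are from the patent.

```python
# Hypothetical dimensions (mm) for the first modification; illustrative only.
hole_second_dir = 9.0       # through-hole 621A dimension in the second direction
boss_span_second_dir = 7.0  # longest distance between bosses 451aA/451bA (second direction)
hole_first_dir = 5.0        # through-hole 621A dimension in the first direction
boss_first_dir = 3.0        # each boss dimension in the first direction

# Available travel (play) of the holder 62A relative to the casing 10A
# and the first cover 45A in each direction.
play_second = hole_second_dir - boss_span_second_dir
play_first = hole_first_dir - boss_first_dir

# The holder can move only if the hole is strictly larger than the bosses.
assert play_second > 0 and play_first > 0
print(f"holder play: {play_second} mm (second dir), {play_first} mm (first dir)")
```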
The holder62A may be movable in the third direction between the first cover45A and the first outer surface11A. As described above, the first cover45A may include the boss451aA and boss451bA, and the holder62A may have the through-hole621A, so that the electric contact surface611Acan move relative to the casing10A in the first and second directions. In accordance with the configuration, the boss451aA and the boss451bA can be moved in the first direction inside of the through-hole621A when the casing10A is inclined in the first direction during the attachment of the developing cartridge1A to the drawer unit90. When the separating operation is performed after the developing cartridge1A is attached to the drawer unit90, the boss451aA and the boss451bA can move in the second direction inside of the through-hole621A. As a result, the position of the casing10A can be changed in a state where the contact state of the electric contact surface611A and the electric connector is satisfactorily maintained. The number of the bosses may be one, or three or more, instead of two. The number of the through-holes621A formed on the holder62A may be two or more. Instead of the through-hole621A, the holder62A may have a recessed portion in which the boss451aA and the boss451bA can be inserted. Further, the first outer surface of the casing may have a boss, and the holder may have the through-hole or the recessed portion in which the boss of the casing is inserted. Each of the boss451aA and the boss451bA may have either a cylindrical shape or a prism shape. FIG.17is a cross-sectional view of the IC chip assembly60A indicated inFIG.16taken along a plane orthogonal to the third direction. As illustrated inFIG.17, the holder62A of the IC chip assembly60A includes a holder member74A made of resin and a leaf spring75A fixed to the holder member74A.
The holder member74A includes a first end portion740A that is positioned at one end portion of the holder62A in the first direction. The IC chip61A is fixed to the holding surface620A that is a portion of the outer surface of the first end portion740A. The leaf spring75A includes a second end portion750A that is positioned at the other end portion of the holder62A in the first direction. The first end portion740A and the second end portion750A are separated from each other in the first direction in the assembled holder62A. The leaf spring75A is made of a bent elastic metal plate, for example. The leaf spring75A can be stretched or compressed in the first direction between a first state and a second state in which the leaf spring75A is bent more than in the first state. The length in the first direction of the leaf spring75A in the first state is longer than the length in the first direction of the leaf spring75A in the second state. That is, the distance in the first direction between the first end portion740A and the second end portion750A in the first state is longer than the distance in the first direction between the first end portion740A and the second end portion750A in the second state. The length in the first direction of the leaf spring75A in the second state is shorter than the natural length of the leaf spring75A. As described above, instead of the coil spring, the leaf spring75A may be used so that the IC chip assembly60A can be stretched or compressed in the first direction. Further, as described above, the dimensional difference between the boss451aA and the through-hole621A, the dimensional difference between the boss451bA and the through-hole621A, and the stretch and compression of the leaf spring75A enable the electric contact surface611A to move in the first direction relative to the casing10A, when the developing cartridge1A is being attached to the drawer unit90. 5-2.
Second Modification In the following, a second modification of the main embodiment is discussed. Due to the many similarities between the second modification and the main embodiment, only differences between the main embodiment and the second modification will be discussed. With regard to all other features, reference is made to the discussion of the main embodiment above.FIG.18is a partial perspective view illustrating a developing cartridge1B according to a second modification. In the second modification depicted inFIG.18, the electric contact surface611B of an IC chip61B is oriented to face in the third direction. Accordingly, in the second modification, the first direction orthogonal to the electric contact surface611B is the same direction as the third direction. In the second modification depicted inFIG.18, a columnar elastic body63B is positioned between a casing10B and an IC chip assembly60B. As the columnar elastic body63B, for example, a coil spring extending in the first direction may be used. The columnar elastic body63B includes one end portion in the third direction, and the one end portion is fixed to a holder62B of the IC chip assembly60B. The columnar elastic body63B includes another end portion in the third direction, and the other end portion is fixed to a first outer surface of the casing10B. That is, the casing10B and the IC chip assembly60B are connected to each other by the columnar elastic body63B. FIGS.19and20are explanatory diagrams illustrating movement of the IC chip assembly60B in accordance with deformation of the columnar elastic body63B. As illustrated inFIG.19, the columnar elastic body63B is capable of being stretched or compressed in the first direction. As the columnar elastic body63B is stretched or compressed, the position of the electric contact surface611B relative to the casing10B in the first direction also changes.
Further, as illustrated inFIG.20, the columnar elastic body63B can deform in a direction diagonal to the first direction. As the columnar elastic body63B diagonally deforms, the position of the one end of the columnar elastic body63B also changes relative to another end of the columnar elastic body63B in a direction perpendicular to the first direction. FIGS.21and22are explanatory diagrams illustrating how the developing cartridge1B according to the second modification is attached to a drawer unit90B. As illustrated inFIGS.21and22, a first cover45B includes a first frame portion456B and a second frame portion457B, and the first frame portion456B and the second frame portion457B are arranged with a gap between the first frame portion456B and the second frame portion457B in the second direction. The IC chip assembly60B and the columnar elastic body63B are accommodated in an accommodating portion452B which defines a space between the first frame portion456B and the second frame portion457B. The first cover45B further includes a pawl453B protruding from the first frame portion456B toward the accommodating portion452B. As illustrated inFIG.21, before the developing cartridge1B is attached to the drawer unit90B, a portion of the IC chip assembly60Bis in contact with the pawl453B. Hence, the columnar elastic body63B is maintained in a state compressed in the first direction to a length shorter than the natural length of the columnar elastic body63B. When the developing cartridge1B has been attached to the drawer unit90B, as illustrated inFIG.22, the electric contact surface611B of the IC chip61B contacts an electric connector913B. In this state, a length of the columnar elastic body63B in the first direction is shorter than the length in the first direction of the columnar elastic body63B in the compressed state illustrated inFIG.21.
Thus, due to a repulsion force of the columnar elastic body63B, a contact state between the electric contact surface611B and the electric connector913B is maintained. FIG.23is an explanatory diagram illustrating a state where the separating operation is performed after the developing cartridge1B is attached to the drawer unit90B. When the separating operation is performed, as illustrated inFIG.23, the columnar elastic body63B is deformed diagonally with respect to the first direction. Thus, the IC chip assembly60B connected to the one end of the columnar elastic body63B moves in the second direction relative to the casing10B connected to the other end of the columnar elastic body63B. Accordingly, the position of the casing10B in the second direction can be changed without changing the position of the electric contact surface611B in the second direction relative to the drawer unit90B. That is, the separating operation can be performed in a state where the contact state between the electric contact surface611B and the electric connector913B is maintained. 5-3. Third Modification In the following, a third modification of the main embodiment is discussed. Due to the many similarities between the third modification and the main embodiment, only differences between the main embodiment and the third modification will be discussed. With regard to all other features, reference is made to the discussion of the main embodiment above.FIG.24is a perspective view of a developing cartridge1C according to a third modification. In the third modification depicted inFIG.24, an IC chip assembly60C includes an IC chip61C, a holder62C, a shaft portion66C, and a lever67C. The shaft portion66C extends in the second direction within a first cover45C. The shaft portion66C includes one end portion in the second direction, and the one end portion of the shaft portion66C is mounted to the holder62C so as to be incapable of rotating relative to the holder62C.
The shaft portion66C includes another end portion in the second direction, and the other end portion is mounted to the lever67C positioned outside the first cover45C so as to be incapable of rotating relative to the lever67C. Accordingly, as the lever67C pivots about the shaft portion66C as indicated by a dashed line arrow depicted inFIG.24, the shaft portion66C and the holder62C also pivot about the shaft portion66C. In the third modification, the third direction orthogonal to the electric contact surface611C of the IC chip61C is the same direction as the first direction. Accordingly, as the lever67C pivots, the position of the holder62C in the first direction is changed. FIGS.25through27are views of the developing cartridge1C according to the third modification as viewed in the third direction. Before the developing cartridge1C is attached to the drawer unit, as illustrated inFIGS.24and25, the IC chip61C and the holder62C are accommodated inside the first cover45C. When the developing cartridge1C is attached to the drawer unit and then the drawer unit is accommodated in the image forming apparatus, the lever67C pivots about the shaft portion66C. As a result, a portion of the holder62C and the IC chip61C protrude from the first cover45C. Further, as illustrated inFIG.27, the electric contact surface611C of the IC chip61C is in contact with an electric connector913C of the drawer unit. The lever67C may be manually operated by a user after the developing cartridge1C is attached to the drawer unit. Alternatively, the lever67C may be pivoted by a guide surface provided at a main body of the image forming apparatus when the drawer unit is attached to the main body of the image forming apparatus. The first cover45C includes a support surface454C that can contact the lever67C before the lever67C pivots. In a state depicted inFIG.25, one surface of the lever67C in the second direction is in contact with the support surface454C.
Accordingly, the lever67C, the shaft portion66C, the holder62C and the IC chip61C as a whole are supported by the first cover45C in the second direction. However, as illustrated inFIG.26, as the lever67C pivots, the lever67C moves outside the support surface454C. Accordingly, the one surface of the lever67C in the second direction and the support surface454C are not in contact with each other. Further, the holder62C is held at a position depicted inFIG.26by a positioning member of the drawer unit. As a result, a state in which the electric contact surface611C of the IC chip61C and the electric connector913C are in contact with each other is maintained. Further, in the state depicted inFIG.26, the one surface of the holder62C in the second direction is not in contact with the first cover45C. Accordingly, the lever67C, the shaft portion66C, the holder62C and the IC chip61C as a whole can move relative to the first cover45C in the second direction. Consequently, as illustrated inFIG.27, when the separating operation is performed, the casing10C and the first cover45C can move in the second direction in a state where the contact state between the electric contact surface611C of the IC chip61C and the electric connector913C is maintained. 5-4. Fourth Modification In the following, a fourth modification of the main embodiment is discussed. Due to the many similarities between the fourth modification and the main embodiment, only differences between the main embodiment and the fourth modification will be discussed. With regard to all other features, reference is made to the discussion of the main embodiment above.FIG.28is an exploded perspective view illustrating a first cover45D and an IC chip assembly60D of the developing cartridge according to a fourth modification.FIG.29is a cross-sectional view illustrating the first cover45D and the IC chip assembly60D.
In the fourth modification depicted inFIGS.28and29, the electric contact surface611D of the IC chip61D is oriented to face in the third direction. Accordingly, the first direction orthogonal to the electric contact surface611D is the same direction as the third direction. As illustrated inFIGS.28and29, the IC chip assembly60D according to the fourth modification includes the IC chip61D, the holder62D holding the IC chip61D, and a joint member63D. The holder62D includes a plurality of pawls622D, and each of the plurality of pawls622D extends away from the electric contact surface611D in the first direction. In the fourth modification depicted inFIG.28, the holder62D has four pawls622D. The joint member63D includes a fixing portion631D fixed to the first cover45D, and an arm632D extending from the fixing portion631D toward the holder62D in the first direction. The arm632D includes a distal end in the first direction, and a spherical portion633D whose diameter is larger than a thickness of the arm632D. The spherical portion633D is positioned at the distal end of the arm632D. The spherical portion633D is held at a position inside of the holder62D by the plurality of pawls622D. With this configuration, as illustrated inFIG.30, the arm632D and the holder62D are connected to each other so as to be rotatable relative to each other. That is, the holder62D holding the IC chip61D is rotatable relative to the arm632D about the spherical portion633D. Accordingly, the position of the electric contact surface611D of the IC chip61D relative to the fixing portion631D can be changed in the second direction. Therefore, when the separating operation of the developing cartridge is performed, the casing and the first cover45D can move in the second direction in a state where the contact state between the electric contact surface611D of the IC chip61D and the electric connector is maintained.
Further, with the configuration depicted inFIGS.28through30, the plurality of pawls622D of the holder62D and the arm632D of the joint member63D are movable relative to each other in the first direction. Thus, when the developing cartridge is inserted into the drawer unit, the IC chip61D and the holder62D can move relative to the fixing portion631D in the first direction. Accordingly, the developing cartridge can be inserted while rubbing of the electric contact surface611D of the IC chip61D is suppressed. An elastic member, such as a coil spring that can be stretched or compressed in the first direction, may be positioned between the fixing portion631D of the joint member63D and the plurality of pawls622D. An elastic member, such as a coil spring that can be stretched or compressed in the first direction, may be positioned between the first cover45D and the plurality of pawls622D. Accordingly, a repulsion force of the elastic member allows the electric contact surface611D to reliably contact the electric connector. Further, the arm632D may be rotatably connected to the fixing portion631D or the first cover45D. For example, the arm632D may include one spherical portion at one end of the arm632D and another spherical portion at the other end of the arm632D. Either the one spherical portion or the other spherical portion may be rotatably held by a plurality of pawls of the first cover45D. In this manner, when both ends of the arm632D are rotatably connected, the position of the electric contact surface611D in the second direction may be changed more flexibly. 5-5. Fifth Modification In the following, a fifth modification of the main embodiment is discussed. Due to the many similarities between the fifth modification and the main embodiment, only differences between the main embodiment and the fifth modification will be discussed.
With regard to all other features, reference is made to the discussion of the main embodiment above.FIG.31is a partial perspective view of a developing cartridge1E of the fifth modification. In the embodiment shown inFIG.31, the holder62E holding the IC chip61E has the shape of a plate bent into a loop, with the ends of the plate connected to each other. The holder62E is made of a flexible resin, for example. Accordingly, in the embodiment shown inFIG.31, the holder62E itself is an elastic member which is stretched or compressed in the first direction. With this structure, a distance between both ends of the holder62E in the first direction is changeable. Accordingly, when the developing cartridge1E is inserted into the drawer unit, abrasion or wear of the electric contact surface611E of the IC chip61E can be suppressed. In the embodiment shown inFIG.31, since the holder62E itself stretches and compresses in the first direction, the holder62E is not necessarily configured by a plurality of members, and is not necessarily provided with an elastic member that is separate from the member holding the IC chip61E. FIG.32is an exploded perspective view showing a first cover45E and an IC chip assembly60E of the fifth modification. As shown inFIG.32, the first cover45E includes a boss451aE extending in the third direction and a boss451bE extending in the third direction. The boss451aE and the boss451bE are aligned in the second direction. The first cover45E also includes a connecting portion455E which connects a top of the boss451aE and a top of the boss451bE to each other. The holder62E extends in a ring shape surrounding the boss451aE and the boss451bE. One pawl623E positioned at one end of the holder62E and another pawl623E positioned at the other end of the holder62E are engaged with each other.
Accordingly, a through-hole621E is positioned at the inside of the holder62E and the through-hole621E penetrates through the holder62E in the third direction. The boss451aE and the boss451bE are positioned inside of the through-hole621E. The holder62E further includes a plate portion624E protruding from an inner surface of the holder62E toward the through-hole621E. The plate portion624E is inserted between the boss451aE and the boss451bE. The distance between the boss451aE and the boss451bE in the second direction is greater than the thickness of the plate portion624E in the second direction. Therefore, the holder62E is able to move, together with the plate portion624E, relative to the casing10E and the first cover45E in the second direction. When the holder62E moves in the second direction, the IC chip61E having the electric contact surface611E moves together with the holder62E in the second direction. The size of the through-hole621E in the first direction is greater than the size of each of the boss451aE and the boss451bE in the first direction. Therefore, the holder62E is movable with respect to the casing10E and the first cover45E in the first direction. When the holder62E moves in the first direction, the IC chip61E having the electric contact surface611E moves together with the holder62E in the first direction. When the developing cartridge1E is attached to the drawer unit, the holder62E is nipped by the guide plates of the drawer unit and the holder62E is compressed in the first direction. Specifically, when the one pawl623E and the other pawl623E approach each other, an urging force acting in a direction separating the one pawl623E and the other pawl623E from each other is generated. The electric contact surface611E of the IC chip61E is in contact with the electric connector in a state where the holder62E is elastically deformed.
The electric contact surface611E is fixed to the electric connector due to the urging force in a state where the electric contact surface611E is in contact with the electric connector. In the separating operation, the casing10E moves in the second direction in a state where the contact between the electric contact surface611E and the electric connector is maintained. With the above configuration, when the developing cartridge1E is attached to the drawer unit and the casing10E is inclined in the first direction, the boss451aE and the boss451bE are able to move in the first direction inside the through-hole621E. After the developing cartridge1E is attached to the drawer unit and the separating operation is performed, the boss451aE and the boss451bE are able to move in the second direction inside the through-hole621E. As a result, the position of the casing10E can be changed in a state where the contact condition between the electric contact surface611E and the electric connector is favorably maintained. The number of the bosses provided at the first cover45E may be one, two, three or more than three. 5-6. Sixth Modification In the following, a sixth modification of the main embodiment is discussed. Due to the many similarities between the sixth modification and the main embodiment, only differences between the main embodiment and the sixth modification will be discussed. With regard to all other features, reference is made to the discussion of the main embodiment above.FIG.33is a perspective view showing a developing cartridge1F and a drum cartridge80F of the sixth modification. The developing cartridge1F shown inFIG.33includes a casing10F, a developing roller30F, an IC chip assembly60F, and a first cover45F. In the embodiment shown inFIG.33, the developing cartridge1F is attached to the drum cartridge80F instead of the drawer unit. The drum cartridge80F includes one developing cartridge holding portion81F holding the developing cartridge1F.
The developing cartridge holding portion81F includes a photosensitive drum82F. When the developing cartridge1F is attached to the drum cartridge80F, the developing roller30F of the developing cartridge1F is in contact with the photosensitive drum82F. FIG.34is a view showing how to attach the drum cartridge80F to an image forming apparatus100F in a state where the developing cartridge1F is attached to the drum cartridge80F. As shown inFIG.34, the drum cartridge80F is attached to a drum cartridge holding portion101F provided in the image forming apparatus100F in a state where the developing cartridge1F is attached to the drum cartridge80F. In the above manner, a structure similar to those of the IC chip assemblies according to the above embodiment or the first to fifth modifications can be applied to the developing cartridge1F to be attached to the drum cartridge80F.FIG.35is an exploded perspective view showing a detail of the IC chip assembly60F of the developing cartridge1F. As shown inFIG.35, the IC chip assembly60F of the developing cartridge1F includes an IC chip61F as a storage medium and a holder62F holding the IC chip61F. The first cover45F holds the holder62F at a side of the casing10F in the third direction. The holder62F includes a first holder member71F, a second holder member72F, and a coil spring73F. The coil spring73F is an elastic member that can be stretched or compressed in the first direction. The first holder member71F includes a boss621aF, a boss621bF, and a boss621cF. The boss621aF extends in the third direction toward the first cover45F from a certain surface of the first holder member71F, and the certain surface faces the first cover45F. On the other hand, the first cover45F has a through-hole451F. The through-hole451F penetrates through the first cover45F in the third direction. The boss621aF is inserted through the through-hole451F.
Each of the boss621bF and the boss621cF extends in the third direction toward the casing10F from a certain surface of the first holder member71F, and the certain surface faces the casing10F. On the other hand, the casing10F includes a recessed portion15aF and a recessed portion15bF. Each of the recessed portion15aF and the recessed portion15bF is recessed from the first outer surface11F of the casing10F in the third direction. The boss621bF is inserted through the recessed portion15aF. The boss621cF is inserted through the recessed portion15bF. The through-hole451F has a size (inner dimension) in the second direction greater than a size (outer dimension) of the boss621aF in the second direction. The recessed portion15aF has a size (inner dimension) in the second direction greater than a size (outer dimension) of the boss621bF in the second direction. Further, the recessed portion15bF has a size (inner dimension) in the second direction greater than a size (outer dimension) of the boss621cF in the second direction. Hence, the holder62F can move in the second direction relative to the casing10F and the first cover45F, together with the bosses621aF,621bF, and621cF. As the holder62F moves in the second direction, the IC chip61F including the electric contact surface611F also moves in the second direction, together with the holder62F. The through-hole451F has a size (inner dimension) in the first direction greater than a size (outer dimension) of the boss621aF in the first direction. The recessed portion15aF has a size (inner dimension) in the first direction greater than a size (outer dimension) of the boss621bF in the first direction. Further, the recessed portion15bF has a size (inner dimension) in the first direction greater than a size (outer dimension) of the boss621cF in the first direction. Hence, the holder62F can move in the first direction relative to the casing10F and the first cover45F, together with the bosses621aF,621bF, and621cF.
As the holder62F moves in the first direction, the IC chip61F including the electric contact surface611F also moves in the first direction, together with the holder62F. As shown inFIG.34, the second holder member72F includes a recess portion625F. On the other hand, the drum cartridge80F includes a convex portion83F. The recess portion625F and the convex portion83F face each other in the first direction. The size of the recess portion625F gradually enlarges while progressing away from the IC chip61F in the first direction. The size of the convex portion83F gradually diminishes while progressing toward a top of the convex portion83F in the first direction. As shown inFIG.34, the image forming apparatus100F includes an electric connector102F. When the drum cartridge80F is inserted into the image forming apparatus100F in a state where the developing cartridge1F is attached to the drum cartridge80F, the first holder member71F is brought into contact with a component of the image forming apparatus100F. The convex portion83F of the drum cartridge80F is fitted in the recess portion625F of the second holder member72F. Therefore, the position of the second holder member72F relative to the drum cartridge80Fis fixed. As a result, the holder62F is nipped between the component of the image forming apparatus100F and the drum cartridge80F, whereby the coil spring73F is compressed in the first direction. When the drum cartridge80F is further inserted into the image forming apparatus100F, the electric contact surfaces611F of the IC chip61F are brought into contact with one or more of the electric connectors102F. The IC chip61F is brought into contact with the electric connector102F, while receiving a repulsion force from the coil spring73F. The holder62F is nipped between the electric connector102F and the convex portion83F. In this way, the holder62F is positioned relative to the image forming apparatus100F and the drum cartridge80F.
As shown inFIG.35, the second holder member72F includes a pawl714F. The pawl714F protrudes from the second holder member72F in a direction that crosses the first direction. In the example ofFIG.35, the pawl714F protrudes in the third direction from the second holder member72F. The first holder member71F has an opening721F. The pawl714F is inserted through the opening721F. This prevents the first holder member71F from being detached from the second holder member72F. The casing10F of the developing cartridge1F includes a first rib46F and a second rib55F. The first rib46F protrudes from the first outer surface11F in the third direction. The second rib55F protrudes from the second outer surface12F in the third direction. The drum cartridge80F includes a first lever84F and a second lever85F. During the separating operation, the first lever84F and second lever85F are operated by a driving force supplied from the image forming apparatus, whereupon the first rib46F is pushed by the first lever84F and the second rib55F is pushed by the second lever85F. This operation changes the positions of the first rib46F and second rib55F. As a result, the casing10F of the developing cartridge1F and the developing roller30F move in the second direction and move away from the photosensitive drum82F. As described above, also in the developing cartridge1F, the position of the holder62F can be changed in the second direction relative to the casing10F. Accordingly, the position of the casing10F in the second direction can be changed, while the position of the electric contact surface611F relative to the electric connector102F in the second direction is maintained, that is, unchanged. Therefore, it is possible to perform the separating operation, while maintaining the electric contact surface611F and electric connector102F in contact with each other.
Accordingly, abrasion or wear of the electric contact surface611F can be suppressed. Also in the developing cartridge1F, the electric contact surfaces611F are movable relative to the casing10F in the first direction. Accordingly, when the drum cartridge80F is attached to the image forming apparatus100F, abrasion or wear of the electric contact surface611F can be suppressed. 5-7. Other Modifications In the above-described embodiments, the IC chip including the electric contact surfaces is fixed to the outer surface of the holder. However, only the electric contact surfaces of the IC chip that serve to contact the electric connectors may be fixed to the outer surface of the holder, while portions of the IC chip other than the electric contact surfaces may be positioned at other portions of the developing cartridge. According to the above-described embodiments, the plural gears provided within each of the first gear portion and the second gear portion are engaged with one another through meshing engagement of the gear teeth. However, the plural gears provided within each of the first gear portion and the second gear portion may be engaged with one another through a frictional force. For example, instead of the plural gear teeth, frictional members, such as rubber members, may be provided on the outer circumferences of two gears that engage with each other. According to the above-described embodiments, the developing cartridge can be attached to the drawer unit of the image forming apparatus. However, the developing cartridge may be attached to an image forming apparatus which does not include the drawer unit. Shapes of the details in the developing cartridge may differ from those shown in the drawings attached to this application. The respective components employed in the above-described embodiment and modifications can be selectively combined together within an appropriate range so that no inconsistency will arise.
11860569

PREFERRED EMBODIMENTS OF THE INVENTION

The description will be made as to a developer supply container and a developer supplying system according to the present invention. In the following description, various structures of the developer supply container may be replaced with other known structures having similar functions within the scope of the concept of the invention unless otherwise stated. In other words, the present invention is not limited to the specific structures of the embodiments which will be described hereinafter, unless otherwise stated. Embodiment 1 First, basic structures of an image forming apparatus will be described, and then, a developer receiving apparatus and a developer supply container constituting a developer supplying system used in the image forming apparatus will be described. (Image Forming Apparatus) Referring toFIG.1, the description will be made as to a structure of a copying machine (electrophotographic image forming apparatus) of an electrophotographic type as an example of an image forming apparatus comprising a developer receiving apparatus to which a developer supply container (so-called toner cartridge) is detachably (removably) mounted. In the Figure, designated by100is a main assembly of the copying machine (main assembly of the image forming apparatus or main assembly of the apparatus). Designated by101is an original which is placed on an original supporting platen glass102. A light image corresponding to image information of the original is imaged on an electrophotographic photosensitive member104(photosensitive member) by way of a plurality of mirrors M of an optical portion103and a lens Ln, so that an electrostatic latent image is formed. The electrostatic latent image is visualized with toner (one component magnetic toner) as a developer (dry powder) by a dry type developing device (one component developing device)201.
In this embodiment, the one component magnetic toner is used as the developer to be supplied from a developer supply container1, but the present invention is not limited to the example and includes other examples which will be described hereinafter. Specifically, in the case that a one component developing device using the one component non-magnetic toner is employed, the one component non-magnetic toner is supplied as the developer. In addition, in the case that a two component developing device using a two component developer containing mixed magnetic carrier and non-magnetic toner is employed, the non-magnetic toner is supplied as the developer. In such a case, both of the non-magnetic toner and the magnetic carrier may be supplied as the developer. As described hereinbefore, the developing device201ofFIG.1develops, using the developer, the electrostatic latent image formed on the photosensitive member104as an image bearing member on the basis of image information of the original101. The developing device201is provided with a developing roller201fin addition to the developer hopper portion201a. The developer hopper portion201ais provided with a stirring member201cfor stirring the developer supplied from the developer supply container1. The developer stirred by the stirring member201cis fed to the feeding member201eby a feeding member201d. The developer having been fed by the feeding members201e,201bin the order named is supplied finally to a developing zone relative to the photosensitive member104while being carried on the developing roller201f. In this example, the toner as the developer is supplied from the developer supply container1to the developing device201, but another system may be used, and the toner and the carrier functioning as the developer may be supplied from the developer supply container1, for example.
Of the sheets S stacked in the cassettes105-108, an optimum cassette is selected on the basis of a sheet size of the original101or information inputted by the operator (user) from a liquid crystal operating portion of the copying machine. The recording material is not limited to a sheet of paper, but an OHP sheet or another material can be used as desired. One sheet S supplied by a separation and feeding device105A-108A is fed to registration rollers110along a feeding portion109, and is fed at a timing synchronized with rotation of a photosensitive member104and with scanning of an optical portion103. Designated by111,112are a transfer charger and a separation charger. An image of the developer formed on the photosensitive member104is transferred onto the sheet S by the transfer charger111. Thereafter, the sheet S fed by the feeding portion113is subjected to heat and pressure in a fixing portion114so that the developed image on the sheet is fixed, and then, in the case of a one-sided copy mode, passes through a discharging/reversing portion115and subsequently the sheet S is discharged to a discharging tray117by discharging rollers116. In the case of a two-sided copy mode, the trailing end thereof passes through a flapper118, the flapper118is controlled while the sheet is still nipped by the discharging rollers116, and the discharging rollers116are rotated reversely, so that the sheet S is refed into the apparatus. Then, the sheet S is fed to the registration rollers110by way of re-feeding portions119,120, and then conveyed along the path similarly to the case of the one-sided copy mode and is discharged to the discharging tray117. In the main assembly100of the apparatus, around the photosensitive member104, there are provided image forming process equipment such as a developing device201as the developing means, a cleaner portion202as a cleaning means, and a primary charger203as a charging means.
The developing device201develops the electrostatic latent image formed on the photosensitive member104by the optical portion103in accordance with image information of the original101, by depositing the developer onto the latent image. The primary charger203uniformly charges a surface of the photosensitive member for the purpose of forming a desired electrostatic image on the photosensitive member104. The cleaner portion202removes the developer remaining on the photosensitive member104. FIG.2shows an outer appearance of the image forming apparatus. When an exchange cover40, which is a part of an outer casing of the image forming apparatus, is opened, a part of a developer receiving apparatus8, which will be described hereinafter, is exposed. By inserting (mounting) the developer supply container1into the developer receiving apparatus8, the developer supply container1is set in the state capable of supplying the developer into the developer receiving apparatus8. On the other hand, when the operator exchanges the developer supply container1, the developer supply container1is taken out (disengaged) from the developer receiving apparatus8through the operation reciprocal to the mounting operation, and a new developer supply container1is set. Here, the exchange cover40is exclusively for mounting and demounting (exchange) of the developer supply container1, and is opened and closed for mounting and demounting the developer supply container1. For other maintenance operations for the main assembly of the apparatus100, a front cover100cis opened and closed. The exchange cover40and the front cover100cmay be made integral with each other, and in this case, the exchange of the developer supply container1and the maintenance of the main assembly of the apparatus100are carried out with opening and closing of the integral cover (unshown). (Developer Receiving Apparatus) Referring toFIGS.3and4, the developer receiving apparatus8will be described.
Part (a) ofFIG.3is a schematic perspective view of the developer receiving apparatus8, and part (b) ofFIG.3is a schematic sectional view of the developer receiving apparatus8. Part (a) ofFIG.4is a partial enlarged perspective view of the developer receiving apparatus8, part (b) ofFIG.4is a partial enlarged sectional view of the developer receiving apparatus8, and part (c) ofFIG.4is a perspective view of a developer receiving portion11. As shown in part (a) ofFIG.3, the developer receiving apparatus8is provided with a mounting portion (mounting space)8finto which the developer supply container1is removably (detachably) mounted. It is also provided with a developer receiving portion11for receiving the developer discharged through a discharge opening3a4(part (b) ofFIG.7) of the developer supply container1, which will be described hereinafter. The developer receiving portion11is mounted so as to be movable (displaceable) relative to the developer receiving apparatus8in the vertical direction. As shown in part (c) ofFIG.4, the developer receiving portion11is provided with a main assembly seal13having a developer receiving port11aat the central portion thereof. The main assembly seal13is made of an elastic member, a foam member or the like, and is close-contacted with an opening seal3a5(part (b) ofFIG.7) having a discharge opening3a4of the developer supply container1, by which the developer discharged through the discharge opening3a4is prevented from leaking out of a developer feeding path including the developer receiving port11a. In order to prevent the contamination in the mounting portion8fby the developer as much as possible, a diameter of the developer receiving port11ais desirably substantially the same as or slightly larger than a diameter of the discharge opening3a4of the developer supply container1.
This is because if the diameter of the developer receiving port11ais smaller than the diameter of the discharge opening3a4, the developer discharged from the developer supply container1is deposited on the upper surface of the main assembly seal13having the developer receiving port11a, and the deposited developer is transferred onto the lower surface of the developer supply container1during the dismounting operation of the developer supply container1, with the result of contamination with the developer. In addition, the developer transferred onto the developer supply container1may be scattered to the mounting portion8fwith the result of contamination of the mounting portion8fwith the developer. On the contrary, if the diameter of the developer receiving port11ais considerably larger than the diameter of the discharge opening3a4, an area in which the developer scattered from the developer receiving port11ais deposited around the discharge opening3a4formed in the opening seal3a5is large. That is, the contaminated area of the developer supply container1by the developer is large, which is not preferable. Under the circumstances, the difference between the diameter of the developer receiving port11aand the diameter of the discharge opening3a4is preferably substantially 0 to approx. 2 mm. In this example, the diameter of the discharge opening3a4of the developer supply container1is approx. Φ2 mm (pin hole), and therefore, the diameter of the developer receiving port11ais approx. φ3 mm. As shown in part (b) ofFIG.3, the developer receiving portion11is urged downwardly by an urging member12. When the developer receiving portion11moves upwardly, it has to move against an urging force of the urging member12. As shown in part (b) ofFIG.3, below the developer receiving apparatus8, there is provided a sub-hopper8cfor temporarily storing the developer.
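Purely as an illustrative aid and not as part of the patent disclosure, the dimensional guideline above (the receiving port11a should be no smaller than the discharge opening3a4, and should exceed it by no more than approx. 2 mm) can be expressed as a small check. The function name, parameter names, and default tolerance below are hypothetical and chosen only to mirror the numbers stated in the text.

```python
# Illustrative sketch only: encodes the dimensional guideline described in the
# text. All names and the default tolerance are hypothetical, not from the patent.

def port_matches_opening(port_dia_mm: float, opening_dia_mm: float,
                         max_diff_mm: float = 2.0) -> bool:
    """Return True if the receiving-port diameter is compatible with the
    discharge-opening diameter under the stated 0 to ~2 mm guideline."""
    diff = port_dia_mm - opening_dia_mm
    # The port must not be smaller than the opening (developer would pile up
    # on the main assembly seal), and must not exceed it by more than the
    # tolerance (an overly large port enlarges the contaminated area).
    return 0.0 <= diff <= max_diff_mm

# Example values from the text: opening approx. 2 mm, port approx. 3 mm.
print(port_matches_opening(3.0, 2.0))
```

With the text's example dimensions (port 3 mm, opening 2 mm) the difference is 1 mm, which falls inside the preferred range.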
In the sub-hopper8c, there are provided a feeding screw14for feeding the developer into the developer hopper portion201awhich is a part of the developing device201, and an opening8dwhich is in fluid communication with the developer hopper portion201a. As shown in part (b) ofFIG.13, the developer receiving port11ais closed so as to prevent foreign matter and/or dust from entering the sub-hopper8cin a state that the developer supply container1is not mounted. More specifically, the developer receiving port11ais closed by a main assembly shutter15in the state that the developer receiving portion11is in its lowered position. The developer receiving portion11moves upwardly (arrow E) from the position shown in part (b) ofFIG.13toward the developer supply container1. By this, as shown in part (b) ofFIG.15, the developer receiving port11aand the main assembly shutter15are spaced from each other so that the developer receiving port11ais open. In this open state, the developer is discharged from the developer supply container1through the discharge opening3a4, so that the developer received by the developer receiving port11ais movable to the sub-hopper8c. As shown in part (c) ofFIG.4, a side surface of the developer receiving portion11is provided with an engaging portion11b. The engaging portion11bis directly engaged with engaging portions3b2,3b4(FIG.8) provided on the developer supply container1which will be described hereinafter, and is guided thereby so that the developer receiving portion11is raised toward the developer supply container1. As shown in part (a) ofFIG.3, the mounting portion8fof the developer receiving apparatus8is provided with an insertion guide8efor guiding the developer supply container1in the mounting and demounting direction, and by the insertion guide8e, the mounting direction of the developer supply container1is made along the arrow A. The dismounting direction of the developer supply container1is opposite (arrow B) to the direction of the arrow A.
As shown in part (a) ofFIG.3, the developer receiving apparatus8is provided with a driving gear9functioning as a driving mechanism for driving the developer supply container1. The driving gear9receives a rotational force from a driving motor500through a driving gear train, and functions to apply a rotational force to the developer supply container1which is set in the mounting portion8f. As shown inFIGS.3and4, the driving motor500is controlled by a control device (CPU)600. (Developer Supply Container) Referring toFIG.5, the developer supply container1will be described. Part (a) ofFIG.5is a schematic exploded perspective view of the developer supply container1, and part (b) ofFIG.5is a schematic perspective view of the developer supply container1. In part (b) ofFIG.5, the cover7is partly broken away for better understanding. As shown in part (a) ofFIG.5, the developer supply container1mainly comprises a container body2, a flange portion3, a shutter4, a pump portion5, a reciprocating member6and the cover7. The developer supply container1is rotated about a rotational axis P shown in part (b) ofFIG.5in a direction of an arrow R in the developer receiving apparatus8, by which the developer is supplied into the developer receiving apparatus8. Each element of the developer supply container1will be described in detail. (Container Body) FIG.6is a perspective view of a container body. As shown inFIG.6, the container body (developer feeding chamber)2mainly comprises a developer accommodating portion2cfor accommodating the developer, and a helical feeding groove2a(feeding portion) for feeding the developer in the developer accommodating portion2cby rotation of the container body2about a rotational axis P in the direction of the arrow R. As shown inFIG.6, a cam groove2band a drive receiving portion2d(drive inputting portion) for receiving the drive from the main assembly side are formed integrally with the container body2, over the full circumference at one end portion of the container body2.
In this example, the cam groove2band the drive receiving portion2dare integrally formed with the container body2, but the cam groove2bor the drive receiving portion2dmay be formed as another member, and may be mounted to the container body2. In this example, the developer containing the toner having a volume average particle size of 5 μm-6 μm is accommodated in the developer accommodating portion2cof the container body2. In this example, the developer accommodating portion (developer accommodating space)2cis provided not only by the container body2but also by the inside space of the flange portion3and the pump portion5. (Flange Portion) Referring toFIG.5, the flange portion3will be described. As shown in part (b) ofFIG.5, the flange portion (developer discharging chamber)3is rotatable about the rotational axis P relative to the container body2, and when the developer supply container1is mounted to the developer receiving apparatus8, it is not rotatable in the direction of the arrow R relative to the mounting portion8f(part (a) ofFIG.3). In addition, it is provided with the discharge opening3a4(FIG.7). As shown in part (a) ofFIG.5, the flange portion3is divided into an upper flange portion3aand a lower flange portion3b, taking an assembling property into account, and the pump portion5, the reciprocating member6, the shutter4and the cover7are mounted thereto. As shown in part (a) ofFIG.5, the pump portion5is connected with one end portion side of the upper flange portion3aby screws, and the container body2is connected with the other end portion side through a sealing member (unshown). The pump portion5is sandwiched by the reciprocating member6, and engaging projections6b(FIG.11) of the reciprocating member6are fitted in the cam groove2bof the container body2. Furthermore, the shutter4is inserted into a gap between the upper flange portion3aand the lower flange portion3b.
For protection of the reciprocating member6and the pump portion5and for better outer appearance, the cover7is integrally provided so as to cover the entirety of the flange portion3, the pump portion5and the reciprocating member6. (Upper Flange Portion) FIG.7illustrates the upper flange portion3a. Part (a) ofFIG.7is a perspective view of the upper flange portion3aas seen obliquely from an upper portion, and part (b) ofFIG.7is a perspective view of the upper flange portion3aas seen obliquely from below. The upper flange portion3aincludes a pump connecting portion3a1(screw is not shown) shown in part (a) ofFIG.7to which the pump portion5is threaded, a container body connecting portion3a2shown in part (b) ofFIG.7to which the container body2is connected, and a storage portion3a3shown in part (a) ofFIG.7for storing the developer fed from the container body2. As shown in part (b) ofFIG.7, there are provided a circular discharge opening (opening)3a4for permitting discharging of the developer into the developer receiving apparatus8from the storage portion3a3, and an opening seal3a5forming a connecting portion3a6connecting with the developer receiving portion11provided in the developer receiving apparatus8. The opening seal3a5is stuck on the bottom surface of the upper flange portion3aby a double coated tape and is nipped by the shutter4, which will be described hereinafter, and the upper flange portion3ato prevent leakage of the developer through the discharge opening3a4. In this example, the discharge opening3a4is provided in the opening seal3a5, which is separate from the upper flange portion3a, but the discharge opening3a4may be provided directly in the upper flange portion3a. As described above, the diameter of the discharge opening3a4is approx.
2 mm for the purpose of minimizing the contamination with the developer which may be unintentionally discharged by the opening and closing of the shutter4in the mounting and demounting operation of the developer supply container1relative to the developer receiving apparatus8. In this example, the discharge opening3a4is provided in the lower surface of the developer supply container1, that is, the lower surface of the upper flange portion3a, but the connecting structure of this example can be accomplished as long as the discharge opening is fundamentally provided in a surface other than an upstream side end surface or a downstream side end surface with respect to the mounting and dismounting direction of the developer supply container1relative to the developer receiving apparatus8. The position of the discharge opening3a4may be properly selected taking the situation of the specific apparatus into account. A connecting operation between the developer supply container1and the developer receiving apparatus8in this example will be described hereinafter. (Lower Flange Portion) FIG.8shows the lower flange portion3b. Part (a) ofFIG.8is a perspective view of the lower flange portion3bas seen obliquely from an upper position, part (b) ofFIG.8is a perspective view of the lower flange portion3bas seen obliquely from a lower position, and part (c) ofFIG.8is a front view. As shown in part (a) ofFIG.8, the lower flange portion3bis provided with a shutter inserting portion3b1into which the shutter4(FIG.9) is inserted. The lower flange portion3bis provided with engaging portions3b2,3b4engageable with the developer receiving portion11(FIG.4). The engaging portions3b2,3b4displace the developer receiving portion11toward the developer supply container1with the mounting operation of the developer supply container1so that the connected state is established in which the developer supply from the developer supply container1to the developer receiving portion11is enabled.
The engaging portions3b2,3b4guide the developer receiving portion11to space away from the developer supply container1so that the connection between the developer supply container1and the developer receiving portion11is broken with the dismounting operation of the developer supply container1. A first engaging portion3b2of the engaging portions3b2,3b4displaces the developer receiving portion11in the direction crossing with the mounting direction of the developer supply container1for permitting an unsealing operation of the developer receiving portion11. In this example, the first engaging portion3b2displaces the developer receiving portion11toward the developer supply container1so that the developer receiving portion11is connected with the connecting portion3a6formed in a part of the opening seal3a5of the developer supply container1with the mounting operation of the developer supply container1. The first engaging portion3b2extends in the direction crossing with the mounting direction of the developer supply container1. The first engaging portion3b2effects a guiding operation so as to displace the developer receiving portion11in the direction crossing with the dismounting direction of the developer supply container1such that the developer receiving portion11is resealed with the dismounting operation of the developer supply container1. In this example, the first engaging portion3b2effects the guiding so that the developer receiving portion11is spaced away from the developer supply container1downwardly, so that the connection state between the developer receiving portion11and the connecting portion3a6of the developer supply container1is broken with the dismounting operation of the developer supply container1.
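The inclined first engaging portion3b2converts horizontal insertion travel of the container into vertical displacement of the developer receiving portion11. As a rough geometric sketch only (the approx. 40 degree inclination and the approx. 2 mm spacing are the values given elsewhere in this example; the helper name is ours):

```python
import math

def insertion_travel_mm(vertical_stroke_mm: float, incline_deg: float) -> float:
    """Horizontal travel along the mounting direction needed for an
    inclined engaging surface to lift the receiving portion by
    vertical_stroke_mm (simple right-triangle relation)."""
    return vertical_stroke_mm / math.tan(math.radians(incline_deg))

# Approx. 2 mm gap closed via the approx. 40 degree incline of this example.
travel = insertion_travel_mm(2.0, 40.0)  # roughly 2.4 mm of insertion travel

# A steeper incline shortens the required travel but raises the load per
# unit travel; a shallower one does the opposite (cf. the 10-50 degree
# range discussed in the text).
assert insertion_travel_mm(2.0, 50.0) < travel < insertion_travel_mm(2.0, 10.0)
```

This is only an illustrative restatement of the geometry; the patent itself specifies no formula.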
On the other hand, a second engaging portion3b4maintains the connection state between the opening seal3a5and a main assembly seal13during the developer supply container1moving relative to the shutter4which will be described hereinafter, that is, during the developer receiving port11amoving from the connecting portion3a6to the discharge opening3a4, so that the discharge opening3a4is brought into communication with a developer receiving port11aof the developer receiving portion11accompanying the mounting operation of the developer supply container1. The second engaging portion3b4extends in parallel with the mounting direction of the developer supply container1. The second engaging portion3b4maintains the connection between the main assembly seal13and the opening seal3a5during the developer supply container1moving relative to the shutter4, that is, during the developer receiving port11amoving from the discharge opening3a4to the connecting portion3a6, so that the discharge opening3a4is resealed accompanying the dismounting operation of the developer supply container1. A configuration of the first engaging portion3b2desirably includes an inclined surface (inclined portion) crossing the inserting direction of the developer supply container1, and it is not limited to the linear inclined surface as shown in part (a) ofFIG.8. The configuration of the first engaging portion3b2may be a curved and inclined surface as shown in part (a) ofFIG.18, for example. Furthermore, as shown in part (b) ofFIG.18, it may be stepped, including a parallel surface and an inclined surface. The configuration of the first engaging portion3b2is not limited to the configurations shown in part (a) or (b) ofFIGS.8and18, as long as it can displace the developer receiving portion11toward the discharge opening3a4, but a linear inclined surface is desirable from the standpoint of constant manipulating force required by the mounting and dismounting operation of the developer supply container1.
An inclination angle of the first engaging portion3b2relative to the mounting and dismounting direction of the developer supply container1is desirably approx. 10-50 degrees in view of the situation which will be described hereinafter. In this example, the angle is approx. 40 degrees. In addition, as shown in part (c) ofFIG.18, the first engaging portion3b2and the second engaging portion3b4may be unified to provide a uniformly linear inclined surface. In this case, with the mounting operation of the developer supply container1, the first engaging portion3b2displaces the developer receiving portion11in the direction crossing with the mounting direction of the developer supply container1to connect the main assembly seal13with the shield portion3b6. Thereafter, it displaces the developer receiving portion11while compressing the main assembly seal13and the opening seal3a5, until the developer receiving port11aand the discharge opening3a4are brought into fluid communication with each other. Here, when such a first engaging portion3b2is used, the developer supply container1always receives a force in the direction of B (part (a) ofFIG.16) by the relationship between the first engaging portion3b2and the engaging portion11bof the developer receiving portion11in the completed position of the mounting of the developer supply container1which will be described hereinafter. Therefore, the developer receiving apparatus8is required to have a holding mechanism for holding the developer supply container1in the mounting completed position, with the result of increase in cost and/or increase in the number of parts.
Therefore, from this standpoint, it is preferable that the developer supply container1is provided with the above-described second engaging portion3b4so that the force in the B direction is not applied to the developer supply container1in the mounting completed position, thus stabilizing the connection state between the main assembly seal13and the opening seal3a5. The first engaging portion3b2shown in part (c) ofFIG.18has a linear inclined surface, but, similarly to part (a) ofFIG.18or part (b) ofFIG.18, a curved or stepped configuration, for example, is usable, although the linear inclined surface is preferable from the standpoint of constant manipulating force in the mounting and dismounting operations of the developer supply container1, as described hereinbefore. The lower flange portion3bis provided with a regulation rib (regulating portion)3b3(part (a) ofFIG.3) for preventing or permitting an elastic deformation of a supporting portion4dof the shutter4which will be described hereinafter, with the mounting or dismounting operation of the developer supply container1relative to the developer receiving apparatus8. The regulation rib3b3protrudes upwardly from an insertion surface of the shutter inserting portion3b1and extends along the mounting direction of the developer supply container1. In addition, as shown in part (b) ofFIG.8, the protecting portion3b5is provided to protect the shutter4from damage during transportation and/or mishandling by the operator. The lower flange portion3bis made integral with the upper flange portion3ain the state in which the shutter4is inserted in the shutter inserting portion3b1. (Shutter) FIG.9shows the shutter4. Part (a) ofFIG.9is a top plan view of the shutter4, and part (b) ofFIG.9is a perspective view of the shutter4as seen obliquely from an upper position.
The shutter4is movable relative to the developer supply container1to open and close the discharge opening3a4with the mounting operation and the dismounting operation of the developer supply container1. The shutter4is provided with a developer sealing portion4afor preventing leakage of the developer through the discharge opening3a4when the developer supply container1is not mounted to the mounting portion8fof the developer receiving apparatus8, and a sliding surface4iwhich slides on the shutter inserting portion3b1of the lower flange portion3bon the rear side (back side) of the developer sealing portion4a. The shutter4is provided with stopper portions (holding portions)4b,4cheld by shutter stopper portions8a,8b(part (a) ofFIG.4) of the developer receiving apparatus8with the mounting and dismounting operations of the developer supply container1so that the developer supply container1moves relative to the shutter4. A first stopper portion4bof the stopper portions4b,4cengages with a first shutter stopper portion8aof the developer receiving apparatus8to fix the position of the shutter4relative to the developer receiving apparatus8at the time of the mounting operation of the developer supply container1. A second stopper portion4cengages with a second shutter stopper portion8bof the developer receiving apparatus8at the time of the dismounting operation of the developer supply container1. The shutter4is provided with a supporting portion4dso that the stopper portions4b,4care displaceable. The supporting portion4dextends from the developer sealing portion4aand is elastically deformable to displaceably support the first stopper portion4band the second stopper portion4c. The first stopper portion4bis inclined such that an angle α formed between the first stopper portion4band the supporting portion4dis acute. On the contrary, the second stopper portion4cis inclined such that an angle β formed between the second stopper portion4cand the supporting portion4dis obtuse.
The developer sealing portion4aof the shutter4is provided with a locking projection4eat a position downstream of the position opposing the discharge opening3a4with respect to the mounting direction when the developer supply container1is not mounted to the mounting portion8fof the developer receiving apparatus8. A contact amount of the locking projection4erelative to the opening seal3a5(part (b) ofFIG.7) is larger than that of the developer sealing portion4a, so that a static friction force between the shutter4and the opening seal3a5is large. Therefore, an unexpected movement (displacement) of the shutter4due to a vibration during the transportation or the like can be prevented. The entirety of the developer sealing portion4amay be given the same contact amount as the locking projection4erelative to the opening seal3a5, but in such a case, the dynamic friction force relative to the opening seal3a5at the time when the shutter4moves is large as compared with the case where the locking projection4eis provided, and therefore, a manipulating force required when the developer supply container1is mounted to the developer receiving apparatus8is large, which is not preferable from the standpoint of the usability. Therefore, it is desired to provide the locking projection4ein a part as in this example. (Pump Portion) FIG.10shows the pump portion5. Part (a) ofFIG.10is a perspective view of the pump portion5, and part (b) is a front view of the pump portion5. The pump portion5is operated by the driving force received by the drive receiving portion (drive inputting portion)2dso as to alternately produce a state in which the internal pressure of the developer accommodating portion2cis lower than the ambient pressure and a state in which it is higher than the ambient pressure.
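The alternating over- and under-pressure states produced by the volume change of the pump portion5can be illustrated with a rough isothermal (Boyle's law) estimate. This is a hypothetical numeric sketch for illustration only; the volume figures are our assumptions and do not appear in this example.

```python
# Rough isothermal estimate of the pressure swing produced by a
# bellow-like displacement pump (hypothetical numbers, not from the text).
AMBIENT_PA = 101_325.0  # ambient pressure [Pa]

def internal_pressure(v_rest_ml: float, v_now_ml: float,
                      p_ambient: float = AMBIENT_PA) -> float:
    """Boyle's law P1*V1 = P2*V2 at constant temperature.

    v_rest_ml: air volume communicating with the pump at rest,
    v_now_ml:  that volume after the pump expands or contracts.
    """
    return p_ambient * v_rest_ml / v_now_ml

# Assumed 480 ml of air space and a +/-15 ml pump stroke.
contracted = internal_pressure(480.0, 480.0 - 15.0)  # pump contracted
expanded = internal_pressure(480.0, 480.0 + 15.0)    # pump expanded

# Contraction pressurizes the container (developer pushed out through the
# discharge opening); expansion depressurizes it (air taken in), matching
# the alternating states described above.
assert contracted > AMBIENT_PA > expanded
```

The actual discharge behavior also depends on the developer's flowability and the opening diameter, which this isothermal sketch ignores.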
In this example, the pump portion5is provided as a part of the developer supply container1in order to discharge the developer stably from the small discharge opening3a4. The pump portion5is a displacement type pump in which the volume changes. More specifically, the pump includes a bellow-like expansion-and-contraction member. By the expanding-and-contracting operation of the pump portion5, the pressure in the developer supply container1is changed, and the developer is discharged using the pressure. More specifically, when the pump portion5is contracted, the inside of the developer supply container1is pressurized so that the developer is discharged through the discharge opening3a4. When the pump portion5expands, the inside of the developer supply container1is depressurized so that the air is taken in through the discharge opening3a4from the outside. By the taken-in air, the developer in the neighborhood of the discharge opening3a4and/or the storage portion3a3is loosened so as to make the subsequent discharging smooth. By repeating the expanding-and-contracting operation described above, the developer is discharged. As shown in part (b) ofFIG.10, the pump portion5of this example has the bellow-like expansion-and-contraction portion (bellow portion, expansion-and-contraction member)5ain which the crests and bottoms are periodically provided. The expansion-and-contraction portion5aexpands and contracts in the directions of arrows A and B. When the bellow-like pump portion5is used as in this example, a variation in the volume change amount relative to the amount of expansion and contraction can be reduced, and therefore, a stable volume change can be accomplished. In addition, in this example, the material of the pump portion5is a polypropylene resin material (PP), but this is not restrictive.
The material of the pump portion5may be any if it can provide the expansion and contraction function and can change the internal pressure of the developer accommodating portion by the volume change. Examples include thin-formed ABS (acrylonitrile-butadiene-styrene copolymer resin material), polystyrene, polyester and polyethylene materials. Alternatively, other expandable-and-contractable materials such as rubber are usable. In addition, as shown in part (a) ofFIG.10, the opening end side of the pump portion5is provided with a connecting portion5bconnectable with the upper flange portion3a. Here, the connecting portion5bis a screw. Furthermore, as shown in part (b) ofFIG.10, the other end portion side is provided with a reciprocating member engaging portion5cengaged with the reciprocating member6so as to displace in synchronism with the reciprocating member6which will be described hereinafter. (Reciprocating Member) FIG.11shows the reciprocating member6. Part (a) ofFIG.11is a perspective view of the reciprocating member6as seen obliquely from an upper position, and part (b) is a perspective view of the reciprocating member6as seen obliquely from a lower position. As shown in part (b) ofFIG.11, the reciprocating member6is provided with a pump engaging portion6aengaged with the reciprocating member engaging portion5cprovided on the pump portion5to change the volume of the pump portion5as described above. Furthermore, as shown in part (a) and part (b) ofFIG.11, the reciprocating member6is provided with the engaging projection6bfitted in the above-described cam groove2b(FIG.5) when the container is assembled. The engaging projection6bis provided at a free end portion of the arm6cextending from a neighborhood of the pump engaging portion6a. Rotational displacement of the reciprocating member6(arm6c) about the axis P (part (b) ofFIG.5) is prevented by a reciprocating member holding portion7b(FIG.12) of the cover7which will be described hereinafter.
Therefore, when the container body2receives the drive from the drive receiving portion2dand is rotated integrally with the cam groove2bby the driving gear9, the reciprocating member6reciprocates in the directions of arrows A and B by the function of the engaging projection6bfitted in the cam groove2band the reciprocating member holding portion7bof the cover7. Together with this operation, the pump portion5engaged through the pump engaging portion6aof the reciprocating member6and the reciprocating member engaging portion5cexpands and contracts in the directions of arrows A and B. (Cover) FIG.12shows the cover7. Part (a) ofFIG.12is a perspective view of the cover7as seen obliquely from an upper position, and part (b) is a perspective view of the cover7as seen obliquely from a lower position. The cover7is provided as shown in part (b) ofFIG.5in order to protect the reciprocating member6and/or the pump portion5and to improve the outer appearance. In more detail, as shown in part (b) ofFIG.5, the cover7is provided integrally with the upper flange portion3aand/or the lower flange portion3band so on by a mechanism (unshown) so as to cover the entirety of the flange portion3, the pump portion5and the reciprocating member6. In addition, the cover7is provided with a guide groove7ato be guided by the insertion guide8e(part (a) ofFIG.3) of the developer receiving apparatus8. In addition, the cover7is provided with a reciprocating member holding portion7bfor regulating a rotation displacement about the axis P (part (b) ofFIG.5) of the reciprocating member6as described above. (Mounting Operation of Developer Supply Container) Referring toFIGS.13,14,15,16and17in the order of operation, the mounting operation of the developer supply container1to the developer receiving apparatus8will be described in detail. Parts (a)-(d) ofFIGS.13-16show the neighborhood of the connecting portion between the developer supply container1and the developer receiving apparatus8.
Parts (a) ofFIGS.13-16are perspective views of a partial section, parts (b) are front views of the partial section, parts (c) are top plan views of parts (b), and parts (d) particularly show the relation between the lower flange portion3band the developer receiving portion11.FIG.17is a timing chart of the operations of the elements relating to the mounting operation of the developer supply container1to the developer receiving apparatus8as shown inFIGS.13-16. The mounting operation is the operation up to the state in which the developer can be supplied from the developer supply container1to the developer receiving apparatus8. FIG.13shows a connection starting position (first position) between the first engaging portion3b2of the developer supply container1and the engaging portion11bof the developer receiving portion11. As shown in part (a) ofFIG.13, the developer supply container1is inserted into the developer receiving apparatus8in the direction of an arrow A. First, as shown in part (c) ofFIG.13, the first stopper portion4bof the shutter4contacts the first shutter stopper portion8aof the developer receiving apparatus8, so that the position of the shutter4relative to the developer receiving apparatus8is fixed. In this state, the relative position between the lower flange portion3band the upper flange portion3aof the flange portion3and the shutter4remains unchanged, and therefore, the discharge opening3a4is sealed assuredly by the developer sealing portion4aof the shutter4. As shown in part (b) ofFIG.13, the connecting portion3a6of the opening seal3a5is shielded by the shutter4. As shown in part (c) ofFIG.13, the supporting portion4dof the shutter4is displaceable in the direction of arrows C and D, since the regulation rib3b3of the lower flange portion3bdoes not enter the supporting portion4d.
As has been described above, the first stopper portion4bis inclined such that the angle α (part (a) ofFIG.9) relative to the supporting portion4dis acute, and the first shutter stopper portion8ais also inclined, correspondingly. In this example, the inclination angle α is approx. 80 degrees. Therefore, when the developer supply container1is inserted further in the arrow A direction, the first stopper portion4breceives a reaction force in the arrow B direction from the first shutter stopper portion8a, so that the supporting portion4dis displaced in an arrow D direction. That is, the first stopper portion4bof the shutter4displaces in the direction of holding the engagement state with the first shutter stopper portion8aof the developer receiving apparatus8, and therefore, the position of the shutter4is held assuredly relative to the developer receiving apparatus8. In addition, as shown in part (d) ofFIG.13, the positional relation between the engaging portion11bof the developer receiving portion11and the first engaging portion3b2of the lower flange portion3bis such that they start engagement with each other. Therefore, the developer receiving portion11remains in the initial position in which it is spaced from the developer supply container1. More specifically, as shown in part (b) ofFIG.13, the developer receiving portion11is spaced from the connecting portion3a6formed on a part of the opening seal3a5. As shown in part (b) ofFIG.13, the developer receiving port11ais in the sealed state by the main assembly shutter15. In addition, the driving gear9of the developer receiving apparatus8and the drive receiving portion2dof the developer supply container1are not connected with each other, that is, in the non-transmission state. In this example, the distance between the developer receiving portion11and the developer supply container1is approx. 2 mm. When the distance is too small, not more than approx. 
1.5 mm, for example, the developer deposited on the surface of the main assembly seal13provided on the developer receiving portion11may be scattered by air flow produced locally by the mounting and dismounting operation of the developer supply container1, and the scattered developer may be deposited on the lower surface of the developer supply container1. On the other hand, when the distance is too large, a stroke required to displace the developer receiving portion11from the spacing position to the connected position is large with the result of upsizing of the image forming apparatus. Alternatively, the inclination angle of the first engaging portion3b2of the lower flange portion3bbecomes steep relative to the mounting and dismounting direction of the developer supply container1with the result of an increase of the load required to displace the developer receiving portion11. Therefore, the distance between the developer supply container1and the developer receiving portion11is properly determined taking the specifications of the main assembly or the like into account. As described above, in this example, the inclination angle of the first engaging portion3b2relative to the mounting and dismounting direction of the developer supply container1is approx. 40 degrees. The same applies to the following embodiments. Then, as shown in part (a) ofFIG.14, the developer supply container1is further inserted in the direction of the arrow A. As shown in part (c) ofFIG.14, the developer supply container1moves relative to the shutter4in the direction of the arrow A, since the position of the shutter4is held relative to the developer receiving apparatus8. At this time, as shown in part (b) ofFIG.14, a part of the connecting portion3a6of the opening seal3a5is exposed from the shutter4.
Further, as shown in part (d) ofFIG.14, the first engaging portion3b2of the lower flange portion3bdirectly engages with the engaging portion11bof the developer receiving portion11so that the engaging portion11bis displaced in the direction of the arrow E by the first engaging portion3b2. Therefore, the developer receiving portion11is displaced in the direction of the arrow E against the urging force of the urging member12(arrow F) to the position shown in part (b) ofFIG.14, so that the developer receiving port11ais spaced from the main assembly shutter15, thus starting to be unsealed. Here, in the position ofFIG.14, the developer receiving port11aand the connecting portion3a6are spaced from each other. Further, as shown in part (c) ofFIG.14, the regulation rib3b3of the lower flange portion3benters the supporting portion4dof the shutter4, so that the supporting portion4dcan not displace in the direction of arrow C or arrow D. That is, the elastic deformation of the supporting portion4dis limited by the regulation rib3b3. Then, as shown in part (a) ofFIG.15, the developer supply container1is further inserted in the direction of the arrow A. Then, as shown in part (c) ofFIG.15, the developer supply container1moves relative to the shutter4in the direction of the arrow A, since the position of the shutter4is held relative to the developer receiving apparatus8. At this time, the connecting portion3a6formed on the part of the opening seal3a5is completely exposed from the shutter4. In addition, the discharge opening3a4is not exposed from the shutter4, so that it is still sealed by the developer sealing portion4a. Furthermore, as described hereinbefore, the regulation rib3b3of the lower flange portion3benters the supporting portion4dof the shutter4, by which the supporting portion4dcan not displace in the direction of arrow C or arrow D.
At this time, as shown in part (d) ofFIG.15, the directly engaged engaging portion11bof the developer receiving portion11reaches the upper end side of the first engaging portion3b2. The developer receiving portion11is displaced in the direction of the arrow E against the urging force (arrow F) of the urging member12, to the position shown in part (b) ofFIG.15, so that the developer receiving port11ais completely spaced from the main assembly shutter15to be unsealed. At this time, the connection is established in the state that the main assembly seal13having the developer receiving port11ais close-contacted to the connecting portion3a6of the opening seal3a5. In other words, by the developer receiving portion11directly engaging with the first engaging portion3b2of the developer supply container1, the developer supply container1can be accessed by the developer receiving portion11from the lower side in the vertical direction which crosses the mounting direction. Thus, the above-described structure can avoid the developer contamination at the end surface Y (part (b) ofFIG.5) on the downstream side with respect to the mounting direction of the developer supply container1, the contamination having been produced in the conventional structure in which the developer receiving portion11accesses the developer supply container1in the mounting direction. The conventional structure will be described hereinafter. Subsequently, as shown in part (a) ofFIG.16, when the developer supply container1is further inserted in the direction of the arrow A to the developer receiving apparatus8, the developer supply container1moves relative to the shutter4in the direction of the arrow A similarly to the foregoing, up to a supply position (second position). In this position, the driving gear9and the drive receiving portion2dare connected with each other. By the driving gear9rotating in the direction of an arrow Q, the container body2is rotated in the direction of the arrow R.
As a result, the pump portion5is reciprocated by the reciprocation of the reciprocating member6in interrelation with the rotation of the container body2. Therefore, the developer in the developer accommodating portion2cis supplied into the sub-hopper8cfrom the storage portion3a3through the discharge opening3a4and the developer receiving port11aby the reciprocation of the pump portion5described above. In addition, as shown in part (d) ofFIG.16, when the developer supply container1reaches the supply position relative to the developer receiving apparatus8, the engaging portion11bof the developer receiving portion11is engaged with the second engaging portion3b4by way of the engaging relation with the first engaging portion3b2of the lower flange portion3b. And, the engaging portion11bis brought into the state of being urged against the second engaging portion3b4by the urging force of the urging member12in the direction of the arrow F. Therefore, the position of the developer receiving portion11in the vertical direction is stably maintained. Furthermore, as shown in part (b) ofFIG.16, the discharge opening3a4is unsealed by the shutter4, and the discharge opening3a4and the developer receiving port11aare brought into fluid communication with each other. At this time, the developer receiving port11aslides on the opening seal3a5to communicate with the discharge opening3a4while keeping the close-contact state between the main assembly seal13and the connecting portion3a6formed on the opening seal3a5. Therefore, the amount of the developer falling from the discharge opening3a4and scattering to positions other than the developer receiving port11ais small. Thus, the contamination of the developer receiving apparatus8by the scattering of the developer is less.
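As a recap of the mounting operation ofFIGS.13-16, the sequence of states can be summarized as an ordered list. This is only an illustrative restatement of the figures as described in the text; the field names are ours.

```python
# Illustrative restatement of the mounting sequence of FIGS. 13-16
# (field names are ours; states paraphrase the description).
MOUNTING_SEQUENCE = [
    # (stage, discharge opening 3a4, receiving port 11a, drive 9/2d)
    ("FIG.13 first position",  "sealed by shutter 4",
     "sealed by main assembly shutter 15", "disconnected"),
    ("FIG.14 engagement",      "sealed by shutter 4",
     "starting to be unsealed", "disconnected"),
    ("FIG.15 connection",      "sealed by shutter 4",
     "unsealed, seal 13 on connecting portion 3a6", "disconnected"),
    ("FIG.16 supply position", "unsealed, communicates with 11a",
     "unsealed", "connected"),
]

# The discharge opening is opened only at the final (supply) position,
# after the receiving port is already connected, which is why scattering
# of the developer is suppressed.
assert all(s[1].startswith("sealed") for s in MOUNTING_SEQUENCE[:-1])
assert MOUNTING_SEQUENCE[-1][3] == "connected"
```

The dismounting operation described next traverses the same states in reverse order.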
(Dismounting Operation of Developer Supply Container) Referring mainly toFIGS.13-16and17, the operation of dismounting of the developer supply container1from the developer receiving apparatus8will be described.FIG.17is a timing chart of the operations of the elements relating to the dismounting operation of the developer supply container1from the developer receiving apparatus8as shown inFIGS.13-16. The dismounting operation of the developer supply container1is the reverse of the above-described mounting operation. Thus, the developer supply container1is dismounted from the developer receiving apparatus8in the order fromFIG.16toFIG.13. The dismounting operation (removing operation) is the operation up to the state in which the developer supply container1can be taken out of the developer receiving apparatus8. When the amount of the developer in the developer supply container1placed in the supply position shown inFIG.16decreases, a message prompting exchange of the developer supply container1is displayed on the display (unshown) provided in the main assembly of the image forming apparatus100(FIG.1). The operator prepares a new developer supply container1, opens the exchange cover40provided in the main assembly of the image forming apparatus100shown inFIG.2, and extracts the developer supply container1in the direction of the arrow B shown in part (a) ofFIG.16. In this process, as described hereinbefore, the supporting portion4dof the shutter4can not displace in the direction of arrow C or arrow D by the limitation of the regulation rib3b3of the lower flange portion3b. Therefore, as shown in part (a) ofFIG.16, when the developer supply container1tends to move in the direction of the arrow B with the dismounting operation, the second stopper portion4cof the shutter4abuts to the second shutter stopper portion8bof the developer receiving apparatus8, so that the shutter4does not displace in the direction of the arrow B.
In other words, the developer supply container1moves relative to the shutter4. Thereafter, when the developer supply container1is drawn to the position shown inFIG.15, the shutter4seals the discharge opening3a4as shown in part (b) ofFIG.15. Further, as shown in part (d) ofFIG.15, the engaging portion11bof the developer receiving portion11displaces to the downstream lateral edge of the first engaging portion3b2from the second engaging portion3b4of the lower flange portion3bwith respect to the dismounting direction. As shown in part (b) ofFIG.15, the main assembly seal13of the developer receiving portion11slides on the opening seal3a5from the discharge opening3a4of the opening seal3a5to the connecting portion3a6, and maintains the connection state with the connecting portion3a6. Similarly to the foregoing, as shown in part (c) ofFIG.15, the supporting portion4dis in engagement with the regulation rib3b3, so that it can not displace in the direction of the arrow B in the Figure. Thus, when the developer supply container1is taken out from the position ofFIG.15to the position ofFIG.13, the developer supply container1moves relative to the shutter4, since the shutter4can not displace relative to the developer receiving apparatus8. Subsequently, the developer supply container1is drawn from the developer receiving apparatus8to the position shown in part (a) ofFIG.14. Then, as shown in part (d) ofFIG.14, the engaging portion11bslides down on the first engaging portion3b2to the position of the generally middle point of the first engaging portion3b2by the urging force of the urging member12. Therefore, the main assembly seal13provided on the developer receiving portion11is spaced downwardly from the connecting portion3a6of the opening seal3a5, thus releasing the connection between the developer receiving portion11and the developer supply container1.
At this time, the developer is deposited substantially on the connecting portion 3a6 of the opening seal 3a5 with which the developer receiving portion 11 has been connected. Subsequently, the developer supply container 1 is drawn from the developer receiving apparatus 8 to the position shown in part (a) of FIG. 13. Then, as shown in part (d) of FIG. 13, the engaging portion 11b slides down on the first engaging portion 3b2 to reach the upstream lateral edge of the first engaging portion 3b2 with respect to the dismounting direction, by the urging force of the urging member 12. Therefore, the developer receiving port 11a of the developer receiving portion 11 released from the developer supply container 1 is sealed by the main assembly shutter 15. By this, foreign matter or the like is prevented from entering through the developer receiving port 11a, and the developer in the sub-hopper 8c (FIG. 4) is prevented from scattering through the developer receiving port 11a. The shutter 4 displaces to the connecting portion 3a6 of the opening seal 3a5 with which the main assembly seal 13 of the developer receiving portion 11 has been connected, so as to shield the connecting portion 3a6 on which the developer is deposited. Further, with the above-described dismounting operation of the developer supply container 1, the developer receiving portion 11 is guided by the first engaging portion 3b2, and after the completion of the spacing operation from the developer supply container 1, the supporting portion 4d of the shutter 4 is disengaged from the regulation rib 3b3 so as to be elastically deformable. The configurations of the regulation rib 3b3 and/or the supporting portion 4d are properly selected so that the position where the engaging relation is released is substantially the same as the position which the shutter 4 takes when the developer supply container 1 is not mounted to the developer receiving apparatus 8.
Therefore, when the developer supply container 1 is further drawn in the direction of the arrow B shown in part (a) of FIG. 13, the second stopper portion 4c of the shutter 4 abuts against the second shutter stopper portion 8b of the developer receiving apparatus 8, as shown in part (c) of FIG. 13. By this, the second stopper portion 4c of the shutter 4 displaces (elastically deforms) in the direction of arrow C along a taper surface of the second shutter stopper portion 8b, so that the shutter 4 becomes displaceable in the direction of the arrow B relative to the developer receiving apparatus 8 together with the developer supply container 1. That is, when the developer supply container 1 is completely taken out of the developer receiving apparatus 8, the shutter 4 returns to the position taken when the developer supply container 1 is not mounted to the developer receiving apparatus 8. Therefore, the discharge opening 3a4 is assuredly sealed by the shutter 4, and therefore, the developer is not scattered from the developer supply container 1 dismounted from the developer receiving apparatus 8. Even when the developer supply container 1 is mounted to the developer receiving apparatus 8 again, it can be mounted without any problem. FIG. 17 shows the flow of the mounting operation of the developer supply container 1 to the developer receiving apparatus 8 (FIGS. 13-16) and the flow of the dismounting operation of the developer supply container 1 from the developer receiving apparatus 8. When the developer supply container 1 is mounted to the developer receiving apparatus 8, the engaging portion 11b of the developer receiving portion 11 is engaged with the first engaging portion 3b2 of the developer supply container 1, by which the developer receiving port displaces toward the developer supply container.
On the other hand, when the developer supply container 1 is dismounted from the developer receiving apparatus 8, the engaging portion 11b of the developer receiving portion 11 engages with the first engaging portion 3b2 of the developer supply container 1, by which the developer receiving port displaces away from the developer supply container. As described in the foregoing, according to this example, the mechanism for connecting and spacing the developer receiving portion 11 relative to the developer supply container 1 by displacement of the developer receiving portion 11 can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, complication of the structure on the image forming apparatus side and/or an increase in cost due to an increase in the number of parts can be avoided. In a conventional structure, a large space is required to avoid interference with the developing device in the upward and downward movement, but according to this example, such a large space is unnecessary, so that upsizing of the image forming apparatus can be avoided. The connection between the developer supply container 1 and the developer receiving apparatus 8 can be properly established using the mounting operation of the developer supply container 1, with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container 1, the spacing and resealing between the developer supply container 1 and the developer receiving apparatus 8 can be carried out with minimum contamination with the developer.
The developer supply container 1 of this example can cause the developer receiving portion 11 to connect by upward displacement and to space by downward displacement, in the direction crossing the mounting direction of the developer supply container 1, using the engaging portions 3b2, 3b4 of the lower flange portion 3b with the mounting and demounting operations to and from the developer receiving apparatus 8. The developer receiving portion 11 is sufficiently small relative to the developer supply container 1, and therefore, the developer contamination of the downstream side end surface Y (part (b) of FIG. 5) of the developer supply container 1 with respect to the mounting direction can be suppressed with a simple and space-saving structure. In addition, the developer contamination is suppressed in that the main assembly seal 13 slides on the protecting portion 3b5 of the lower flange portion 3b and on the sliding surface (lower surface of the shutter) 4i. Furthermore, according to this example, after the developer receiving portion 11 is connected to the developer supply container 1 with the mounting operation of the developer supply container 1 to the developer receiving apparatus 8, the discharge opening 3a4 is exposed from the shutter 4 so that the discharge opening 3a4 and the developer receiving port 11a can be brought into communication with each other. In other words, the timing of each step is controlled by the engaging portions 3b2, 3b4 of the developer supply container 1, and therefore, the scattering of the developer can be suppressed assuredly with a simple and easy structure, without being influenced by the manner of operation by the operator. In addition, after the discharge opening 3a4 is sealed and the developer receiving portion 11 is spaced from the developer supply container 1 with the dismounting operation of the developer supply container 1 from the developer receiving apparatus 8, the shutter 4 can shield the developer deposition portion of the opening seal 3a5.
In other words, the timing of each step in the dismounting operation can be controlled by the engaging portions 3b2 and 3b4 of the developer supply container 1, and therefore, the scattering of the developer can be suppressed, and the developer deposition portion can be prevented from being exposed to the outside. In the prior-art structure, the connection relation between the connecting portion and the connected portion is established indirectly through another mechanism, and therefore, it is difficult to control the connection relation with high precision. However, in this example, the connection relation can be established by the direct engagement between the connecting portion (developer receiving portion 11) and the connected portion (developer supply container 1). More specifically, the timing of the connection between the developer receiving portion 11 and the developer supply container 1 can be controlled easily by the positional relation, in the mounting direction, among the engaging portion 11b of the developer receiving portion 11, the first and second engaging portions 3b2 and 3b4 of the lower flange portion 3b of the developer supply container 1, and the discharge opening 3a4. In other words, the timing can deviate only within the tolerances of the three elements, and therefore, very high accuracy control can be performed. Therefore, the connecting operation of the developer receiving portion 11 to the developer supply container 1 and the spacing operation from the developer supply container 1 can be carried out assuredly, with the mounting operation and the dismounting operation of the developer supply container 1. The displacement amount of the developer receiving portion 11 in the direction crossing the mounting direction of the developer supply container 1 can be controlled by the positions of the engaging portion 11b of the developer receiving portion 11 and the second engaging portion 3b4 of the lower flange portion 3b.
Similarly to the foregoing, the displacement amount can deviate only within the tolerances of the two elements, and therefore, very high accuracy control can be performed. Therefore, for example, the close-contact state (amount of sealing compression or the like) between the main assembly seal 13 and the discharge opening 3a4 can be controlled easily, so that the developer discharged from the discharge opening 3a4 can be fed into the developer receiving port 11a assuredly.

Embodiment 2

Referring to FIG. 19 through FIG. 32, Embodiment 2 will be described. Embodiment 2 is partly different from Embodiment 1 in the configurations and structures of the developer receiving portion 11, the shutter 4, and the lower flange portion 3b, and the mounting and demounting operations of the developer supply container 1 to and from the developer receiving apparatus 8 are correspondingly partly different. The other structures are substantially the same as in Embodiment 1. In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted.

(Developer Receiving Portion)

FIG. 19 shows the developer receiving portion 11 of Embodiment 2. Part (a) of FIG. 19 is a perspective view of the developer receiving portion 11, and part (b) of FIG. 19 is a sectional view of the developer receiving portion 11. As shown in part (a) of FIG. 19, the developer receiving portion 11 of Embodiment 2 is provided with a tapered portion 11c for misalignment prevention at the end portion of the downstream side with respect to the connecting direction to the developer supply container 1, and the end surface continuing from the tapered portion 11c is substantially annular. The misalignment prevention tapered portion 11c is engaged with a misalignment prevention taper engaging portion 4g (FIG. 21) provided on the shutter 4, as will be described hereinafter.
The misalignment prevention tapered portion 11c is provided in order to prevent a misalignment between the developer receiving port 11a and a shutter opening 4f (FIG. 21) of the shutter 4 due to vibration from a driving source inside the image forming apparatus and/or deformation of a part. The details of the engaging relation (contact relation) between the misalignment prevention tapered portion 11c and the misalignment prevention taper engaging portion 4g will be described hereinafter. The material and/or configuration and dimensions of the main assembly seal 13, such as the width and/or height or the like, are properly selected so that leakage of the developer can be prevented in relation to the configuration of a close-contact portion 4h which is provided around the shutter opening 4f of the shutter 4, which will be described hereinafter, and to which the main assembly seal 13 is connected with the mounting operation of the developer supply container 1.

(Lower Flange)

FIG. 20 shows the lower flange portion 3b in Embodiment 2. Part (a) of FIG. 20 is a perspective view (upward direction) of the lower flange portion 3b, and part (b) of FIG. 20 is a perspective view (downward direction) of the lower flange portion 3b. The lower flange portion 3b in this embodiment is provided with a shielding portion 3b6 for shielding the shutter opening 4f, which will be described hereinafter, when the developer supply container 1 is not mounted to the developer receiving apparatus 8. The provision of the shielding portion 3b6 is a difference from the above-described lower flange portion 3b of Embodiment 1. In this embodiment, the shielding portion 3b6 is provided in the downstream side of the lower flange portion 3b with respect to the mounting direction of the developer supply container 1. Also in this example, similarly to the above-described embodiment, the lower flange portion 3b is provided with engaging portions 3b2 and 3b4 engageable with an engaging portion 11b (FIG. 19) of the developer receiving portion 11, as shown in FIG. 20.
In this example, of the engaging portions 3b2 and 3b4, the first engaging portion 3b2 displaces the developer receiving portion 11 toward the developer supply container 1 with the mounting operation of the developer supply container 1, so that the main assembly seal 13 provided in the developer receiving portion 11 is connected with the shutter 4, which will be described hereinafter. The first engaging portion 3b2 displaces the developer receiving portion 11 toward the developer supply container 1 with the mounting operation of the developer supply container 1 so that the developer receiving port 11a formed in the developer receiving portion 11 is connected with the shutter opening (communication port) 4f. In addition, the first engaging portion 3b2 guides the developer receiving portion 11 away from the developer supply container 1 with the dismounting operation of the developer supply container 1, so that the connection state between the developer receiving portion 11 and the shutter opening 4f of the shutter 4 is broken. On the other hand, the second engaging portion 3b4 holds the connected state between the shutter 4 and the main assembly seal 13 of the developer receiving portion 11 during the movement of the developer supply container 1 relative to the shutter 4 with the mounting operation of the developer supply container 1, so that the discharge opening 3a4 is brought into fluid communication with the developer receiving port 11a of the developer receiving portion 11. The second engaging portion 3b4 maintains the connected state between the developer receiving port 11a and the shutter opening 4f during the movement of the lower flange portion 3b relative to the shutter 4 with the mounting operation of the developer supply container 1, so that the discharge opening 3a4 is brought into fluid communication with the shutter opening 4f.
In addition, the second engaging portion 3b4 holds the connected state between the developer receiving portion 11 and the shutter 4 during the movement of the developer supply container 1 relative to the shutter 4 with the dismounting operation of the developer supply container 1, so that the discharge opening 3a4 is resealed.

(Shutter)

FIG. 21 - FIG. 25 show the shutter 4 in Embodiment 2. Part (a) of FIG. 21 is a perspective view of the shutter 4, part (b) of FIG. 21 illustrates a modified example 1 of the shutter 4, part (c) of FIG. 21 illustrates a connection relation between the shutter 4 and the developer receiving portion 11, and part (d) of FIG. 21 is an illustration similar to part (c) of FIG. 21. As shown in part (a) of FIG. 21, the shutter 4 of Embodiment 2 is provided with the shutter opening (communication port) 4f communicatable with the discharge opening 3a4. Further, the shutter 4 is provided with a close-contact portion (projected portion, projection) 4h surrounding the outside of the shutter opening 4f, and the misalignment prevention taper engaging portion 4g further outside the close-contact portion 4h. The close-contact portion 4h has a projection height such that it is lower than a sliding surface 4i of the shutter 4, and the diameter of the shutter opening 4f is approx. Φ2 mm. The size is selected for the same reason as with Embodiment 1, and therefore, the explanation is omitted for simplicity. The shutter 4 is provided with a recess at a substantially central portion with respect to the longitudinal direction of the shutter 4, as a retraction space for the supporting portion 4d at the time when the supporting portion 4d of the shutter 4 displaces in the direction C (part (c) of FIG. 26) with the dismounting operation.
The gap between the recessed configuration and the supporting portion 4d is larger than the amount of overlapping between the first stopper portion 4b and a first shutter stopper portion 8a of the developer receiving apparatus 8, so that the shutter 4 can be engaged with and disengaged from the developer receiving apparatus 8 smoothly. Referring to FIG. 22 - FIG. 24, the configuration of the shutter 4 will be described. Part (a) of FIG. 22 shows the position (the same position as in FIG. 27), which will be described hereinafter, where the developer supply container 1 is engaged with the developer receiving apparatus 8, and part (b) of FIG. 22 shows the position (the same position as in FIG. 31) where the developer supply container 1 is completely mounted to the developer receiving apparatus 8. As shown in FIG. 22, a length D2 of the supporting portion 4d is set so as to be not smaller than a displacement amount D1 of the developer supply container 1 in the mounting operation of the developer supply container 1 (D1 ≤ D2). The displacement amount D1 is the amount of displacement of the developer supply container 1 relative to the shutter 4 in the mounting operation of the developer supply container 1. That is, it is the displacement amount of the developer supply container 1 in the state (part (a) of FIG. 22) in which the stopper portions (holding portions) 4b and 4c of the shutter 4 are in engagement with the shutter stopper portions 8a and 8b of the developer receiving apparatus 8. With such a structure, the interference between the regulation rib 3b3 of the lower flange 3b and the supporting portion 4d of the shutter 4 in the process of mounting of the developer supply container 1 can be reduced. On the other hand, for the case in which D2 is smaller than D1, the supporting portion 4d of the shutter 4 may be provided with a regulation projection (projection) 4k positively engageable with the regulation rib 3b3, as shown in FIG. 23, to prevent the interference between the supporting portion 4d and the regulation rib 3b3.
With such a structure, the developer supply container 1 can be mounted to the developer receiving apparatus 8 irrespective of the size relation between the displacement amount D1 in the mounting operation of the developer supply container 1 and the length D2 of the supporting portion 4d of the shutter 4. On the other hand, when the structure shown in FIG. 23 is used, the size of the developer supply container 1 is larger by the height D4 of the regulation projection 4k. FIG. 23 is a perspective view of the shutter 4 for the developer supply container 1 when D1 > D2. Therefore, if the position of the developer receiving apparatus 8 in the main assembly of the image forming apparatus 100 is the same, the cross-sectional area is larger by S than that of the developer supply container 1 of this embodiment, as shown in FIG. 24, and therefore, a correspondingly larger space is required. The foregoing applies to the above-described Embodiment 1 and to the embodiments described hereinafter. Part (b) of FIG. 21 shows a modified example 1 of the shutter 4, in which the misalignment prevention taper engaging portion 4g is divided into a plurality of parts, as is different from the shutter 4 of this embodiment. In the other respects, substantially the equivalent performance is provided. Referring to part (c) of FIG. 21 and part (d) of FIG. 21, the engaging relation between the shutter 4 and the developer receiving portion 11 will be described. Part (c) of FIG. 21 shows the engaging relation between the misalignment prevention taper engaging portion 4g of the shutter 4 and the misalignment prevention tapered portion 11c of the developer receiving portion 11 in Embodiment 2. As shown in part (c) of FIG. 21 and part (d) of FIG. 21, the distances of the corner lines constituting the close-contact portion 4h and the misalignment prevention taper engaging portion 4g of the shutter 4 from a center R of the shutter opening 4f (part (a) of FIG. 21) are L1, L2, L3, L4.
Similarly, as shown in part (c) of FIG. 21, the distances of the corner lines constituting the misalignment prevention tapered portion 11c of the developer receiving portion 11 from the center R of the developer receiving port 11a (FIG. 19) are M1, M2, M3. The positions of the centers of the shutter opening 4f and the developer receiving port 11a are set to be aligned with each other. In this embodiment, the positions of the corner lines are selected to satisfy L1 < L2 < M1 < L3 < M2 < L4 < M3. As shown in part (c) of FIG. 21, the corner line at the distance M2 from the center R of the developer receiving port 11a of the developer receiving portion 11 abuts against the misalignment prevention taper engaging portion 4g of the shutter 4. Therefore, even if the positional relation between the shutter 4 and the developer receiving portion 11 is deviated more or less due to vibration from the driving source of the main assembly of the apparatus and/or part accuracies, the misalignment prevention taper engaging portion 4g and the misalignment prevention tapered portion 11c are guided by the tapered surfaces to align with each other. Therefore, the deviation between the center axes of the shutter opening 4f and the developer receiving port 11a can be suppressed. Similarly, part (d) of FIG. 21 shows a modified example of the engaging relation between the misalignment prevention taper engaging portion 4g of the shutter 4 and the misalignment prevention tapered portion 11c of the developer receiving portion 11, according to Embodiment 2. As shown in part (d) of FIG. 21, the structure of this modified example is different from the structure shown in part (c) of FIG. 21 only in that the positional relation of the corner lines is L1 < L2 < M1 < M2 < L3 < L4 < M3. In this modified example, the corner line at the position L4 away from the center R of the shutter opening 4f of the misalignment prevention taper engaging portion 4g abuts against the tapered surface of the tapered portion 11c.
Also in this case, the deviation of the center axes of the shutter opening 4f and the developer receiving port 11a can be suppressed, similarly. Referring to FIG. 25, a modified example 2 of the shutter 4 will be described. Part (a) of FIG. 25 shows the modified example 2 of the shutter 4, and part (b) of FIG. 25 and part (c) of FIG. 25 show the connection relation between the shutter 4 and the developer receiving portion 11 in the modified example 2. As shown in part (a) of FIG. 25, the shutter 4 of the modified example 2 is provided with the misalignment prevention taper engaging portion 4g in the close-contact portion 4h. The other configurations are the same as those of the shutter 4 (part (a) of FIG. 21) of this embodiment. The close-contact portion 4h is provided in order to control the amount of compression of the main assembly seal 13 (part (a) of FIG. 19). In this modified example, as shown in part (b) of FIG. 25, the distances of the corner lines constituting the close-contact portion 4h and the misalignment prevention taper engaging portion 4g of the shutter 4 from the center R of the shutter opening 4f (part (a) of FIG. 25) are L1, L2, L3, L4. Similarly, the distances of the corner lines constituting the misalignment prevention tapered portion 11c of the developer receiving portion 11 from the center R of the developer receiving port 11a (FIG. 19) are M1, M2, M3 (FIGS. 21, 25). As shown in part (b) of FIG. 25, the positional relation of the corner lines satisfies L1 < M1 < M2 < L2 < M3 < L3 < L4. As shown in part (c) of FIG. 25, the positional relation of the corner lines may be M1 < L1 < L2 < M2 < M3 < L3 < L4. Similarly to the relation between the shutter 4 and the developer receiving portion 11 shown in part (a) of FIG. 21, by the aligning function of the misalignment prevention taper engaging portion 4g and the misalignment prevention tapered portion 11c, the misalignment between the center axes of the shutter opening 4f and the developer receiving port 11a can be prevented.
In this example, the misalignment prevention taper engaging portion 4g of the shutter 4 is monotonically linearly tapered, but the tapered surface portion may be curved, that is, it may be arcuate. Furthermore, it may be a non-contiguous taper, having a cut-away portion or portions. The same applies to the configuration of the misalignment prevention tapered portion 11c of the developer receiving portion 11 corresponding to the misalignment prevention taper engaging portion 4g. With such structures, when the main assembly seal 13 (FIG. 19) and the close-contact portion 4h of the shutter 4 are connected with each other, the centers of the developer receiving port 11a and the shutter opening 4f are aligned, and therefore, the developer can be discharged smoothly from the developer supply container 1 into the sub-hopper 8c. If the center positions of them are deviated by even 1 mm when the shutter opening 4f and the developer receiving port 11a have small diameters, such as Φ2 mm and Φ3 mm, respectively, the effective opening area is only about one half of the intended area, and therefore, smooth discharge of the developer cannot be expected. Using the structures of this example, the deviation between the shutter opening 4f and the developer receiving port 11a can be suppressed to 0.2 mm or less (approximately the tolerances of the parts), and therefore, the effective through-opening area can be assured. Therefore, the developer can be discharged smoothly.
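The sensitivity of the effective opening area to misalignment can be checked numerically with the standard circle-intersection (lens) formula. This sketch is an illustration only and not part of the embodiment; the Φ2 mm and Φ3 mm diameters and the 1 mm and 0.2 mm offsets are the values quoted above, and the openings are assumed to be circular as those diameters suggest:

```python
import math

def overlap_area(r1: float, r2: float, d: float) -> float:
    """Area of the intersection of two circles with radii r1, r2
    whose centers are a distance d apart (standard lens formula)."""
    if d >= r1 + r2:           # no overlap at all
        return 0.0
    if d <= abs(r1 - r2):      # smaller circle entirely inside the larger
        return math.pi * min(r1, r2) ** 2
    a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

# Fraction of a Phi-2 mm port (radius 1 mm) that remains effective:
full = math.pi * 1.0**2
print(overlap_area(1.0, 1.0, 1.0) / full)   # 1 mm offset, two 2 mm openings -> ~0.39
print(overlap_area(1.0, 1.5, 1.0) / full)   # 1 mm offset, 2 mm vs 3 mm opening -> ~0.74
print(overlap_area(1.0, 1.0, 0.2) / full)   # 0.2 mm offset -> ~0.87
```

With a 1 mm offset, an opening of about Φ2 mm on both sides retains only roughly 40% of its area, in line with the "about one half" figure above, while a 0.2 mm offset leaves nearly 90% effective, which supports the point that constraining the deviation to the part tolerances preserves the through-opening area.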
(Mounting Operation of Developer Supply Container)

Referring to FIG. 26 through FIG. 32, the mounting operation of the developer supply container 1 of this embodiment to the developer receiving apparatus 8 will be described. FIG. 26 shows the position when the developer supply container 1 is inserted into the developer receiving apparatus 8 and the shutter 4 has not yet been engaged with the developer receiving apparatus 8. FIG. 27 shows the position (corresponding to FIG. 13 of Embodiment 1) in which the shutter 4 of the developer supply container 1 is engaged with the developer receiving apparatus 8. FIG. 28 shows the position in which the shutter 4 of the developer supply container 1 is exposed from the shielding portion 3b6. FIG. 29 shows a position (corresponding to FIG. 14 of Embodiment 1) in the process of connection between the developer supply container 1 and the developer receiving portion 11. FIG. 30 shows the position (corresponding to FIG. 15 of Embodiment 1) in which the developer supply container 1 has been connected with the developer receiving portion 11. FIG. 31 shows the position in which the developer supply container 1 is completely mounted to the developer receiving apparatus 8, and the developer receiving port 11a, the shutter opening 4f and the discharge opening 3a4 are in fluid communication with each other, thus enabling supply of the developer. FIG. 32 is a timing chart of the operations of the elements relating to the mounting operation of the developer supply container 1 to the developer receiving apparatus 8 as shown in FIG. 27 - FIG. 31. As shown in part (a) of FIG. 26, in the mounting operation of the developer supply container 1, the developer supply container 1 is inserted in the direction of an arrow A in the Figure toward the developer receiving apparatus 8. At this time, as shown in part (b) of FIG. 26, the shutter opening 4f of the shutter 4 and the close-contact portion 4h are shielded by the shielding portion 3b6 of the lower flange.
By this, the operator is prevented from contacting the shutter opening 4f and/or the close-contact portion 4h, which may be contaminated with the developer. In addition, as shown in part (c) of FIG. 26, in the inserting operation, a first stopper portion 4b provided on the upstream side, with respect to the mounting direction, of the supporting portion 4d of the shutter 4 abuts against an insertion guide 8e of the developer receiving apparatus 8, so that the supporting portion 4d displaces in the direction of an arrow C in the Figure. In addition, as shown in part (d) of FIG. 26, the first engaging portion 3b2 of the lower flange portion 3b and the engaging portion 11b of the developer receiving portion 11 are not engaged with each other. Therefore, as shown in part (b) of FIG. 26, the developer receiving portion 11 is held in the initial position by the urging force of an urging member 12 in the direction of an arrow F. In addition, the developer receiving port 11a is sealed by a main assembly shutter 15, so that entering of foreign matter or the like through the developer receiving port 11a and scattering of the developer through the developer receiving port 11a from the sub-hopper 8c (FIG. 4) are prevented. When the developer supply container 1 is inserted into the developer receiving apparatus 8 in the direction of the arrow A to the position shown in part (a) of FIG. 27, the shutter 4 is engaged with the developer receiving apparatus 8. That is, similarly to the developer supply container 1 of Embodiment 1, the supporting portion 4d of the shutter 4 is released from the insertion guide 8e and displaces in the direction of an arrow D in the Figure by the elastic restoring force, as shown in part (c) of FIG. 27. Therefore, the first stopper portion 4b of the shutter 4 and the first shutter stopper portion 8a of the developer receiving apparatus 8 are engaged with each other.
Then, in the insertion process of the developer supply container 1, the shutter 4 is held immovably relative to the developer receiving apparatus 8 by the relation between the supporting portion 4d and the regulation rib 3b3 described with Embodiment 1. At this time, the positional relation between the shutter 4 and the lower flange portion 3b remains unchanged from the position shown in FIG. 26. Therefore, as shown in part (b) of FIG. 27, the shutter opening 4f of the shutter 4 remains shielded by the shielding portion 3b6 of the lower flange portion 3b, and the discharge opening 3a4 remains sealed by the shutter 4. Also in this position, as shown in part (d) of FIG. 27, the engaging portion 11b of the developer receiving portion 11 is not engaged with the first engaging portion 3b2 of the lower flange portion 3b. In other words, as shown in part (b) of FIG. 27, the developer receiving portion 11 is kept in the initial position, and therefore, is spaced from the developer supply container 1. Therefore, the developer receiving port 11a is sealed by the main assembly shutter 15. The center axes of the shutter opening 4f and the developer receiving port 11a are substantially coaxial. Then, the developer supply container 1 is further inserted into the developer receiving apparatus 8 in the direction of the arrow A to the position shown in part (a) of FIG. 28. At this time, since the position of the shutter 4 is retained relative to the developer receiving apparatus 8, the developer supply container 1 moves relative to the shutter 4, and therefore, the close-contact portion 4h (FIG. 25) and the shutter opening 4f of the shutter 4 are exposed from the shielding portion 3b6. Here, at this time, the shutter 4 still seals the discharge opening 3a4. In addition, as shown in part (d) of FIG. 28, the engaging portion 11b of the developer receiving portion 11 is in the neighborhood of the bottom end portion of the first engaging portion 3b2 of the lower flange portion 3b.
Therefore, the developer receiving portion 11 is held at the initial position as shown in part (b) of FIG. 28, and is spaced from the developer supply container 1, and therefore, the developer receiving port 11a is sealed by the main assembly shutter 15. Then, the developer supply container 1 is further inserted into the developer receiving apparatus 8 in the direction of the arrow A to the position shown in part (a) of FIG. 29. At this time, similarly to the foregoing, the position of the shutter 4 is held relative to the developer receiving apparatus 8, and therefore, as shown in part (b) of FIG. 29, the developer supply container 1 moves relative to the shutter 4 in the direction of the arrow A. As shown in part (b) of FIG. 29, at this time, the shutter 4 still seals the discharge opening 3a4. At this time, as shown in part (d) of FIG. 29, the engaging portion 11b of the developer receiving portion 11 is substantially at the middle part of the first engaging portion 3b2 of the lower flange portion 3b. Thus, as shown in part (b) of FIG. 29, by the engagement with the first engaging portion 3b2, the developer receiving portion 11 moves with the mounting operation in the direction of an arrow E in the Figure, toward the exposed shutter opening 4f and the close-contact portion 4h (FIG. 25). Therefore, as shown in part (b) of FIG. 29, the developer receiving port 11a having been sealed by the main assembly shutter 15 starts opening gradually. Then, the developer supply container 1 is further inserted into the developer receiving apparatus 8 in the direction of the arrow A to the position shown in part (a) of FIG. 30. Then, as shown in part (d) of FIG. 30, by the direct engagement between the engaging portion 11b of the developer receiving portion 11 and the first engaging portion 3b2, the developer receiving portion 11 displaces to the upper end of the first engaging portion 3b2 in the direction of the arrow E in the Figure, which is a direction crossing the mounting direction.
In other words, as shown in part (b) ofFIG.30, the developer receiving portion11displaces in the direction of the arrow E in the Figure, that is, in the direction crossing the mounting direction of the developer supply container1, so that the main assembly seal13connects with the shutter4in the state of being closely contacted with the close-contact portion4hof the shutter4(FIG.25). At this time, as described hereinbefore, the misalignment prevention tapered portion11cof the developer receiving portion11and the misalignment prevention taper engaging portion4gof the shutter4are engaged with each other (part (c) ofFIG.21), and therefore, the developer receiving port11aand the shutter opening4fare brought into fluid communication with each other. In addition, by the displacement of the developer receiving portion11in the direction of the arrow E, the main assembly shutter15is further spaced from the developer receiving port11a, and therefore, the developer receiving port11ais completely unsealed. Here, also at this time, the shutter4still seals the discharge opening3a4. In this embodiment, the displacement of the developer receiving portion11starts after the shutter opening4fand the close-contact portion4hof the shutter4are assuredly exposed, but this is not indispensable. For example, the displacement may start before the completion of the exposure, provided that the shutter opening4fand the close-contact portion4hare completely uncovered by the shielding portion3b6by the time the developer receiving portion11reaches the neighborhood of the position of connecting to the shutter4, that is, by the time the engaging portion11bof the developer receiving portion11comes to the neighborhood of the upper end of the first engaging portion3b2.
However, in order to connect the developer receiving portion11and the shutter4with each other assuredly, it is desired that the developer receiving portion11is displaced as described above after the shutter opening4fand the close-contact portion4hof the shutter4are uncovered by the shielding portion3b6, as in this embodiment. Subsequently, as shown in part (a) ofFIG.31, the developer supply container1is further inserted in the direction of the arrow A into the developer receiving apparatus8. Then, as shown in part (c) ofFIG.31, similarly to the foregoing, the developer supply container1moves relative to the shutter4in the direction of the arrow A and reaches a supply position. At this time, as shown in part (d) ofFIG.31, the engaging portion11bof the developer receiving portion11displaces relative to the lower flange portion3bto the downstream end of the second engaging portion3b4with respect to the mounting direction, and the developer receiving portion11is kept at the position in which it is connected with the shutter4. Further, as shown in part (b) ofFIG.31, the shutter4unseals the discharge opening3a4. In other words, the discharge opening3a4, the shutter opening4fand the developer receiving port11aare in fluid communication with each other. In addition, as shown in part (a) ofFIG.31, a drive receiving portion2dis engaged with a driving gear9so that the developer supply container1is capable of receiving a drive from the developer receiving apparatus8. A detecting mechanism (unshown) provided in the developer receiving apparatus8detects that the developer supply container1is in the predetermined position (supply position) in which it is capable of supplying. When the driving gear9rotates in the direction of an arrow Q in the Figure, the container body2rotates in the direction of an arrow R, and the developer is supplied into the sub-hopper8cby the operation of the above-described pump portion5.
As described above, the main assembly seal13of the developer receiving portion11is connected with the close-contact portion4hof the shutter4in the state in which the positional relation between the developer receiving portion11and the shutter4with respect to the mounting direction of the developer supply container1is maintained. In addition, by the developer supply container1moving relative to the shutter4thereafter, the discharge opening3a4, the shutter opening4fand the developer receiving port11aare brought into fluid communication with each other. Therefore, as compared with Embodiment 1, the positional relation, with respect to the mounting direction of the developer supply container1, between the main assembly seal13forming the developer receiving port11aand the shutter4is maintained, and therefore, the main assembly seal13does not slide on the shutter4. In other words, in the mounting operation of the developer supply container1to the developer receiving apparatus8, no direct sliding (dragging) action in the mounting direction occurs between the developer receiving portion11and the developer supply container1from the start of connection therebetween to the developer suppliable state. Therefore, in addition to the advantageous effects of the above-described embodiment, the contamination of the main assembly seal13of the developer receiving portion11with the developer which may be caused by the dragging on the developer supply container1can be prevented. In addition, wearing of the main assembly seal13of the developer receiving portion11attributable to the dragging can be prevented. Therefore, a reduction of the durability, due to the wearing, of the main assembly seal13of the developer receiving portion11can be suppressed, and the reduction of the sealing property of the main assembly seal13due to the wearing can be suppressed.
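The interlock described above (shutter opening exposed, then receiving portion raised and connected, then receiving port unsealed, and only then the discharge opening opened) can be summarized as an ordered sequence of states. The following is a minimal illustrative model, not part of the patent; the state flags and their values per figure are assumptions read off the description ofFIG.26-FIG.31.

```python
# Illustrative state model of the mounting sequence of FIG.26-FIG.31.
# Each tuple: (figure, shutter opening 4f exposed from 3b6,
#              receiving portion 11 raised, receiving port 11a open,
#              discharge opening 3a4 open).  Values are assumptions.

MOUNT_SEQUENCE = [
    ("FIG.26", False, False, False, False),  # initial position
    ("FIG.27", False, False, False, False),  # shutter 4 held by apparatus 8
    ("FIG.28", True,  False, False, False),  # 4f and 4h exposed from 3b6
    ("FIG.29", True,  False, False, False),  # 11b mid-way along 3b2, 11 rising
    ("FIG.30", True,  True,  True,  False),  # seal 13 connected, 11a unsealed
    ("FIG.31", True,  True,  True,  True),   # supply position: all in communication
]

def check_no_leak_order(seq):
    """The discharge opening 3a4 must open only after the shutter opening
    is exposed, the receiving portion is connected, and the receiving port
    is open -- the ordering property this embodiment relies on."""
    for fig, exposed, raised, port_open, discharge_open in seq:
        if discharge_open and not (exposed and raised and port_open):
            return False
    return True

print(check_no_leak_order(MOUNT_SEQUENCE))  # True
```

Because the dismounting operation is the reverse of this sequence, the same check also expresses that the discharge opening is resealed before the receiving portion is spaced.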
(Dismounting Operation of Developer Supply Container) Referring toFIG.26toFIG.31andFIG.32, the operation of removing the developer supply container1from the developer receiving apparatus8will be described.FIG.32is a timing chart of the operations of the elements relating to the dismounting operation of the developer supply container1from the developer receiving apparatus8as shown inFIG.27-FIG.31. Similarly to Embodiment 1, the removing operation (dismounting operation) of the developer supply container1is the reverse of the mounting operation. As described hereinbefore, in the position of part (a) ofFIG.31, when the amount of the developer in the developer supply container1decreases, the operator dismounts the developer supply container1in the direction of an arrow B in the Figure. The position of the shutter4relative to the developer receiving apparatus8is maintained by the relation between the supporting portion4dand the regulation rib3b3, as described above. Therefore, the developer supply container1moves relative to the shutter4. When the developer supply container1is moved to the position shown in part (a) ofFIG.30, the discharge opening3a4is sealed by the shutter4, as shown in part (b) ofFIG.30. That is, in such a position, the developer is not supplied from the developer supply container1. In addition, because the discharge opening3a4is sealed, the developer does not scatter through the discharge opening3a4from the developer supply container1due to the vibration or the like resulting from the dismounting operation. The developer receiving portion11keeps connected with the shutter4, and therefore, the developer receiving port11aand the shutter opening4fare still in communication with each other.
Then, when the developer supply container1is moved to the position shown in part (a) ofFIG.28, the engaging portion11bof the developer receiving portion11displaces in the direction of the arrow F along the first engaging portion3b2by the urging force in the direction of the arrow F of the urging member12, as shown in part (d) ofFIG.28. By this, as shown in part (b) ofFIG.28, the shutter4and the developer receiving portion11are spaced from each other. Therefore, in the process of reaching this position, the developer receiving portion11displaces in the direction of the arrow F (downwardly). Therefore, even if the developer is in the state of being packed in the neighborhood of the developer receiving port11a, the developer is accommodated in the sub-hopper8cby the vibration or the like resulting from the dismounting operation. By this, the developer is prevented from scattering to the outside. Thereafter, as shown in part (b) ofFIG.28, the developer receiving port11ais sealed by the main assembly shutter15. Then when the developer supply container1is removed to the position shown in part (a) ofFIG.27, the shutter opening4fis shielded by the shielding portion3b6of the lower flange portion3b. More particularly, the neighborhood of the shutter opening4fand the close-contact portion4hwhich is the only contaminated part is shielded by the shielding portion3b6. Therefore, the neighborhood of the shutter opening4fand the close-contact portion4hare not seen by the operator handling the developer supply container1. In addition, the operator is protected from touching inadvertently the neighborhood of the shutter opening4fand the close-contact portion4hcontaminated with the developer. Furthermore, the close-contact portion4hof the shutter4is stepped lower than the sliding surface4i. 
Therefore, when the shutter opening4fand the close-contact portion4hare shielded by the shielding portion3b6, a downstream side end surface X (part (b) ofFIG.20) of the shielding portion3b6with respect to the dismounting direction of the developer supply container1is not contaminated by the developer deposited on the shutter opening4fand the close-contact portion4h. Moreover, with the dismounting operation of the above-described developer supply container1, the spacing operation of the developer receiving portion11by the engaging portions3b2,3b4is completed, and thereafter, the supporting portion4dof the shutter4is disengaged from the regulation rib3b3so as to become elastically deformable. Therefore, the shutter4is released from the developer receiving apparatus8, so that it becomes displaceable (movable) together with the developer supply container1. When the developer supply container1is moved to the position of part (a) ofFIG.26, the supporting portion4dof the shutter4contacts the insertion guide8eof the developer receiving apparatus8, by which it is displaced in the direction of the arrow C in the Figure, as shown in part (c) ofFIG.26. By this, the second stopper portion4cof the shutter4is disengaged from the second shutter stopper portion8bof the developer receiving apparatus8, so that the lower flange portion3bof the developer supply container1and the shutter4displace integrally in the direction of the arrow B. By further moving the developer supply container1away from the developer receiving apparatus8in the direction of the arrow B, the developer supply container1is completely taken out of the developer receiving apparatus8. The shutter4of the developer supply container1thus taken out has returned to the initial position, and therefore, even if the developer supply container1is remounted to the developer receiving apparatus8, no problem arises.
As described hereinbefore, the shutter opening4fand the close-contact portion4hof the shutter4are shielded by the shielding portion3b6, and therefore, the portion contaminated with the developer is not seen by the operator handling the developer supply container1. Since the only portion of the developer supply container1that is contaminated with the developer is shielded, the taken-out developer supply container1looks as if it were an unused developer supply container1. FIG.32shows the flow of the mounting operation of the developer supply container1to the developer receiving apparatus8(FIGS.26-31) and the flow of the dismounting operation of the developer supply container1from the developer receiving apparatus8. When the developer supply container1is mounted to the developer receiving apparatus8, the engaging portion11bof the developer receiving portion11is engaged with the first engaging portion3b2of the developer supply container1, by which the developer receiving port displaces toward the developer supply container. On the other hand, when the developer supply container1is dismounted from the developer receiving apparatus8, the engaging portion11bof the developer receiving portion11engages with the first engaging portion3b2of the developer supply container1, by which the developer receiving port displaces away from the developer supply container. As described in the foregoing, according to this embodiment of the developer supply container1, the following advantageous effects can be provided in addition to the same advantageous effects as those of Embodiment 1. In the developer supply container1of this embodiment, the developer receiving portion11and the developer supply container1are connected with each other through the shutter opening4f. And, by the connection, the misalignment prevention tapered portion11cof the developer receiving portion11and the misalignment prevention taper engaging portion4gof the shutter4are engaged with each other.
By the aligning function of such engagement, the discharge opening3a4is assuredly unsealed, and therefore, the discharge amount of the developer is stabilized. In the case of Embodiment 1, the discharge opening3a4formed in the part of the opening seal3a5moves on the shutter4to become in fluid communication with the developer receiving port11a. In this case, the developer might enter into a seam existing between the developer receiving portion11and the shutter4in the process of completely connecting with the developer receiving port11aafter the discharge opening3a4is uncovered by the shutter4, with the result that a small amount of the developer scatters into the developer receiving apparatus8. However, according to this example, the shutter opening4fand the discharge opening3a4are brought into communication with each other after completion of the connection (communication) between the developer receiving port11aof the developer receiving portion11and the shutter opening4fof the shutter4. For this reason, there is no seam between the developer receiving portion11and the shutter4. In addition, the positional relation between the shutter4and the developer receiving port11adoes not change. Therefore, the developer contamination by the developer entering into the gap between the developer receiving portion11and the shutter4and the developer contamination caused by the dragging of the main assembly seal13on the surface of the opening seal3a5can be avoided. Therefore, this example is preferable over Embodiment 1 from the standpoint of the reduction of the contamination with the developer. In addition, by the provision of the shielding portion3b6, the shutter opening4fand the close-contact portion4hthat are the only portion contaminated by the developer are shielded, and the developer-contaminated portion is not exposed to the outside, similarly to Embodiment 1 in which the developer-contaminated portion of the opening seal3a5is shielded by the shutter4.
Therefore, similarly to Embodiment 1, the portion contaminated with the developer is not seen from the outside by the operator. Furthermore, as described in the foregoing, as in Embodiment 1, the connecting side (developer receiving portion11) and the connected side (developer supply container1) are directly engaged to establish the connection relation therebetween. More specifically, the timing of the connection between the developer receiving portion11and the developer supply container1can be controlled easily by the positional relation, with respect to the mounting direction, among the engaging portion11bof the developer receiving portion11, the first engaging portion3b2and the second engaging portion3b4of the lower flange portion3bof the developer supply container1, and the shutter opening4fof the shutter4. In other words, the timing may deviate only within the tolerances of the three elements, and therefore, very high accuracy control can be performed. Therefore, the connecting operation of the developer receiving portion11to the developer supply container1and the spacing operation from the developer supply container1can be carried out assuredly, with the mounting operation and the dismounting operation of the developer supply container1. The displacement amount of the developer receiving portion11in the direction crossing the mounting direction of the developer supply container1can be controlled by the positions of the engaging portion11bof the developer receiving portion11and the second engaging portion3b4of the lower flange portion3b. Similarly to the foregoing, the displacement amount may deviate only within the tolerances of the two elements, and therefore, very high accuracy control can be performed. Therefore, for example, the close-contact state between the main assembly seal13and the shutter4can be controlled easily, so that the developer discharged from the shutter opening4fcan be fed into the developer receiving port11aassuredly.
Embodiment 3 Referring toFIGS.33,34, a structure of Embodiment 3 will be described. Part (a) ofFIG.33is a partial enlarged view around a first engaging portion3b2of a developer supply container1, and part (b) ofFIG.33is a partial enlarged view of a developer receiving apparatus8. Parts (a)-(c) ofFIG.34are schematic views illustrating the movement of a developer receiving portion11in a dismounting operation. The position of part (a) ofFIG.34corresponds to the position ofFIGS.15,30, the position of part (c) ofFIG.34corresponds to the position ofFIGS.13and28, and the position of part (b) ofFIG.34is therebetween and corresponds to the position ofFIGS.14,29. As shown in part (a) ofFIG.33, in this example, the structure of the first engaging portion3b2is partly different from those of Embodiment 1 and Embodiment 2. The other structures are substantially similar to those of Embodiment 1 and/or Embodiment 2. In this example, the same reference numerals as in the foregoing Embodiment 1 are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. As shown in part (a) ofFIG.33, above the engaging portions3b2,3b4for moving the developer receiving portion11upwardly, an engaging portion3b7for moving the developer receiving portion11downwardly is provided. Here, the engaging portion comprising the first engaging portion3b2and the second engaging portion3b4for moving the developer receiving portion11upwardly is called a lower engaging portion. On the other hand, the engaging portion3b7provided in this embodiment to move the developer receiving portion11downwardly is called an upper engaging portion. The engaging relation between the developer receiving portion11and the lower engaging portion comprising the first engaging portion3b2and the second engaging portion3b4is similar to those of the above-described embodiments, and therefore, the description thereof is omitted.
The engaging relation between the developer receiving portion11and the upper engaging portion comprising the engaging portion3b7will be described. If, for example, the developer supply container1is dismounted extremely quickly (quick dismounting, not practical though), in the developer supply container1of Embodiment 1 or Embodiment 2, the developer receiving portion11might not be guided by the first engaging portion3b2and would be lowered at a delayed timing, with the result of a slight contamination with the developer, to a practically problem-free extent, on the lower surface of the developer supply container1, the developer receiving portion11and/or the main assembly seal13. This was confirmed. In view of this, the developer supply container1of Embodiment 3 is improved in this respect by providing it with the upper engaging portion3b7. When the developer supply container1is dismounted, the developer receiving portion11reaches a region in which it contacts the first engaging portion3b2. Even if the developer supply container1is taken out extremely quickly, the engaging portion11bof the developer receiving portion11is engaged with the upper engaging portion3b7and is guided thereby, with the dismounting operation of the developer supply container1, so that the developer receiving portion11is positively moved in the direction of an arrow F in the Figure. The upper engaging portion3b7extends to an upstream side beyond the first engaging portion3b2in the direction (arrow B) in which the developer supply container1is taken out. More particularly, a free end portion3b70of the upper engaging portion3b7is upstream of a free end portion3b20of the first engaging portion3b2with respect to the direction (arrow B) in which the developer supply container1is taken out. The start timing of the downward movement of the developer receiving portion11in the dismounting of the developer supply container1is after the sealing of the discharge opening3a4by the shutter4, similarly to Embodiment 2.
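A rough way to see the effect of the upper engaging portion3b7 is as a second guide surface bounding the height of the developer receiving portion11 from above, so that even a quickly pulled container cannot leave the receiving portion raised past the intended travel. The sketch below is a hypothetical numeric model, not from the patent; all travel and height values are invented, normalized quantities, and the offset between the two guides stands in for the free end portion3b70 extending upstream of3b20.

```python
# Hypothetical model of the developer receiving portion 11 height during
# dismounting.  Normalized, invented values: height 1.0 is the raised
# (connected) position, 0.0 the lowered (initial) position.

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def receiving_height(container_travel, inertia_height):
    """Height of portion 11 after the container has been withdrawn by
    `container_travel`.  `inertia_height` is where 11 would sit if no
    guide acted on it (quick dismounting may leave it raised)."""
    lower = clamp(1.0 - container_travel, 0.0, 1.0)  # lower guide 3b2/3b4, pushes up
    upper = clamp(1.5 - container_travel, 0.0, 1.0)  # upper guide 3b7, caps from above
    return clamp(inertia_height, lower, upper)

# Quick dismounting: without 3b7 the portion would lag at the raised
# height (1.0); the upper engaging portion forces it down with travel.
print(receiving_height(container_travel=1.0, inertia_height=1.0))  # 0.5
print(receiving_height(container_travel=1.5, inertia_height=1.0))  # 0.0
```

The model also reflects why no urging member12 is needed for the downward motion: the upper bound alone brings the height to 0.0 by the end of the withdrawal.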
The movement start timing is controlled by the position of the upper engaging portion3b7shown in part (a) ofFIG.33. If the developer receiving portion11is spaced from the developer supply container1before the discharge opening3a4is sealed by the shutter4, the developer may scatter in the developer receiving apparatus8from the discharge opening3a4by vibration or the like during the dismounting. Therefore, it is preferable to space the developer receiving portion11after the discharge opening3a4is sealed assuredly by the shutter4. Using the developer supply container1of this embodiment, the developer receiving portion11can be spaced assuredly from the discharge opening3a4in the dismounting operation of the developer supply container1. In addition, with the structure of this example, the developer receiving portion11can be moved assuredly by the upper engaging portion3b7without using the urging member12for moving the developer receiving portion11downwardly. Therefore, as described above, even in the case of the quick dismounting of the developer supply container1, the upper engaging portion3b7assuredly guides the developer receiving portion11so that the downward movement can be effected at the predetermined timing. Therefore, the contamination of the developer supply container1with the developer can be prevented even in the quick dismounting. With the structures of Embodiment 1 and Embodiment 2, the developer receiving portion11is moved against the urging force of the urging member12in the mounting of the developer supply container1. Therefore, the manipulating force required of the operator in the mounting increases correspondingly; on the other hand, in the dismounting, the container can be dismounted smoothly with the aid of the urging force of the urging member12. Using this example, as shown in part (b) ofFIG.33, it may be unnecessary to provide the developer receiving apparatus8with a member for urging the developer receiving portion11downwardly.
In this case, the urging member12is not provided, and therefore, the required manipulating force is the same irrespective of whether the developer supply container1is mounted or dismounted relative to the developer receiving apparatus8. In addition, irrespective of the provision of the urging member12, the developer receiving portion11of the developer receiving apparatus8can be connected and spaced in the direction crossing the mounting and dismounting directions with the mounting and dismounting operations of the developer supply container1. In other words, the contamination, with the developer, of the downstream side end surface Y (part (b) ofFIG.5) with respect to the mounting direction of the developer supply container1can be suppressed, as compared with the case in which the developer supply container1is connected with and spaced from the developer receiving portion11in the mounting and dismounting directions of the developer supply container1. In addition, the developer contamination caused by the main assembly seal13dragging on the lower surface of the lower flange portion3bcan be prevented. From the standpoint of suppression of the maximum value of the manipulating force in the mounting and dismounting of the developer supply container1of this example, the omission of the urging member12is desired. On the other hand, from the standpoint of reduction of the manipulating force in the dismounting or from the standpoint of assuring the initial position of the developer receiving portion11, the developer receiving apparatus8is desirably provided with the urging member12. A proper selection therebetween can be made depending on the specifications of the main assembly and/or the developer supply container. Comparison Example Referring toFIG.35, a comparison example will be described.
Part (a) ofFIG.35is a sectional view of a developer supply container1and a developer receiving apparatus8prior to the mounting, parts (b) and (c) ofFIG.35are sectional views during the process of mounting the developer supply container1to the developer receiving apparatus8, part (d) ofFIG.35is a sectional view thereof after the developer supply container1is connected to the developer receiving apparatus8. In the description of this comparison example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted for simplicity. In the comparison example, the developer receiving portion11is fixed to the developer receiving apparatus8and is immovable in the upward or downward direction, as contrasted to Embodiment 1 or Embodiment 2. In other words, the developer receiving portion11and the developer supply container1are connected and spaced relative to each other in the mounting and dismounting direction of the developer supply container1. Therefore, in order to prevent an interference of the developer receiving portion11with the shielding portion3b6provided in the downstream side of the lower flange portion3bwith respect to the mounting direction in Embodiment 2, for example, an upper end of the developer receiving portion11is lower than the shielding portion3b6as shown in part (a) ofFIG.35. In addition, to provide a compression state equivalent to that of Embodiment 2 between the shutter4and the main assembly seal13, the main assembly seal13of the comparison example is longer than that of the main assembly seal13of Embodiment 2 in the vertical direction. 
As described above, the main assembly seal13is made of an elastic member or foam member or the like, and therefore, even if the interference occurs between the developer supply container1and the developer receiving portion11in the mounting and dismounting operations, the interference does not prevent the mounting and dismounting operations of the developer supply container1because of the elastic deformation as shown in part (b) ofFIG.35and part (c) ofFIG.35. Experiments have been carried out on the discharge amount and the operativity as well as the developer contamination, using the developer supply container1of the comparison example and the developer supply containers1of Embodiment 1-Embodiment 3. In the experiments, the developer supply container1is filled with a predetermined amount of a predetermined developer, and the developer supply container1is once mounted to the developer receiving apparatus8. Thereafter, the developer supplying operation is carried out to the extent of one tenth of the filled amount, and the discharge amount during the supplying operation is measured. Then, the developer supply container1is taken out of the developer receiving apparatus8, and the contamination of the developer supply container1and the developer receiving apparatus8with the developer is observed. Further, the operativity such as the manipulating force and the operation feeling during the mounting and dismounting operations of the developer supply container1is checked. In the experiments, the developer supply container1of Embodiment 3 was based on the developer supply container1of Embodiment 2. The experiments were carried out five times for each case for the purpose of reliability of the evaluations. Table 1 shows the results of the experiments and evaluations.

TABLE 1

                 Developer contamination prevention
                 Developer supply   Developer supply   Discharge
Structures       device side        container side     performance   Operativity
Comp. example    N                  N                  F             G
Emb. 1           F                  G                  F             G
Emb. 2           G                  G                  G             G
Emb. 3           E                  E                  G             G

Developer contamination prevention:
E: Hardly any contamination even in extreme condition use;
G: Hardly any contamination in normal condition use;
F: Slight contamination (no problem practically) in normal use; and
N: Contaminated (problematic practically) in normal use.
Discharge performance:
G: Sufficient discharge amount per unit time;
F: 70% (based on G case) (no problem practically); and
N: Less than 50% (based on G case) (problematic practically).
Operativity:
G: Required force is less than 20 N with good operation feeling;
F: Required force is 20 N or larger with good operation feeling; and
N: Required force is 20 N or larger with no good operation feeling.

As to the level of the developer contamination of the developer supply container1or the developer receiving apparatus8after the developer supply container1is taken out of the developer receiving apparatus8after the supplying operation, in the developer supply container1of the comparison example, the developer deposited on the main assembly seal13is transferred onto the lower surface of the lower flange portion3band/or the sliding surface4i(FIG.35) of the shutter4. In addition, the developer is deposited on the end surface Y (part (b) ofFIG.5) of the developer supply container1. Therefore, in this state, if the operator inadvertently touches the developer deposited portion, the operator's finger will be contaminated with the developer. In addition, a large amount of the developer is scattered on the developer receiving apparatus8. With the structure of the comparison example, when the developer supply container1is mounted in the mounting direction (arrow A in the Figure) from the position shown in part (a) ofFIG.35, the upper surface of the main assembly seal13of the developer receiving portion11first contacts the end surface Y (part (b) ofFIG.5) in the downstream side, with respect to the mounting direction, of the developer supply container1.
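For comparison across the rows, the grades of Table 1 can be re-encoded as data. The grades are copied from the table; the numeric ranking E>G>F>N below is an assumption introduced here only to order the structures, not something stated in the experiments.

```python
# Table 1 re-encoded as data.  Grades per structure, in column order:
# (contamination prevention, device side), (contamination prevention,
# container side), discharge performance, operativity.

RANK = {"E": 3, "G": 2, "F": 1, "N": 0}  # assumed ordering, E best

TABLE1 = {
    "Comp. example": ("N", "N", "F", "G"),
    "Emb. 1":        ("F", "G", "F", "G"),
    "Emb. 2":        ("G", "G", "G", "G"),
    "Emb. 3":        ("E", "E", "G", "G"),
}

def total(name):
    """Sum of the assumed numeric ranks across all four criteria."""
    return sum(RANK[g] for g in TABLE1[name])

best = max(TABLE1, key=total)
print(best)  # Emb. 3
```

Under this assumed ordering, Embodiment 3 scores highest overall, consistent with the discussion of the quick-dismounting contamination results that follows.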
Thereafter, as shown in part (c) ofFIG.35, the developer supply container1displaces in the direction of an arrow A, in the state that the upper surface of the main assembly seal13of the developer receiving portion11is in contact with the lower surface of the lower flange portion3band the sliding surface4iof the shutter4. Therefore, the developer contamination by the dragging remains on the contact portions, and the developer contamination is exposed to the outside of the developer supply container1and scatters, with the result of contamination of the developer receiving apparatus8. It has been confirmed that the levels of the developer contamination in the developer supply containers1of Embodiment 1-Embodiment 3 are much improved over that in the comparison example. In Embodiment 1, by the mounting operation of the developer supply container1, the connecting portion3a6of the opening seal3a5having been shielded by the shutter4is exposed, and the main assembly seal13of the developer receiving portion11is connected to the exposed portion in the direction crossing the mounting direction. With the structures of Embodiment 2 and Embodiment 3, the shutter opening4fand the close-contact portion4hare uncovered by the shielding portion3b6, and by the time immediately before the alignment between the discharge opening3a4and the shutter opening4f, the developer receiving portion11displaces in the direction (upwardly in the embodiments) crossing the mounting direction to connect with the shutter4. Therefore, the developer contamination of the downstream end surface Y (part (b) ofFIG.5) with respect to the mounting direction of the developer supply container1can be prevented.
In addition, in the developer supply container1of Embodiment 1, the connecting portion3a6formed on the opening seal3a5, which is contaminated by the developer when connected to by the main assembly seal13of the developer receiving portion11, is shielded by the shutter4with the dismounting operation of the developer supply container1. Therefore, the connecting portion3a6of the opening seal3a5of the taken-out developer supply container1is not seen from the outside. In addition, the scattering of the developer deposited on the connecting portion3a6of the opening seal3a5of the taken-out developer supply container1can be prevented. Similarly, in the developer supply container1of Embodiment 2 or Embodiment 3, the close-contact portion4hof the shutter4and the shutter opening4fcontaminated with the developer in the connection of the developer receiving portion11are shielded by the shielding portion3b6with the dismounting operation of the developer supply container1. Therefore, the close-contact portion4hof the shutter4and the shutter opening4fcontaminated with the developer are not seen from the outside. In addition, the scattering of the developer deposited on the close-contact portion4hand the shutter opening4fof the shutter4can be prevented. The levels of the contamination with the developer were checked in the case of the quick dismounting of the developer supply container1. With the structures of Embodiment 1 and Embodiment 2, a slight level of developer contamination is seen, and with the structure of Embodiment 3, no developer contamination is seen on the developer supply container1or the developer receiving portion11. This is because even if the quick dismounting of the developer supply container1of Embodiment 3 is carried out, the developer receiving portion11is assuredly guided downwardly at the predetermined timing by the upper engaging portion3b7, and therefore, no deviation of the timing of the movement of the developer receiving portion11occurs.
It has been confirmed that the structure of Embodiment 3 is better than the structures of Embodiment 1 and Embodiment 2 with respect to the developer contamination level in the quick dismounting. Discharging performance during the supplying operation of the developer supply containers1is checked. For this checking, the discharge amount of the developer discharged from the developer supply container1per unit time is measured, and the repeatability is checked. The results show that in Embodiment 2 and Embodiment 3, the discharge amount from the developer supply container1per unit time is sufficient and the repeatability is excellent. With Embodiment 1 and the comparison example, the discharge amount from the developer supply container1per unit time is sufficient on one occasion and is 70% on another occasion. When the developer supply container1is observed during the supplying operation, the developer supply container1is sometimes slightly offset in the dismounting direction from the mounting position by the vibration during the operation. The developer supply container1of Embodiment 1 is mounted and demounted relative to the developer receiving apparatus8a plurality of times, and the connection state is checked each time, and in one case out of five, the positions of the discharge opening3a4of the developer supply container1and the developer receiving port11aare offset with the result that the opening communication area is relatively small. It is considered that this is why the discharge amount from the developer supply container1per unit time is relatively small.
From the phenomenon and the structure, it is understood that in the developer supply containers1of Embodiment 2 and Embodiment 3, by the aligning function of the engaging effect between the misalignment prevention tapered portion11cand the misalignment prevention taper engaging portion4g, the shutter opening4fand the developer receiving port11acommunicate with each other without the misalignment, even if the position of the developer supply container1relative to the developer receiving apparatus8is slightly offset. Therefore, it is considered that the discharging performance (discharge amount per unit time) is stabilized. The operability is checked. The mounting force of the developer supply container1to the developer receiving apparatus8is slightly higher in Embodiment 1, Embodiment 2 and Embodiment 3 than in the comparison example. This is because, as described above, the developer receiving portion11is displaced upwardly against the urging force of the urging member12urging the developer receiving portion11downwardly. The manipulating force in Embodiment 1 to Embodiment 3 is approx. 8 N-15 N, which is not a problem. With the structure of Embodiment 3, the mounting force was checked with the structure not having the urging member12. At this time, the manipulating force in the mounting operation is substantially the same as that of the comparison example and was approx. 5 N-10 N. The demounting force in the dismounting operation of the developer supply container1was measured. The results show that the demounting force is smaller than the mounting force in the case of the developer supply containers1of Embodiment 1, Embodiment 2 and Embodiment 3 and is approx. 5 N-9 N. As described above, this is because the developer receiving portion11moves downwardly with the assist of the urging force of the urging member12. Similarly to the foregoing, when the urging member12is not provided in Embodiment 3, there is no significant difference between the mounting force and the demounting force, both being approx. 6 N-10 N.
In any of the developer supply containers1, there is no problem with the operation feeling. By the checking described in the foregoing, it has been confirmed that the developer supply container1of this embodiment is overwhelmingly better than the developer supply container1of the comparison example from the standpoint of prevention of the developer contamination. In addition, the developer supply containers1of these embodiments have solved various problems of the conventional developer supply container. In the developer supply container of this embodiment, the mechanism for displacing the developer receiving portion11and connecting it with the developer supply container1can be simplified, as compared with the conventional art. More particularly, a driving source or a drive transmission mechanism for moving the entirety of the developing device upwardly is not required, and therefore, the structure of the image forming apparatus side is not complicated, and an increase in cost due to the increase of the number of parts can be avoided. In the conventional art, in order to avoid the interference with the developing device when the entirety of the developing device moves up and down, a large space is required, but such upsizing of the image forming apparatus can be prevented in the present invention. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with the minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer.
In addition, with the developer supply container1of this embodiment, the timing of displacing the developer receiving portion11in the direction crossing with the mounting and demounting direction by the developer supply container1in the mounting and dismounting operation of the developer supply container1can be controlled assuredly by the engaging portion comprising the first engaging portion3b2and the second engaging portion3b4. In other words, the developer supply container1and the developer receiving portion11can be connected and spaced relative to each other without relying on the operation of the operator. Embodiment 4 Referring to the drawings, Embodiment 4 will be described. In Embodiment 4, the structure of the developer receiving apparatus and the developer supply container are partly different from those of Embodiment 1 and Embodiment 2. The other structures are substantially the same as with Embodiment 1 or Embodiment 2. In the description of this embodiment, the same reference numerals as in Embodiments 1 and 2 are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted for simplicity. (Image Forming Apparatus) FIGS.36and37illustrate an example of the image forming apparatus comprising a developer receiving apparatus to which a developer supply container (so-called toner cartridge) is detachably mounted. The structure of the image forming apparatus is substantially the same as with Embodiment 1 or Embodiment 2 except for a structure of a part of the developer supply container and a part of the developer receiving apparatus, and therefore, the detailed description of the common parts is omitted for simplicity. 
(Developer Receiving Apparatus) Referring toFIGS.38,39and40, the developer receiving apparatus8will be described.FIG.38is a schematic perspective view of the developer receiving apparatus8.FIG.39is a schematic perspective view of the developer receiving apparatus8as seen from a back side ofFIG.38.FIG.40is a schematic sectional view of the developer receiving apparatus8. The developer receiving apparatus8is provided with a mounting portion (mounting space)8fto which the developer supply container1is detachably mounted. Further, there is provided a developer receiving portion11for receiving a developer discharged from the developer supply container1through a discharge opening (opening)1c(FIG.43). The developer receiving portion11is mounted so as to be movable (displaceable) relative to the developer receiving apparatus8in the vertical direction. As shown inFIG.40, the upper end surface of the developer receiving portion11is provided with a main assembly seal13having a developer receiving port11aat the central portion. The main assembly seal13comprises an elastic member, a foam member or the like, and the main assembly seal13is closely-contacted with an opening seal (unshown) provided with a discharge opening1cfor the developer supply container1which will be described hereinafter to prevent leakage of the developer from the discharge opening1cand/or the developer receiving port11a. In order to prevent the contamination in the mounting portion8fby the developer as much as possible, a diameter of the developer receiving port11ais desirably substantially the same as or slightly larger than a diameter of the discharge opening3a4of the developer supply container1.
This is because if the diameter of the developer receiving port11ais smaller than the diameter of the discharge opening1c, the developer discharged from the developer supply container1is deposited on the upper surface of the developer receiving port11a, and the deposited developer is transferred onto the lower surface of the developer supply container1during the dismounting operation of the developer supply container1, with the result of contamination with the developer. In addition, the developer transferred onto the developer supply container1may be scattered to the mounting portion8fwith the result of contamination of the mounting portion8fwith the developer. On the contrary, if the diameter of the developer receiving port11ais considerably larger than the diameter of the discharge opening1c, an area in which the developer scattered from the developer receiving port11ais deposited on the neighborhood of the discharge opening1cis large. That is, the contaminated area of the developer supply container1by the developer is large, which is not preferable. Under the circumstances, the difference between the diameter of the developer receiving port11aand the diameter of the discharge opening1cis preferably substantially 0 to approx. 2 mm. In this example, the diameter of the discharge opening1cof the developer supply container1is approx. φ2 mm (pin hole), and therefore, the diameter of the developer receiving port11ais approx. φ3 mm. As shown inFIG.40, the developer receiving portion11is urged downwardly by an urging member12. When the developer receiving portion11moves upwardly, it has to move against an urging force of the urging member12. Below the developer receiving apparatus8, there is provided a sub-hopper8cfor temporarily storing the developer.
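The diameter guideline discussed above (receiving-port diameter equal to or slightly larger than the discharge-opening diameter, with a difference of substantially 0 to approx. 2 mm) can be expressed as a small check. This is an illustrative sketch only; the function name and the strict numeric bounds are assumptions read from the text, not part of the patent.

```python
# Hypothetical helper: is a receiving-port diameter acceptable for a given
# discharge-opening diameter under the 0 to approx. 2 mm guideline?
def port_diameter_acceptable(discharge_mm: float, receiving_mm: float) -> bool:
    diff = receiving_mm - discharge_mm
    # A negative difference means the port is smaller than the opening, which
    # deposits developer on the port edge; too large a difference enlarges the
    # contaminated area around the discharge opening.
    return 0.0 <= diff <= 2.0

# The example dimensions in the text: opening approx. 2 mm, port approx. 3 mm.
assert port_diameter_acceptable(2.0, 3.0)
# A smaller port, or a much larger one, would violate the guideline.
assert not port_diameter_acceptable(2.0, 1.5)
assert not port_diameter_acceptable(2.0, 4.5)
```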
As shown inFIG.40, in the sub-hopper8c, there are provided a feeding screw14for feeding the developer into the developer hopper portion201a(FIG.36) which is a part of the developing device201, and an opening8dwhich is in fluid communication with the developer hopper portion201a. The developer receiving port11ais closed so as to prevent foreign matter and/or dust from entering the sub-hopper8cin a state that the developer supply container1is not mounted. More specifically, the developer receiving port11ais closed by a main assembly shutter15in the state that the developer receiving portion11is in its lowered position. The developer receiving portion11moves upwardly (arrow E) from the position shown inFIG.43toward the developer supply container1with the mounting operation of the developer supply container1. By this, the developer receiving port11aand the main assembly shutter15are spaced from each other to unseal the developer receiving port11a. With this open state, the developer is discharged from the developer supply container1through the discharge opening1c, so that the developer received by the developer receiving port11ais movable to the sub-hopper8c. A side surface of the developer receiving portion11is provided with an engaging portion11b(FIGS.4,19). The engaging portion11bis directly engaged with the engaging portions3b2,3b4(FIGS.8and20) provided on the developer supply container1which will be described hereinafter, and is guided thereby so that the developer receiving portion11is raised toward the developer supply container1. As shown inFIG.38, the mounting portion8fof the developer receiving apparatus8is provided with a positioning guide (holding member)81having an L-like shape to fix the position of the developer supply container1. The mounting portion8fof the developer receiving apparatus8is provided with an insertion guide8efor guiding the developer supply container1in the mounting and demounting direction.
By the positioning guide81and the insertion guide8e, the mounting direction of the developer supply container1is determined as being the direction of an arrow A. The dismounting direction of the developer supply container1is the opposite (arrow B) to the direction of the arrow A. The developer receiving apparatus8is provided with a driving gear9(FIG.39) functioning as a driving mechanism for driving the developer supply container1and is provided with a locking member10(FIG.38). The locking member10is locked with a locking portion18(FIG.44) functioning as a drive inputting portion of the developer supply container1when the developer supply container1is mounted to the mounting portion8fof the developer receiving apparatus8. As shown inFIG.38, the locking member10is loosely fitted in an elongate hole portion8gformed in the mounting portion8fof the developer receiving apparatus8, and is movable relative to the mounting portion8fin the up and down directions in the Figure. The locking member10has a round bar configuration and is provided at the free end with a tapered portion10din consideration of easy insertion into a locking portion18(FIG.44) of the developer supply container1which will be described hereinafter. The locking portion10a(engaging portion engageable with the locking portion18) of the locking member10is connected with a rail portion10bshown inFIG.39. The sides of the rail portion10bare held by a guide portion8jof the developer receiving apparatus8, and the rail portion10bis movable in the up and down direction in the Figure. The rail portion10bis provided with a gear portion10cwhich is engaged with a driving gear9. The driving gear9is connected with a driving motor500. By a control device600effecting such a control that the rotational moving direction of the driving motor500provided in the image forming apparatus100is periodically reversed, the locking member10reciprocates in the up and down directions in the Figure along the elongated hole8g.
(Developer Supply Control of Developer Receiving Apparatus) Referring toFIGS.41and42, a developer supply control by the developer receiving apparatus8will be described.FIG.41is a block diagram illustrating the function and the structure of the control device600, andFIG.42is a flow chart illustrating a flow of the supplying operation. In this example, an amount of the developer temporarily accumulated in the hopper8c(height of the developer level) is limited so that the developer does not flow reversely into the developer supply container1from the developer receiving apparatus8by the sucking operation of the developer supply container1which will be described hereinafter. For this purpose, in this example, a developer sensor8k(FIG.40) is provided to detect the amount of the developer accommodated in the hopper8c. As shown inFIG.41, the control device600controls the operation/non-operation of the driving motor500in accordance with an output of the developer sensor8kso that the developer is not accommodated in the hopper8cbeyond a predetermined amount. The control flow will be described. First, as shown inFIG.42, the developer sensor8kchecks the accommodated developer amount in the hopper8c. When the accommodated developer amount detected by the developer sensor8kis discriminated as being less than a predetermined amount, that is, when no developer is detected by the developer sensor8k, the driving motor500is actuated to execute a developer supplying operation for a predetermined time period (S101). When the accommodated developer amount detected by the developer sensor8kis discriminated as having reached the predetermined amount, that is, when the developer is detected by the developer sensor8k, as a result of the developer supplying operation, the driving motor500is deactuated to stop the developer supplying operation (S102). By the stop of the supplying operation, a series of developer supplying steps is completed.
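The S101/S102 decision just described can be sketched as one step of a control loop. A minimal sketch under assumed names: the boolean sensor output and the return convention are illustrative, not from the patent text.

```python
def supply_control_step(developer_detected: bool) -> bool:
    """One pass of the FIG.42 supply-control flow by control device 600.

    developer_detected: output of developer sensor 8k; True means the
    accommodated amount in hopper 8c has reached the predetermined amount.
    Returns the commanded state of driving motor 500:
    True  -> actuate the motor, execute a supplying operation (S101);
    False -> deactuate the motor, stop the supplying operation (S102).
    """
    if not developer_detected:
        return True   # S101: less than the predetermined amount, so supply
    return False      # S102: developer detected, so stop supplying
```

The apparatus would invoke such a step repeatedly as developer is consumed, which matches the repetition described next.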
Such developer supplying steps are carried out repeatedly whenever the accommodated developer amount in the hopper8cbecomes less than a predetermined amount as a result of consumption of the developer by the image forming operations. In this example, the developer discharged from the developer supply container1is stored temporarily in the hopper8c, and then is supplied into the developing device, but the following structure of the developer receiving apparatus can be employed. Particularly in the case of a low speed image forming apparatus100, the main assembly is required to be compact and low in cost. In such a case, it is desirable that the developer is supplied directly to the developing device201, as shown inFIG.43. More particularly, the above-described hopper8cis omitted, and the developer is supplied directly into the developing device201from the developer supply container1.FIG.43shows an example using a two-component type developing device201as the developer receiving apparatus. The developing device201comprises a stirring chamber into which the developer is supplied, and a developer chamber for supplying the developer to the developing roller201f, wherein the stirring chamber and the developer chamber are provided with screws201drotatable in such directions that the developer is fed in the opposite directions from each other. The stirring chamber and the developer chamber are communicated with each other at the opposite longitudinal end portions, and the two-component developer is circulated through the two chambers. The stirring chamber is provided with a magnetometric sensor201gfor detecting a toner content of the developer, and on the basis of the detection result of the magnetometric sensor201g, the control device600controls the operation of the driving motor500. In such a case, the developer supplied from the developer supply container is non-magnetic toner or non-magnetic toner plus magnetic carrier.
The developer receiving portion is not illustrated inFIG.43, but in the case where the hopper8cis omitted, and the developer is supplied directly to the developing device201from the developer supply container1, the developer receiving portion11is provided in the developing device201. The arrangement of the developer receiving portion11in the developing device201may be properly determined. In this example, as will be described hereinafter, the developer in the developer supply container1is hardly discharged through the discharge opening1conly by gravitation, but the developer is discharged by a discharging operation of a pump portion5, and therefore, variation in the discharge amount can be suppressed. Therefore, the developer supply container1which will be described hereinafter is usable for the example ofFIG.43lacking the hopper8c. (Developer Supply Container) Referring toFIGS.44and45, the developer supply container1according to this embodiment will be described.FIG.44is a schematic perspective view of the developer supply container1.FIG.45is a schematic sectional view of the developer supply container1. As shown inFIG.44, the developer supply container1has a container body1a(developer discharging chamber) functioning as a developer accommodating portion for accommodating the developer. Designated by1binFIG.45is a developer accommodating space in which the developer is accommodated in the container body1a. In this example, the developer accommodating space1bfunctioning as the developer accommodating portion is the space in the container body1aplus an inside space in the pump portion5. In this example, the developer accommodating space1baccommodates toner which is dry powder having a volume average particle size of 5 μm-6 μm. In this example, the pump portion is a displacement type pump portion5in which the volume changes.
More particularly, the pump portion5has a bellow-like expansion-and-contraction portion5a(bellow portion, expansion-and-contraction member) which can be contracted and expanded by a driving force received from the developer receiving apparatus8. As shown inFIGS.44and45, the bellow-like pump portion5of this example is folded to provide crests and bottoms which are provided alternately and periodically, and is contractable and expandable. When the bellow-like pump portion5is used as in this example, a variation in the volume change amount relative to the amount of expansion and contraction can be reduced, and therefore, a stable volume change can be accomplished. In this embodiment, the entire volume of the developer accommodating space1bis 480 cm³, of which the volume of the pump portion5is 160 cm³ (in the free state of the expansion-and-contraction portion5a), and in this example, the pumping operation is effected in the expansion direction of the pump portion5from the length in the free state. The volume change amount by the expansion and contraction of the expansion-and-contraction portion5aof the pump portion5is 15 cm³, and the total volume at the time of maximum expansion of the pump portion5is 495 cm³. The developer supply container1is filled with 240 g of developer. The driving motor500for driving the locking member10shown inFIG.43is controlled by the control device600to provide a volume change speed of 90 cm³/s. The volume change amount and the volume change speed may be properly selected in consideration of a required discharge amount of the developer receiving apparatus8. The pump portion5in this example is a bellow-like pump, but another pump is usable if the air amount (pressure) in the developer accommodating space1bcan be changed. For example, the pump portion5may be a single-shaft eccentric screw pump.
In this case, an opening for suction and discharging of the single-shaft eccentric screw pump is required, and such an opening requires an additional filter or the like in addition to the above-described filter, in order to prevent the leakage of the developer therethrough. In addition, a single-shaft eccentric screw pump requires a very high torque to operate, and therefore, the load to the main assembly100of the image forming apparatus increases. Therefore, the bellow-like pump is preferable since it is free of such problems. The developer accommodating space1bmay be only the inside space of the pump portion5. In such a case, the pump portion5functions simultaneously as the developer accommodating space1b. A connecting portion5bof the pump portion5and the connected portion1iof the container body1aare unified by welding to prevent leakage of the developer, that is, to keep the hermetical property of the developer accommodating space1b. The developer supply container1is provided with a locking portion18as a drive inputting portion (driving force receiving portion, drive connecting portion, engaging portion) which is engageable with the driving mechanism of the developer receiving apparatus8and which receives a driving force for driving the pump portion5from the driving mechanism. More particularly, the locking portion18engageable with the locking member10of the developer receiving apparatus8is mounted to an upper end of the pump portion5. The locking portion18is provided with a locking hole18ain the center portion as shown inFIG.44. When the developer supply container1is mounted to the mounting portion8f(FIG.38), the locking member10is inserted into the locking hole18a, so that they are unified (slight play is provided for easy insertion). As shown inFIG.44, the relative position between the locking portion18and the locking member10is fixed in the arrow p direction and the arrow q direction, which are the expansion and contraction directions of the expansion-and-contraction portion5a.
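Returning to the pump-volume figures quoted earlier, a little arithmetic confirms they are mutually consistent. The values are from the text; the cycle-time relation at the end is an inference from the stated volume change speed, not a figure given by the patent.

```python
# Figures quoted for this embodiment.
total_volume_free = 480.0  # cm^3: developer accommodating space 1b, pump in free state
volume_change = 15.0       # cm^3: stroke of the expansion-and-contraction portion 5a
change_speed = 90.0        # cm^3/s: volume change speed set via driving motor 500

# Total volume at maximum expansion of the pump portion 5.
total_volume_max = total_volume_free + volume_change
assert total_volume_max == 495.0  # matches the stated maximum-expansion volume

# One stroke, and one full expand-contract cycle (an inference, not a quoted figure).
stroke_time = volume_change / change_speed
cycle_time = 2.0 * stroke_time
# cycle_time is about 0.33 s, of the same order as the approx. 0.3 sec cyclic
# period quoted later for the reciprocation between 480 and 495 cm^3.
assert abs(cycle_time - 1.0 / 3.0) < 1e-12
```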
It is preferable that the pump portion5and the locking portion18are molded integrally using an injection molding method or a blow molding method. The locking portion18unified substantially with the locking member10in this manner receives a driving force for expanding and contracting the expansion-and-contraction portion5aof the pump portion5from the locking member10. As a result, with the vertical movement of the locking member10, the expansion-and-contraction portion5aof the pump portion5is expanded and contracted. The pump portion5functions as an air flow generating mechanism for producing alternately and repeatedly the air flow into the developer supply container and the air flow to the outside of the developer supply container through the discharge opening1cby the driving force received by the locking portion18functioning as the drive inputting portion. In this embodiment, use is made of the round bar locking member10and the round hole locking portion18to substantially unify them, but another structure is usable if the relative position therebetween can be fixed with respect to the expansion and contraction direction (arrow p direction and arrow q direction) of the expansion-and-contraction portion5a. For example, the locking portion18may be a rod-like member, and the locking member10may be a locking hole; the cross-sectional configurations of the locking portion18and the locking member10may be triangular, rectangular or another polygonal shape, or may be an ellipse, a star shape or another shape. Or, another known locking structure is usable. The bottom end portion of the container body1ais provided with an upper flange portion1gconstituting a flange held by the developer receiving apparatus8so as to be non-rotatable. The upper flange portion1gis provided with a discharge opening1cfor permitting discharging of the developer to the outside of the developer supply container1from the developer accommodating space1b.
The discharge opening1cwill be described in detail hereinafter. As shown inFIG.45, an inclined surface1fis formed toward the discharge opening1cin a lower portion of the container body1a, so that the developer accommodated in the developer accommodating space1bslides down on the inclined surface1fby gravity toward a neighborhood of the discharge opening1c. In this embodiment, the inclination angle of the inclined surface1f(angle relative to a horizontal surface in the state that the developer supply container1is set in the developer receiving apparatus8) is larger than an angle of rest of the toner (developer). As for the configuration of the peripheral portion of the discharge opening1c, as shown inFIG.45, the configuration of the connecting portion between the discharge opening1cand the inside of the container body1amay be flat (1W inFIG.45), or as shown inFIG.46, the discharge opening1cmay be connected with the inclined surface1f. The flat configuration shown inFIG.45provides high space efficiency in the direction of the height of the developer supply container1, and the configuration connecting with the inclined surface1fshown inFIG.46provides the reduction of the remaining developer because the developer remaining on the inclined surface1ffalls to the discharge opening1c. As described above, the configuration of the peripheral portion of the discharge opening1cmay be selected properly depending on the situation. In this embodiment, the flat configuration shown inFIG.45is used. The developer supply container1is in fluid communication with the outside of the developer supply container1only through the discharge opening1c, and is sealed substantially except for the discharge opening1c. Referring toFIGS.38and45, a shutter mechanism for opening and closing the discharge opening1cwill be described.
An opening seal (sealing member)3a5of an elastic material is fixed by bonding to a lower surface of the upper flange portion1gso as to surround the circumference of the discharge opening1cto prevent developer leakage. The opening seal3a5is provided with a circular discharge opening (opening)3a4for discharging the developer into the developer receiving apparatus8similarly to the above-described embodiments. There is provided a shutter4for sealing the discharge opening3a4(discharge opening1c) so that the opening seal3a5is compressed between the shutter4and the lower surface of the upper flange portion1g. In this manner, the opening seal3a5is stuck on the lower surface of the upper flange portion1g, and is nipped by the upper flange portion1gand the shutter4which will be described hereinafter. In this example, the discharge opening3a4is provided on the opening seal3a5, which is unintegral with the upper flange portion1g, but the discharge opening3a4may be provided directly on the upper flange portion1g(discharge opening1c). Also in this case, in order to prevent the leakage of the developer, it is desired to nip the opening seal3a5by the upper flange portion1gand the shutter4. Below the upper flange portion1g, a lower flange portion3bconstituting a flange is mounted through the shutter4. The lower flange portion3bincludes engaging portions3b2,3b4engageable with the developer receiving portion11(FIG.4) similarly to the lower flange shown inFIG.8orFIG.20. The structure of the lower flange portion3bhaving the engaging portions3b2and3b4is similar to the above-described embodiments, and the description thereof is omitted. The shutter4is provided with a stopper portion (holding portion) held by a shutter stopper portion of the developer receiving apparatus8so that the developer supply container1is movable relative to the shutter4, similarly to the shutter shown inFIG.9orFIG.21.
The structure of the shutter4having the stopper portion (holding portion) is similar to that of the above-described embodiments, and the description thereof is omitted. The shutter4is fixed to the developer receiving apparatus8by the stopper portion engaging with the shutter stopper portion formed on the developer receiving apparatus8, with the operation of mounting the developer supply container1. Then, the developer supply container1starts the relative movement relative to the fixed shutter4. At this time, similarly to the above-described embodiments, the engaging portion3b2of the developer supply container1is first engaged directly with the engaging portion11bof the developer receiving portion11to move the developer receiving portion11upwardly. By this, the developer receiving portion11is close-contacted to the developer supply container1(or the shutter opening4fof the shutter4), and the developer receiving port11aof the developer receiving portion11is unsealed. Thereafter, the engaging portion3b4of the developer supply container1is engaged directly with the engaging portion11bof the developer receiving portion11, and the developer supply container1moves relative to the shutter4while maintaining the above-described close-contact state, with the mounting operation. By this, the shutter4is unsealed, and the discharge opening1cof the developer supply container1and the developer receiving port11aof the developer receiving portion11are aligned with each other. At this time, the upper flange portion1gof the developer supply container1is guided by the positioning guide81of the developer receiving apparatus8so that a side surface1k(FIG.44) of the developer supply container1abuts to the stopper portion8iof the developer receiving apparatus8. As a result, the position of the developer supply container1relative to the developer receiving apparatus8in the mounting direction (A direction) is determined (FIG.52). 
In this manner, the upper flange portion1gof the developer supply container1is guided by the positioning guide81, and at the time when the inserting operation of the developer supply container1is completed, the discharge opening1cof the developer supply container1and the developer receiving port11aof the developer receiving portion11are aligned with each other. At the time when the inserting operation of the developer supply container1is completed, the opening seal3a5(FIG.52) seals between the discharge opening1cand the developer receiving port11ato prevent leakage of the developer to the outside. With the inserting operation of the developer supply container1, the locking member10is inserted into the locking hole18aof the locking portion18of the developer supply container1so that they are unified. At this time, the position of the developer supply container1relative to the developer receiving apparatus8is determined by the L-shaped portion of the positioning guide81in the direction (up and down direction inFIG.38) perpendicular to the mounting direction (A direction). The flange portion1gas the positioning portion also functions to prevent movement of the developer supply container1in the up and down direction (reciprocating direction of the pump portion5). The operations up to here are the series of mounting steps for the developer supply container1. By the operator closing the front cover40, the mounting step is finished. The steps for dismounting the developer supply container1from the developer receiving apparatus8are the reverse of those in the mounting step. More specifically, the steps described as the mounting operation and the dismounting operation of the developer supply container1in the above-described embodiments apply.
More specifically, the steps described in conjunction withFIGS.13-17in Embodiment 1, or the steps described in conjunction withFIGS.26-29in Embodiment 2, apply here. In this example, the state (decompressed state, negative pressure state) in which the internal pressure of the container body1a(developer accommodating space1b) is lower than the ambient pressure (external air pressure) and the state (compressed state, positive pressure state) in which the internal pressure is higher than the ambient pressure are alternately repeated at a predetermined cyclic period. Here, the ambient pressure (external air pressure) is the pressure under the ambient condition in which the developer supply container1is placed. Thus, the developer is discharged through the discharge opening1cby changing the pressure (internal pressure) of the container body1a. In this example, the volume is changed (reciprocated) between 480-495 cm³ at a cyclic period of 0.3 sec. The material of the container body1ais preferably such that it provides enough rigidity to avoid collapse or extreme expansion. In view of this, this example employs polystyrene resin material as the material of the container body1aand polypropylene resin material as the material of the pump portion5. As for the material of the container body1a, other resin materials such as ABS (acrylonitrile-butadiene-styrene copolymer resin material), polyester, polyethylene and polypropylene, for example, are usable if they have enough durability against the pressure; alternatively, metal is usable. As for the material of the pump portion5, any material is usable if it is expansible and contractable enough to change the internal pressure of the developer accommodating space1bby the volume change; examples include thinly formed ABS (acrylonitrile-butadiene-styrene copolymer resin material), polystyrene, polyester and polyethylene materials.
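As an illustrative arithmetic check (not part of the disclosure), the stroke volume and cycle frequency implied by the figures quoted above (a 480-495 cm³ swing at a 0.3 sec cyclic period) can be computed directly:

```python
# Pump reciprocation figures quoted in the text
v_min_cm3 = 480.0  # contracted volume of the developer accommodating space
v_max_cm3 = 495.0  # expanded volume
period_s = 0.3     # one expansion-and-contraction cycle

stroke_cm3 = v_max_cm3 - v_min_cm3  # volume change per cycle
frequency_hz = 1.0 / period_s       # cycles per second

print(stroke_cm3)                # 15.0 cm^3 per stroke
print(round(frequency_hz, 2))    # about 3.33 cycles per second
```

The 15 cm³ stroke volume matches the volume change used in the internal-pressure verification experiments described later in this example.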
Alternatively, other expandable-and-contractable materials such as rubber are usable. The pump portion and the container body may be integrally molded of the same material through an injection molding method, a blow molding method or the like if the thicknesses are properly adjusted for the pump portion5and the container body1a. In this example, the developer supply container1is in fluid communication with the outside only through the discharge opening1c, and therefore, it is substantially sealed from the outside except for the discharge opening1c. That is, the developer is discharged through the discharge opening1cby compressing and decompressing the inside of the developer supply container1by the pump portion5, and therefore, the hermetical property is desired in order to maintain the stabilized discharging performance. On the other hand, there is a liability that during transportation (air transportation) of the developer supply container1and/or in a long term unused period, the internal pressure of the container may abruptly change due to abrupt variation of the ambient conditions. For example, when the apparatus is used in a region having a high altitude, or when the developer supply container1kept in a low ambient temperature place is transferred to a high ambient temperature room, the inside of the developer supply container1may be pressurized as compared with the ambient air pressure. In such a case, the container may deform, and/or the developer may splash when the container is unsealed. In view of this, in this example, the developer supply container1is provided with an opening of a diameter φ 3 mm, and the opening is provided with a filter. The filter is TEMISH (registered trademark) available from Nitto Denko Kabushiki Kaisha, Japan, which has a property of preventing developer leakage to the outside while permitting air passage between the inside and outside of the container.
Here, in this example, even though such an opening is provided as a countermeasure, its influence on the sucking operation and the discharging operation through the discharge opening1cby the pump portion5can be ignored, and therefore, the hermetical property of the developer supply container1is kept in effect.

(Discharge Opening of Developer Supply Container)

In this example, the size of the discharge opening1cof the developer supply container1is so selected that, in the orientation of the developer supply container1for supplying the developer into the developer receiving apparatus8, the developer is not discharged to a sufficient extent by the gravitation alone. The opening size of the discharge opening1cis so small that the discharging of the developer from the developer supply container is insufficient by the gravitation alone, and therefore, the opening is called a pin hole hereinafter. In other words, the size of the opening is determined such that the discharge opening1cis substantially clogged. This is expectedly advantageous in the following points:
1) the developer does not easily leak through the discharge opening1c;
2) excessive discharging of the developer at the time of opening of the discharge opening1ccan be suppressed; and
3) the discharging of the developer can rely dominantly on the discharging operation by the pump portion.
The inventors have investigated the size of the discharge opening1cthat is not enough to discharge the toner to a sufficient extent by the gravitation alone. The verification experiment (measuring method) and criteria will be described. A rectangular parallelepiped container of a predetermined volume in which a discharge opening (circular) is formed at the center portion of the bottom portion is prepared, and is filled with 200 g of developer; then, the filling port is sealed, and the discharge opening is plugged; in this state, the container is shaken enough to loosen the developer.
The rectangular parallelepiped container has a volume of 1000 cm³, being 90 mm in length, 92 mm in width and 120 mm in height. Thereafter, as soon as possible, the discharge opening is unsealed in the state that the discharge opening is directed downwardly, and the amount of the developer discharged through the discharge opening is measured. At this time, the rectangular parallelepiped container is sealed completely except for the discharge opening. In addition, the verification experiments were carried out under the conditions of a temperature of 24 degree C. and a relative humidity of 55%. By this procedure, the discharge amounts are measured while changing the kind of the developer and the size of the discharge opening. In this example, when the amount of the discharged developer is not more than 2 g, the amount is negligible, and therefore, the size of the discharge opening at that time is deemed as being not enough to discharge the developer sufficiently by the gravitation alone. The developers used in the verification experiment are shown in Table 2. The kinds of the developer are a one component magnetic toner, a non-magnetic toner for a two component developing device, and a mixture of the non-magnetic toner and a magnetic carrier. As for property values indicative of the property of the developer, measurements are made as to the angle of rest indicating the flowability, and the fluidity energy indicating the easiness of loosening of the developer layer, which is measured by a powder flowability analyzing device (Powder Rheometer FT4 available from Freeman Technology).
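As a quick illustrative check (not part of the disclosure), the container dimensions quoted above are consistent with the stated 1000 cm³ volume, and the 200 g fill leaves the container mostly air:

```python
# Dimensions of the verification container quoted in the text
length_mm, width_mm, height_mm = 90.0, 92.0, 120.0
fill_g = 200.0  # developer filled for each trial

volume_cm3 = length_mm * width_mm * height_mm / 1000.0
mean_density = fill_g / volume_cm3  # average fill density over the whole container

print(round(volume_cm3, 1))    # 993.6 cm^3, i.e. roughly the "1000 cm^3" quoted
print(round(mean_density, 2))  # ~0.2 g/cm^3: well below the 0.5 g/cm^3 bulk density
```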
TABLE 2

Developer | Volume average particle size of toner (μm) | Developer component | Angle of rest (deg.) | Fluidity energy (bulk density of 0.5 g/cm³)
A | 7 | Two-component non-magnetic toner | 18 | 2.09 × 10⁻³ J
B | 6.5 | Two-component non-magnetic toner + carrier | 22 | 6.80 × 10⁻⁴ J
C | 7 | One-component magnetic toner | 35 | 4.30 × 10⁻⁴ J
D | 5.5 | Two-component non-magnetic toner + carrier | 40 | 3.51 × 10⁻³ J
E | 5 | Two-component non-magnetic toner + carrier | 27 | 4.14 × 10⁻³ J

Referring toFIG.47, a measuring method for the fluidity energy will be described. Here,FIG.47is a schematic view of a device for measuring the fluidity energy. The principle of the powder flowability analyzing device is that a blade is moved in a powder sample, and the energy required for the blade to move in the powder, that is, the fluidity energy, is measured. The blade is of a propeller type, and when it rotates, it moves in the rotational axis direction simultaneously, and therefore, a free end of the blade moves helically. The propeller type blade51is made of SUS (type=C210), has a diameter of 48 mm, and is twisted smoothly in the counterclockwise direction. More specifically, from the center of the 48 mm × 10 mm blade, a rotation shaft extends in the normal line direction relative to the rotation plane of the blade; the twist angle of the blade at the opposite outermost edge portions (the positions 24 mm from the rotation shaft) is 70°, and the twist angle at the positions 12 mm from the rotation shaft is 35°. The fluidity energy is the total energy provided by integrating with time the total sum of the rotational torque and the vertical load when the helically rotating blade51enters the powder layer and advances in the powder layer. The value thus obtained indicates the easiness of loosening of the developer powder layer: large fluidity energy means less easiness, and small fluidity energy means greater easiness.
In this measurement, as shown inFIG.47, the developer T is filled up to a powder surface level of 70 mm (L2inFIG.47) into the cylindrical container53having a diameter φ of 50 mm (volume=200 cc, L1(FIG.47)=50 mm), which is a standard part of the device. The filling amount is adjusted in accordance with the bulk density of the developer to be measured. The blade51of φ48 mm, which is a standard part, is advanced into the powder layer, and the energy required to advance from a depth of 10 mm to a depth of 30 mm is displayed. The set conditions at the time of measurement are as follows:
The rotational speed of the blade51(tip speed=peripheral speed of the outermost edge portion of the blade) is 60 mm/s;
The blade advancing speed in the vertical direction into the powder layer is such a speed that the angle θ (helix angle) formed between the track of the outermost edge portion of the blade51during advancement and the surface of the powder layer is 10°;
The advancing speed into the powder layer in the perpendicular direction is 11 mm/s (blade advancement speed in the powder layer in the vertical direction=(rotational speed of blade)×tan(helix angle×π/180)); and
The measurement is carried out under the condition of a temperature of 24 degree C. and a relative humidity of 55%.
The bulk density of the developer when the fluidity energy of the developer is measured is made close to that in the experiments for verifying the relation between the discharge amount of the developer and the size of the discharge opening, is less changing and stable, and more particularly is adjusted to be 0.5 g/cm³.
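The relation between the blade tip speed, the helix angle and the vertical advancing speed quoted above can be verified with the text's own formula; this is an illustrative calculation only:

```python
import math

tip_speed_mm_s = 60.0   # peripheral speed of the blade's outermost edge
helix_angle_deg = 10.0  # angle between the blade-tip track and the powder surface

# Vertical advance speed = (rotational speed of blade) x tan(helix angle),
# per the formula given in the text
advance_mm_s = tip_speed_mm_s * math.tan(math.radians(helix_angle_deg))

print(round(advance_mm_s, 2))  # ~10.58 mm/s, i.e. about the 11 mm/s quoted
```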
The verification experiments were carried out for the developers (Table 2) whose fluidity energies were measured in this manner.FIG.48is a graph showing the relations between the diameters of the discharge openings and the discharge amounts with respect to the respective developers. From the verification results shown inFIG.48, it has been confirmed that the discharge amount through the discharge opening is not more than 2 g for each of the developers A-E if the diameter φ of the discharge opening is not more than 4 mm (12.6 mm² in opening area, with π=3.14). When the diameter φ of the discharge opening exceeds 4 mm, the discharge amount increases sharply. Thus, the diameter φ of the discharge opening is preferably not more than 4 mm (12.6 mm² in opening area) when the fluidity energy of the developer (at a bulk density of 0.5 g/cm³) is not less than 4.3×10⁻⁴ kg·m²/s² (J) and not more than 4.14×10⁻³ kg·m²/s² (J). As for the bulk density of the developer, the developer has been loosened and fluidized sufficiently in the verification experiments, and therefore, the bulk density is lower than that expected in the normal use condition (left state); that is, the measurements are carried out in a condition in which the developer is more easily discharged than in the normal use condition. Further verification experiments were carried out as to the developer A, with which the discharge amount is the largest in the results ofFIG.48, wherein the filling amount in the container was changed in the range of 30-300 g while the diameter φ of the discharge opening was kept constant at 4 mm. The verification results are shown in part (b) ofFIG.49. From the results ofFIG.49, it has been confirmed that the discharge amount through the discharge opening hardly changes even if the filling amount of the developer changes.
From the foregoing, it has been confirmed that by making the diameter φ of the discharge opening not more than 4 mm (12.6 mm² in area), the developer is not discharged sufficiently by the gravitation alone through the discharge opening in the state that the discharge opening is directed downwardly (the supposed supplying attitude into the developer receiving apparatus8), irrespective of the kind of the developer or the bulk density state. On the other hand, the lower limit value of the size of the discharge opening1cis preferably such that the developer to be supplied from the developer supply container1(one component magnetic toner, one component non-magnetic toner, two component non-magnetic toner or two component magnetic carrier) can at least pass therethrough. More particularly, the discharge opening is preferably larger than the particle size of the developer (the volume average particle size in the case of toner, the number average particle size in the case of carrier) contained in the developer supply container1. For example, in the case that the supply developer comprises a two component non-magnetic toner and a two component magnetic carrier, it is preferable that the discharge opening is larger than the larger particle size, that is, the number average particle size of the two component magnetic carrier. Specifically, in the case that the supply developer comprises a two component non-magnetic toner having a volume average particle size of 5.5 μm and a two component magnetic carrier having a number average particle size of 40 μm, the diameter of the discharge opening1cis preferably not less than 0.05 mm (0.002 mm² in opening area). If, however, the size of the discharge opening1cis too close to the particle size of the developer, the energy required for discharging a desired amount from the developer supply container1, that is, the energy required for operating the pump portion5, is large.
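The opening areas quoted for the upper and lower limit diameters follow from the area of a circle; the sketch below is an illustrative check of those figures only:

```python
import math

def opening_area_mm2(diameter_mm):
    """Area of a circular discharge opening of the given diameter."""
    return math.pi * (diameter_mm / 2.0) ** 2

# Upper limit: phi 4 mm -> ~12.6 mm^2 (gravity-only discharge stays <= 2 g)
print(round(opening_area_mm2(4.0), 1))   # 12.6
# Lower limit: phi 0.05 mm -> ~0.002 mm^2 (just above the carrier particle size)
print(round(opening_area_mm2(0.05), 3))  # 0.002
```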
In addition, a restriction may be imposed on the manufacturing of the developer supply container1: when the discharge opening1cis formed in a resin material part using an injection molding method, the durability of the metal mold part forming the portion of the discharge opening1chas to be high. From the foregoing, the diameter φ of the discharge opening1cis preferably not less than 0.5 mm. In this example, the configuration of the discharge opening1cis circular, but this is not inevitable. A square, a rectangle, an ellipse or a combination of lines and curves or the like is usable if the opening area is not more than 12.6 mm², which is the opening area corresponding to the diameter of 4 mm. However, a circular discharge opening has the minimum circumferential edge length among the configurations having the same opening area, the edge being the portion contaminated by the deposition of the developer. Therefore, the amount of the developer dispersing with the opening and closing operation of the shutter4is small, and the contamination is decreased. In addition, with the circular discharge opening, the resistance during discharging is also small, and the discharging property is high. Therefore, the configuration of the discharge opening1cis preferably circular, which is excellent in the balance between the discharge amount and the contamination prevention. From the foregoing, the size of the discharge opening1cis preferably such that the developer is not discharged sufficiently by the gravitation alone in the state that the discharge opening1cis directed downwardly (the supposed supplying attitude into the developer receiving apparatus8). More particularly, the diameter φ of the discharge opening1cis not less than 0.05 mm (0.002 mm² in opening area) and not more than 4 mm (12.6 mm² in opening area).
Furthermore, the diameter φ of the discharge opening1cis preferably not less than 0.5 mm (0.2 mm² in opening area) and not more than 4 mm (12.6 mm² in opening area). In this example, on the basis of the foregoing investigation, the discharge opening1cis circular, and the diameter φ of the opening is 2 mm. In this example, the number of discharge openings1cis one, but this is not inevitable; a plurality of discharge openings1cmay be provided if the total of the opening areas satisfies the above-described range. For example, in place of one discharge opening1chaving a diameter φ of 2 mm, two discharge openings1ceach having a diameter φ of 0.7 mm may be employed. However, in this case, the discharge amount of the developer per unit time tends to decrease, and therefore, one discharge opening1chaving a diameter φ of 2 mm is preferable.

(Developer Supplying Step)

Referring toFIGS.50-53, the developer supplying step by the pump portion will be described.FIG.50is a schematic perspective view in which the expansion-and-contraction portion5aof the pump portion5is contracted.FIG.51is a schematic perspective view in which the expansion-and-contraction portion5aof the pump portion5is expanded.FIG.52is a schematic sectional view in which the expansion-and-contraction portion5aof the pump portion5is contracted.FIG.53is a schematic sectional view in which the expansion-and-contraction portion5aof the pump portion5is expanded. In this example, as will be described hereinafter, the drive conversion of the rotational force is carried out by the drive converting mechanism so that the suction step (sucking operation through the discharge opening1c) and the discharging step (discharging operation through the discharge opening1c) are repeated alternately. The suction step and the discharging step will be described. First, the description will be made as to the developer discharging principle using the pump.
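As an illustrative comparison (not part of the disclosure), the total opening areas of the two alternatives above can be computed from the quoted diameters; both totals fall inside the 0.002-12.6 mm² window, but the plural-opening variant has a considerably smaller total area, consistent with the remark that its discharge amount per unit time tends to decrease:

```python
import math

def area_mm2(diameter_mm):
    """Area of a circular opening of the given diameter."""
    return math.pi * (diameter_mm / 2.0) ** 2

single = area_mm2(2.0)       # one phi 2 mm discharge opening
plural = 2 * area_mm2(0.7)   # two phi 0.7 mm discharge openings

print(round(single, 2))  # ~3.14 mm^2
print(round(plural, 2))  # ~0.77 mm^2
```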
The operation principle of the expansion-and-contraction portion5aof the pump portion5is as has been described in the foregoing. Stated briefly, as shown inFIG.45, the lower end of the expansion-and-contraction portion5ais connected to the container body1a. The container body1ais prevented from moving in the arrow p direction and in the arrow q direction (FIG.44) by the positioning guide81of the developer receiving apparatus8through the upper flange portion1g. Therefore, the vertical position of the lower end of the expansion-and-contraction portion5aconnected with the container body1ais fixed relative to the developer receiving apparatus8. On the other hand, the upper end of the expansion-and-contraction portion5ais engaged with the locking member10through the locking portion18, and is reciprocated in the arrow p direction and in the arrow q direction by the vertical movement of the locking member10. Since the lower end of the expansion-and-contraction portion5aof the pump portion5is fixed, the portion thereabove expands and contracts. The description will be made as to the expanding-and-contracting operation (discharging operation and sucking operation) of the expansion-and-contraction portion5aof the pump portion5and the discharging of the developer.

(Discharging Operation)

First, the discharging operation through the discharge opening1cwill be described. With the downward movement of the locking member10, the upper end of the expansion-and-contraction portion5adisplaces in the p direction (contraction of the expansion-and-contraction portion), by which the discharging operation is effected. More particularly, with the discharging operation, the volume of the developer accommodating space1bdecreases.
At this time, the inside of the container body1ais sealed except for the discharge opening1c, and until the developer is discharged, the discharge opening1cis substantially clogged or closed by the developer. Therefore, the volume of the developer accommodating space1bdecreases, so that the internal pressure of the developer accommodating space1bincreases. Then, the internal pressure of the developer accommodating space1bbecomes higher than the pressure in the hopper8c(substantially equivalent to the ambient pressure). Therefore, as shown inFIG.52, the developer T is pushed out by the air pressure due to the pressure difference (the pressure difference relative to the ambient pressure). Thus, the developer T is discharged from the developer accommodating space1binto the hopper8c. The arrow inFIG.52indicates the direction of the force applied to the developer T in the developer accommodating space1b. Thereafter, the air in the developer accommodating space1bis also discharged together with the developer, and therefore, the internal pressure of the developer accommodating space1bdecreases.

(Sucking Operation)

The sucking operation through the discharge opening1cwill be described. With the upward movement of the locking member10, the upper end of the expansion-and-contraction portion5aof the pump portion5displaces in the q direction (the expansion-and-contraction portion expands), so that the sucking operation is effected. More particularly, the volume of the developer accommodating space1bincreases with the sucking operation. At this time, the inside of the container body1ais sealed except for the discharge opening1c, and the discharge opening1cis clogged by the developer and is substantially closed.
Therefore, with the increase of the volume of the developer accommodating space1b, the internal pressure of the developer accommodating space1bdecreases. The internal pressure of the developer accommodating space1bat this time becomes lower than the internal pressure of the hopper8c(substantially equivalent to the ambient pressure). Therefore, as shown inFIG.53, the air in the upper portion of the hopper8centers the developer accommodating space1bthrough the discharge opening1cby the pressure difference between the developer accommodating space1band the hopper8c. The arrow inFIG.53indicates the direction of the force applied to the developer T in the developer accommodating space1b. The ovals Z inFIG.53schematically show the air taken in from the hopper8c. At this time, the air is taken in from the developer receiving apparatus8side, and therefore, the developer in the neighborhood of the discharge opening1ccan be loosened. More particularly, the air impregnated into the developer powder existing in the neighborhood of the discharge opening1creduces the bulk density of the developer powder and fluidizes it. In this manner, by the fluidization of the developer T, the developer T does not pack or clog in the discharge opening1c, so that the developer can be smoothly discharged through the discharge opening1cin the discharging operation. Therefore, the amount of the developer T (per unit time) discharged through the discharge opening1ccan be maintained substantially at a constant level for a long term.
(Change of Internal Pressure of Developer Accommodating Portion)

Verification experiments were carried out as to the change of the internal pressure of the developer supply container1. The verification experiments will be described. The developer accommodating space1bin the developer supply container1is filled with the developer, and the change of the internal pressure of the developer supply container1is measured when the pump portion5is expanded and contracted in the range of 15 cm³ of volume change. The internal pressure of the developer supply container1is measured using a pressure gauge (AP-C40 available from Kabushiki Kaisha KEYENCE) connected with the developer supply container1. FIG.54shows the pressure change when the pump portion5is expanded and contracted in the state that the shutter4of the developer supply container1filled with the developer is open, and therefore, in the state communicatable with the outside air. InFIG.54, the abscissa represents the time, and the ordinate represents the relative pressure in the developer supply container1relative to the ambient pressure (reference (0)) (+ is the positive pressure side, and − is the negative pressure side). When the internal pressure of the developer supply container1becomes negative relative to the outside ambient pressure by the increase of the volume of the developer supply container1, the air is taken in through the discharge opening1cby the pressure difference. When the internal pressure of the developer supply container1becomes positive relative to the outside ambient pressure by the decrease of the volume of the developer supply container1, a pressure is imparted to the inside developer by the pressure difference. At this time, the internal pressure is eased correspondingly to the discharged developer and air.
By the verification experiments, it has been confirmed that by the increase of the volume of the developer supply container1, the internal pressure of the developer supply container1becomes negative relative to the outside ambient pressure, and the air is taken in by the pressure difference. In addition, it has been confirmed that by the decrease of the volume of the developer supply container1, the internal pressure of the developer supply container1becomes positive relative to the outside ambient pressure, and the pressure is imparted to the inside developer so that the developer is discharged. In the verification experiments, the absolute value of the negative pressure was 1.3 kPa, and the absolute value of the positive pressure was 3.0 kPa. As described in the foregoing, with the structure of the developer supply container1of this example, the internal pressure of the developer supply container1switches between the negative pressure and the positive pressure alternately by the sucking operation and the discharging operation of the pump portion5, and the discharging of the developer is carried out properly. As described in the foregoing, in this example, a simple pump capable of effecting the sucking operation and the discharging operation is provided for the developer supply container1, by which the discharging of the developer by the air can be carried out stably while providing the developer loosening effect by the air. In other words, with the structure of this example, even when the size of the discharge opening1cis extremely small, a high discharging performance can be assured without imparting great stress to the developer, since the developer can be passed through the discharge opening1cin the state that the bulk density is small because of the fluidization.
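The measured pressure peaks (about −1.3 kPa and +3.0 kPa) are of the order one would expect from a simple isothermal ideal-gas estimate for a 15 cm³ stroke on a roughly 480 cm³ air space. The sketch below is only an order-of-magnitude check under assumed conditions (101.3 kPa ambient, a fully sealed air space, no developer or air outflow during the stroke); it is not a reproduction of the experiment:

```python
ambient_kpa = 101.3  # assumed atmospheric pressure
v_cm3 = 480.0        # approximate volume of the air space (contracted state)
dv_cm3 = 15.0        # pump stroke volume (495 - 480 cm^3)

# Isothermal ideal gas, P1*V1 = P2*V2: relative pressure after the stroke
p_compressed = ambient_kpa * v_cm3 / (v_cm3 - dv_cm3) - ambient_kpa
p_expanded = ambient_kpa * v_cm3 / (v_cm3 + dv_cm3) - ambient_kpa

print(round(p_compressed, 2))  # ~ +3.27 kPa, near the measured +3.0 kPa
print(round(p_expanded, 2))    # ~ -3.07 kPa
```

The measured negative peak (−1.3 kPa) is smaller in magnitude than this sealed-space estimate, which is consistent with the text's observation that air is taken in through the discharge opening1cduring expansion, easing the negative pressure.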
In addition, in this example, the inside of the displacement type pump portion5is utilized as a developer accommodating space, and therefore, when the internal pressure is reduced by increasing the volume of the pump portion5, an additional developer accommodating space can be formed. Therefore, even when the inside of the pump portion5is filled with the developer, the bulk density can be decreased (the developer can be fluidized) by impregnating the air into the developer powder. Therefore, the developer can be filled in the developer supply container1with a higher density than in the conventional art. In the foregoing, the inside space of the pump portion5is used as a part of the developer accommodating space1b, but in an alternative, a filter which permits passage of the air but prevents passage of the toner may be provided to partition between the pump portion5and the developer accommodating space1b. However, the embodiment described above is preferable in that when the volume of the pump portion5increases, an additional developer accommodating space can be provided.

(Developer Loosening Effect in Suction Step)

Verification has been carried out as to the developer loosening effect by the sucking operation through the discharge opening1cin the suction step. When the developer loosening effect by the sucking operation through the discharge opening1cis significant, a low discharge pressure (a small volume change of the pump) is enough, in the subsequent discharging step, to start immediately the discharging of the developer from the developer supply container1. This verification is to demonstrate the remarkable enhancement of the developer loosening effect with the structure of this example. This will be described in detail. Part (a) ofFIG.55and part (a) ofFIG.56are block diagrams schematically showing the structures of the developer supplying systems used in the verification experiment.
Part (b) ofFIG.55and part (b) ofFIG.56are schematic views showing the phenomena occurring in the developer supply containers. The system ofFIG.55is analogous to this example: a developer supply container C is provided with a developer accommodating portion C1and a pump portion P. By the expanding-and-contracting operation of the pump portion P, the sucking operation and the discharging operation through a discharge opening (corresponding to the discharge opening1cof this example (unshown)) of the developer supply container C are carried out alternately to discharge the developer into a hopper H. On the other hand, the system ofFIG.56is a comparison example wherein the pump portion P is provided on the developer receiving apparatus side, and by the expanding-and-contracting operation of the pump portion P, an air-supply operation into the developer accommodating portion C1and a sucking operation from the developer accommodating portion C1are carried out alternately to discharge the developer into a hopper H. InFIGS.55and56, the developer accommodating portions C1have the same internal volumes, the hoppers H have the same internal volumes, and the pump portions P have the same internal volumes (volume change amounts). First, 200 g of the developer is filled into the developer supply container C. Then, the developer supply container C is shaken for 15 minutes in view of the state after transportation, and thereafter, it is connected to the hopper H. The pump portion P is operated, and the peak value of the internal pressure in the sucking operation is measured as the condition of the suction step required for starting the developer discharging immediately in the discharging step.
In the case ofFIG.55, the start position of the operation of the pump portion P corresponds to 480 cm³ of the volume of the developer accommodating portion C1, and in the case ofFIG.56, the start position of the operation of the pump portion P corresponds to 480 cm³ of the volume of the hopper H. In the experiments with the structure ofFIG.56, the hopper H is filled with 200 g of the developer beforehand to make the conditions of the air volume the same as with the structure ofFIG.55. The internal pressures of the developer accommodating portion C1and the hopper H are measured by the pressure gauge (AP-C40 available from Kabushiki Kaisha KEYENCE) connected to the developer accommodating portion C1. As a result of the verification, according to the system analogous to this example shown inFIG.55, if the absolute value of the peak value (negative pressure) of the internal pressure at the time of the sucking operation is at least 1.0 kPa, the developer discharging can be immediately started in the subsequent discharging step. In the comparison example system shown inFIG.56, on the other hand, unless the absolute value of the peak value (positive pressure) of the internal pressure at the time of the air-supply operation is at least 1.7 kPa, the developer discharging cannot be immediately started in the subsequent discharging step. It has been confirmed that with the system ofFIG.55similar to this example, the suction is carried out with the volume increase of the pump portion P, and therefore, the internal pressure of the developer supply container C can be made lower (on the negative pressure side) than the ambient pressure (the pressure outside the container), so that the developer loosening effect is remarkably high.
This is because, as shown in part (b) ofFIG.55, the volume increase of the developer accommodating portion C1with the expansion of the pump portion P provides a pressure reduction state (relative to the ambient pressure) in the air layer R above the developer layer T. For this reason, forces are applied in the directions to increase the volume of the developer layer T due to the decompression (wave line arrows), and therefore, the developer layer can be loosened efficiently. Furthermore, in the system ofFIG.55, the air is taken in from the outside into the developer supply container C by the decompression (white arrow), and the developer layer T is loosened also when the air reaches the air layer R, and therefore, it is a very good system. As a proof of the loosening of the developer in the developer supply container C in the experiments, it has been confirmed that in the sucking operation, the apparent volume of the whole developer increases (the level of the developer rises). In the case of the system of the comparison example shown inFIG.56, the internal pressure of the developer supply container C is raised by the air-supply operation to the developer supply container C up to a positive pressure (higher than the ambient pressure), and therefore, the developer is agglomerated, and the developer loosening effect is not obtained. This is because, as shown in part (b) ofFIG.56, the air is fed forcedly from the outside of the developer supply container C, and therefore, the pressure of the air layer R above the developer layer T becomes positive relative to the ambient pressure. For this reason, forces are applied in the directions to decrease the volume of the developer layer T due to the pressure (wave line arrows), and therefore, the developer layer T is packed. Actually, it has been confirmed in this comparison example that the apparent volume of the whole developer in the developer supply container C decreases upon the air-supply operation.
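The acceptance criterion of the verification experiment can be summarized in a short sketch. This is illustrative only and not part of the patent: the 1.0 kPa and 1.7 kPa thresholds are the measured values quoted above, and the function name and system labels are hypothetical.

```python
# Sketch of the experiment's acceptance check: does the measured peak
# internal pressure suffice for the developer discharge to begin
# immediately in the subsequent discharging step?
SUCTION_THRESHOLD_KPA = 1.0      # |negative peak| needed (FIG. 55 system)
AIR_SUPPLY_THRESHOLD_KPA = 1.7   # positive peak needed (FIG. 56 comparison system)

def discharge_starts_immediately(peak_pressure_kpa: float, system: str) -> bool:
    """Return True if the peak internal pressure meets the measured threshold."""
    if system == "suction":        # pump on the container side (FIG. 55)
        return peak_pressure_kpa <= -SUCTION_THRESHOLD_KPA
    if system == "air_supply":     # pump on the receiving-apparatus side (FIG. 56)
        return peak_pressure_kpa >= AIR_SUPPLY_THRESHOLD_KPA
    raise ValueError(f"unknown system: {system}")

print(discharge_starts_immediately(-1.2, "suction"))    # True: -1.2 kPa suffices
print(discharge_starts_immediately(1.5, "air_supply"))  # False: below 1.7 kPa
```

The asymmetry of the two thresholds is the point of the comparison: the suction-type system of FIG. 55 reaches a usable state at a smaller pressure magnitude than the air-supply comparison system.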
Accordingly, with the system ofFIG.56, there is a liability that the packing of the developer layer T disables the subsequent proper developer discharging step. In order to prevent the packing of the developer layer T by the pressure of the air layer R, it would be considered that an air vent with a filter or the like is provided at a position corresponding to the air layer R, thereby reducing the pressure rise. However, in such a case, the flow resistance of the filter or the like still leads to a pressure rise of the air layer R. Even if the pressure rise were eliminated, the loosening effect by the pressure reduction state of the air layer R described above cannot be provided. From the foregoing, the significance of the sucking operation through the discharge opening, effected by the volume increase of the pump portion in the system of this example, has been confirmed. As described above, by the repeated alternate sucking operation and discharging operation of the pump portion2, the developer can be discharged through the discharge opening1cof the developer supply container1. That is, in this example, the discharging operation and the sucking operation are not carried out in parallel or simultaneously, but are alternately repeated, and therefore, the energy required for the discharging of the developer can be minimized. On the other hand, in the case that the developer receiving apparatus includes an air-supply pump and a suction pump separately, it is necessary to control the operations of the two pumps, and in addition, it is not easy to rapidly switch the air-supply and the suction alternately. In this example, one pump is effective to efficiently discharge the developer, and therefore, the structure of the developer discharging mechanism can be simplified.
In the foregoing, the discharging operation and the sucking operation of the pump are repeated alternately to efficiently discharge the developer, but in an alternative structure, the discharging operation or the sucking operation may be temporarily stopped and then resumed. For example, the discharging operation of the pump need not be effected monotonically; the compressing operation may be once stopped partway and then resumed to complete the discharge. The same applies to the sucking operation. Each operation may be made in a multi-stage form as long as the discharge amount and the discharging speed are sufficient. It is still necessary that after the multi-stage discharging operation, the sucking operation is effected, and they are repeated. In this example, the internal pressure of the developer accommodating space1bis reduced to take the air in through the discharge opening1cto loosen the developer. On the other hand, in the above-described conventional example, the developer is loosened by feeding the air into the developer accommodating space1bfrom the outside of the developer supply container1, but at this time, the internal pressure of the developer accommodating space1bis in a compressed state, with the result of agglomeration of the developer. This example is preferable since the developer is loosened in the pressure reduced state in which the developer is not easily agglomerated. Furthermore, also according to this example, the mechanism for connecting and separating the developer receiving portion11relative to the developer supply container1by displacing the developer receiving portion11can be simplified, similarly to Embodiments 1 and 2. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or an increase in cost due to an increase of the number of parts can be avoided.
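The alternating, optionally multi-stage stroke sequence described above can be sketched as a simple generator. This is a minimal illustration under the assumptions that one pump expansion corresponds to one suction and that the discharge stroke may be split into stages; the names are hypothetical and not part of the patent.

```python
# Illustrative control sequence for a single pump driven through
# repeated expansion (suction) and contraction (discharge) strokes.
# A contraction may be paused partway, i.e. split into several stages,
# but a sucking operation always follows the (multi-stage) discharge.
def pump_cycle(n_cycles: int, discharge_stages: int = 1):
    """Yield the sequence of pump actions for n_cycles full cycles."""
    for _ in range(n_cycles):
        yield "expand"                 # suction: air taken in, developer loosened
        for _ in range(discharge_stages):
            yield "contract-step"      # discharge, possibly in stages

print(list(pump_cycle(2, discharge_stages=2)))
```

Because the two operations share one actuator, they can never run in parallel, which is exactly the property the text credits with minimizing the energy required for discharging.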
In a conventional structure, a large space is required to avoid an interference with the developing device in the upward and downward movement, but according to this example, such a large space is unnecessary, so that upsizing of the image forming apparatus can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 5 Referring toFIGS.57,58, a structure of the Embodiment 5 will be described.FIG.57is a schematic perspective view of a developer supply container1, andFIG.58is a schematic sectional view of the developer supply container1. In this example, the structure of the pump is different from that of Embodiment 4, and the other structures are substantially the same as with Embodiment 4. In the description of this embodiment, the same reference numerals as in Embodiment 4 are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. In this example, as shown inFIGS.57,58, a plunger type pump is used in place of the bellow-like displacement type pump of Embodiment 4. More specifically, the plunger type pump of this example includes an inner cylindrical portion1hand an outer cylindrical portion36extending outside the outer surface of the inner cylindrical portion1hand movable relative to the inner cylindrical portion1h. The upper surface of the outer cylindrical portion36is provided with a locking portion18, fixed by bonding similarly to Embodiment 4.
More particularly, the locking portion18fixed to the upper surface of the outer cylindrical portion36receives a locking member10of the developer receiving apparatus8, by which they are substantially unified, so that the outer cylindrical portion36can move in the up and down directions (reciprocation) together with the locking member10. The inner cylindrical portion1his connected with the container body1a, and the inside space thereof functions as a developer accommodating space1b. In order to prevent leakage of the air through a gap between the inner cylindrical portion1hand the outer cylindrical portion36(to prevent leakage of the developer by keeping the hermetical property), a sealing member (elastic seal37) is fixed by bonding on the outer surface of the inner cylindrical portion1h. The elastic seal37is compressed between the inner cylindrical portion1hand the outer cylindrical portion36. Therefore, by reciprocating the outer cylindrical portion36in the arrow p direction and the arrow q direction relative to the container body1a(inner cylindrical portion1h) fixed non-movably to the developer receiving apparatus8, the volume of the developer accommodating space1bcan be changed (increased and decreased). That is, the internal pressure of the developer accommodating space1bcan be changed alternately between the negative pressure state and the positive pressure state. Thus, also in this example, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a decompressed state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In this example, the configuration of the outer cylindrical portion36is cylindrical, but may be of another form, such as a rectangular section.
In such a case, it is preferable that the configuration of the inner cylindrical portion1hmeets the configuration of the outer cylindrical portion36. The pump is not limited to the plunger type pump, but may be a piston pump. When the pump of this example is used, the seal structure is required to prevent developer leakage through the gap between the inner cylinder and the outer cylinder, resulting in a complicated structure and necessity for a large driving force for driving the pump portion, and therefore, Embodiment 4 is preferable. In addition, in this example, the developer supply container1is provided with the engaging portion similar to Embodiment 4, and therefore, similarly to the above-described embodiments, the mechanism for connecting and separating the developer receiving portion11relative to the developer supply container1by displacing the developer receiving portion11of the developer receiving apparatus8can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. 
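The volume change produced by the plunger type pump of this embodiment is simply the bore cross-section multiplied by the stroke of the outer cylindrical portion. The patent gives no dimensions, so the values below are made-up examples; only the geometric relationship is taken from the text.

```python
import math

# Hypothetical-dimension sketch of the plunger pump's volume change:
# reciprocating the outer cylindrical portion by one stroke changes the
# developer accommodating space volume by (bore cross-section) x stroke.
def volume_change_cm3(bore_diameter_cm: float, stroke_cm: float) -> float:
    radius = bore_diameter_cm / 2.0
    return math.pi * radius ** 2 * stroke_cm

# Example with assumed dimensions (not from the patent):
print(round(volume_change_cm3(5.0, 2.0), 2))  # 39.27
```

For a rectangular-section variant, as the text allows, the cross-section term would simply be width times depth instead of pi r squared.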
Embodiment 6 Referring toFIGS.59,60, a structure of the Embodiment 6 will be described.FIG.59is a perspective view of an outer appearance in which a pump portion38of a developer supply container1according to this embodiment is in an expanded state, andFIG.60is a perspective view of an outer appearance in which the pump portion38of the developer supply container1is in a contracted state. In this example, the structure of the pump is different from that of Embodiment 4, and the other structures are substantially the same as with Embodiment 4. In the description of this embodiment, the same reference numerals as in Embodiment 4 are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. In this example, as shown inFIGS.59,60, in place of the bellow-like pump having folded portions of Embodiment 4, a film-like pump portion38capable of expansion and contraction without a folded portion is used. The film-like portion of the pump portion38is made of rubber. The material of the film-like portion of the pump portion38may be a flexible material such as resin film rather than the rubber. The film-like pump portion38is connected with the container body1a, and the inside space thereof functions as a developer accommodating space1b. The upper portion of the film-like pump portion38is provided with a locking portion18fixed thereto by bonding, similarly to the foregoing embodiments. Therefore, the pump portion38can alternately repeat the expansion and the contraction by the vertical movement of the locking member10(FIG.38). In this manner, also in this example, one pump is enough to effect both of the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified.
In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In the case of this example, as shown inFIG.61, it is preferable that a plate-like member39having a higher rigidity than the film-like portion is mounted to the upper surface of the film-like portion of the pump portion38, and the locking portion18is provided on the plate-like member39. With such a structure, it can be suppressed that the amount of the volume change of the pump portion38decreases due to deformation of only the neighborhood of the locking portion18of the pump portion38. That is, the followability of the pump portion38to the vertical movement of the locking member10can be improved, and therefore, the expansion and the contraction of the pump portion38can be effected efficiently. Thus, the discharging property of the developer can be improved. In addition, in this example, the developer supply container1is provided with the engaging portion similar to Embodiment 4, and therefore, similarly to the above-described embodiments, the mechanism for connecting and separating the developer receiving portion11relative to the developer supply container1by displacing the developer receiving portion11of the developer receiving apparatus8can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or an increase in cost due to an increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer.
Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 7 Referring toFIGS.62-64, a structure of the Embodiment 7 will be described.FIG.62is a perspective view of an outer appearance of a developer supply container1,FIG.63is a sectional perspective view of the developer supply container1, andFIG.64is a partially sectional view of the developer supply container1. In this example, the structure is different from that of Embodiment 4 only in the structure of a developer accommodating space, and the other structure is substantially the same. In the description of this embodiment, the same reference numerals as in Embodiment 4 are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. As shown inFIGS.62,63, the developer supply container1of this example comprises two components, namely, a portion X including a container body1aand a pump portion5and a portion Y including a cylindrical portion24. The structure of the portion X of the developer supply container1is substantially the same as that of Embodiment 4, and therefore, detailed description thereof is omitted. (Structure of Developer Supply Container) In the developer supply container1of this example, as contrasted to Embodiment 4, the cylindrical portion24is connected by a connecting portion14cto a side of the portion X (a discharging portion in which a discharge opening1cis formed), as shown inFIG.63. The cylindrical portion (developer accommodation rotatable portion)24has a closed end at one longitudinal end thereof and an open end at the other end which is connected with an opening of the portion X, and the space therebetween is a developer accommodating space1b. 
In this example, an inside space of the container body1a, an inside space of the pump portion5and the inside space of the cylindrical portion24are all developer accommodating space1b, and therefore, a large amount of the developer can be accommodated. In this example, the cylindrical portion24as the developer accommodation rotatable portion has a circular cross-sectional configuration, but the circular shape is not restrictive to the present invention. For example, the cross-sectional configuration of the developer accommodation rotatable portion may be of a non-circular configuration such as a polygonal configuration as long as the rotational motion is not obstructed during the developer feeding operation. The inside of the cylindrical portion (developer feeding chamber)24is provided with a helical feeding projection (feeding portion)24a, which has a function of feeding the developer accommodated therein toward the portion X (discharge opening1c) when the cylindrical portion24rotates in a direction indicated by an arrow R. In addition, the inside of the cylindrical portion24is provided with a receiving-and-feeding member (feeding portion)16for receiving the developer fed by the feeding projection24aand supplying it to the portion X side by rotation of the cylindrical portion24in the direction of arrow R (the rotational axis substantially extends in the horizontal direction), the receiving-and-feeding member16upstanding from the inside of the cylindrical portion24. The receiving-and-feeding member16is provided with a plate-like portion16afor scooping the developer up, and inclined projections16bfor feeding (guiding) the developer scooped up by the plate-like portion16atoward the portion X, the inclined projections16bbeing provided on respective sides of the plate-like portion16a. The plate-like portion16ais provided with a through-hole16cfor permitting passage of the developer in both directions to improve the stirring property for the developer.
In addition, a gear portion24bas a drive inputting mechanism is fixed by bonding on an outer surface at the other longitudinal end (with respect to the feeding direction of the developer) of the cylindrical portion24. When the developer supply container1is mounted to the developer receiving apparatus8, the gear portion24bengages with the driving gear (driving portion)9functioning as a driving mechanism provided in the developer receiving apparatus8. When the rotational force is inputted to the gear portion24bas the driving force receiving portion from the driving gear9, the cylindrical portion24rotates in the direction of arrow R (FIG.63). The gear portion24bis not restrictive to the present invention, but another drive inputting mechanism such as a belt or friction wheel is usable as long as it can rotate the cylindrical portion24. As shown inFIG.64, one longitudinal end of the cylindrical portion24(downstream end with respect to the developer feeding direction) is provided with a connecting portion24cas a connecting tube for connection with the portion X. The above-described inclined projection16bextends to a neighborhood of the connecting portion24c. Therefore, the developer fed by the inclined projection16bis prevented as much as possible from falling toward the bottom side of the cylindrical portion24again, so that the developer is properly supplied to the connecting portion24c. The cylindrical portion24rotates as described above, but on the contrary, the container body1aand the pump portion5are connected to the cylindrical portion24through a flange portion1gso that the container body1aand the pump portion5are non-rotatable relative to the developer receiving apparatus8(non-movable in the rotational axis direction of the cylindrical portion24and non-movable in the rotational moving direction), similarly to Embodiment 4. Therefore, the cylindrical portion24is rotatable relative to the container body1a.
A ring-like elastic seal25is provided between the cylindrical portion24and the container body1aand is compressed by a predetermined amount between the cylindrical portion24and the container body1a. By this, developer leakage therethrough is prevented during the rotation of the cylindrical portion24. In addition, with this structure, the hermetical property can be maintained, and therefore, the loosening and discharging effects by the pump portion5are applied to the developer without loss. The developer supply container1does not have an opening for substantial fluid communication between the inside and the outside except for the discharge opening1c. (Developer Supplying Step) A developer supplying step will be described. When the operator inserts the developer supply container1into the developer receiving apparatus8, similarly to Embodiment 4, the locking portion18of the developer supply container1is locked with the locking member10of the developer receiving apparatus8, and the gear portion24bof the developer supply container1is engaged with the driving gear9of the developer receiving apparatus8. Thereafter, the driving gear9is rotated by another driving motor (not shown), and the locking member10is driven in the vertical direction by the above-described driving motor500. Then, the cylindrical portion24rotates in the direction of the arrow R, by which the developer therein is fed to the receiving-and-feeding member16by the feeding projection24a. In addition, by the rotation of the cylindrical portion24in the direction R, the receiving-and-feeding member16scoops the developer up, and feeds it to the connecting portion24c. The developer fed into the container body1afrom the connecting portion24cis discharged through the discharge opening1cby the expanding-and-contracting operation of the pump portion5, similarly to Embodiment 4. These are a series of the developer supply container1mounting steps and developer supplying steps.
When the developer supply container1is to be exchanged, the operator takes the developer supply container1out of the developer receiving apparatus8, and a new developer supply container1is inserted and mounted. In the case of a vertical container having a developer accommodating space1bwhich is long in the vertical direction as in Embodiment 4-Embodiment 6, if the volume of the developer supply container1is increased to increase the filling amount, the developer concentrates in the neighborhood of the discharge opening1cunder its own weight. As a result, the developer adjacent the discharge opening1ctends to be compacted, leading to difficulty in suction and discharge through the discharge opening1c. In such a case, in order to loosen the compacted developer by the suction through the discharge opening1cor to discharge the developer by the discharging operation, the internal pressure (negative pressure/positive pressure) of the developer accommodating space1bhas to be enhanced by increasing the amount of the change of the volume of the pump portion5. Then, the driving force for driving the pump portion5has to be increased, and the load to the main assembly of the image forming apparatus100may be excessive. According to this embodiment, however, the portion X including the container body1aand the pump portion5and the portion Y including the cylindrical portion24are arranged in the horizontal direction, and therefore, the thickness of the developer layer above the discharge opening1cin the container body1acan be thinner than in the structure ofFIG.44. By doing so, the developer is not easily compacted by gravity, and therefore, the developer can be stably discharged without an excessive load to the main assembly of the image forming apparatus100. As described, with the structure of this example, the provision of the cylindrical portion24is effective to accomplish a large capacity developer supply container1without an excessive load to the main assembly of the image forming apparatus.
In this manner, also in this example, one pump is enough to effect both of the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. The developer feeding mechanism in the cylindrical portion24is not restrictive to the present invention, and the developer supply container1may be vibrated or swung, or may be another mechanism. Specifically, the structure ofFIG.65is usable. As shown inFIG.65, the cylindrical portion24per se is not movable substantially relative to the developer receiving apparatus8(with slight play), and a feeding member17is provided in the cylindrical portion in place of the feeding projection24a, the feeding member17being effective to feed the developer by rotation relative to the cylindrical portion24. The feeding member17includes a shaft portion17aand flexible feeding blades17bfixed to the shaft portion17a. The feeding blade17bis provided at a free end portion with an inclined portion S inclined relative to an axial direction of the shaft portion17a. Therefore, it can feed the developer toward the portion X while stirring the developer in the cylindrical portion24. One longitudinal end surface of the cylindrical portion24is provided with a coupling portion24eas the rotational driving force receiving portion, and the coupling portion24eis operatively connected with a coupling member (not shown) of the developer receiving apparatus8, by which the rotational force can be transmitted. The coupling portion24eis coaxially connected with the shaft portion17aof the feeding member17to transmit the rotational force to the shaft portion17a. By the rotational force applied from the coupling member (not shown) of the developer receiving apparatus8, the feeding blade17bfixed to the shaft portion17ais rotated, so that the developer in the cylindrical portion24is fed toward the portion X while being stirred. 
However, with the modified example shown inFIG.65, the stress applied to the developer in the developer feeding step tends to be large, and the driving torque is also large, and for this reason, the structure of the embodiment is preferable. Thus, also in this example, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, in this example, the developer supply container1is provided with the engaging portion similar to Embodiment 4, and therefore, similarly to the above-described embodiments, the mechanism for connecting and separating the developer receiving portion11relative to the developer supply container1by displacing the developer receiving portion11of the developer receiving apparatus8can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 8 Referring toFIGS.66-68, the description will be made as to structures of Embodiment 8. 
Part (a) ofFIG.66is a front view of a developer receiving apparatus8, as seen in a mounting direction of a developer supply container1, and (b) is a perspective view of an inside of the developer receiving apparatus8. Part (a) ofFIG.67is a perspective view of the entire developer supply container1, (b) is a partial enlarged view of a neighborhood of a discharge opening21aof the developer supply container1, and (c)-(d) are a front view and a sectional view illustrating a state that the developer supply container1is mounted to a mounting portion8f. Part (a) ofFIG.68is a perspective view of the developer accommodating portion20, (b) is a partially sectional view illustrating an inside of the developer supply container1, (c) is a sectional view of a flange portion21, and (d) is a sectional view illustrating the developer supply container1. In the above-described Embodiment 4-7, the pump is expanded and contracted by moving the locking member10(FIG.38) of the developer receiving apparatus8vertically. In this example, the developer supply container1receives only a rotational force from the developer receiving apparatus8, similarly to the Embodiment 1-Embodiment 3. In the other respects, the structure is similar to the foregoing embodiments, and therefore, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted for simplicity. Specifically, in this example, the rotational force inputted from the developer receiving apparatus8is converted to the force in the direction of reciprocation of the pump, and the converted force is transmitted to the pump portion5. In the following, the structure of the developer receiving apparatus8and the developer supply container1will be described in detail. (Developer Receiving Apparatus) Referring toFIG.66, the developer receiving apparatus8will be described. 
The developer receiving apparatus8is provided with a mounting portion (mounting space)8fto which the developer supply container1is detachably mounted. As shown in part (b) ofFIG.66, the developer supply container1is mountable in a direction indicated by an arrow A to the mounting portion8f. Thus, a longitudinal direction (rotational axis direction) of the developer supply container1is substantially the same as the direction of an arrow A. The direction of the arrow A is substantially parallel with a direction indicated by X of part (b) ofFIG.68which will be described hereinafter. In addition, a dismounting direction of the developer supply container1from the mounting portion8fis opposite (the direction of arrow B) the direction of the arrow A. As shown in part (a) ofFIG.66, the mounting portion8fof the developer receiving apparatus8is provided with a rotation regulating portion (holding mechanism)29for limiting movement of the flange portion21in the rotational moving direction by abutting to a flange portion21(FIG.67) of the developer supply container1when the developer supply container1is mounted. Furthermore, as shown in part (b) ofFIG.66, the mounting portion8fis provided with a regulating portion (holding mechanism)30for regulating the movement of the flange portion21in the rotational axis direction by locking with the flange portion21of the developer supply container1when the developer supply container1is mounted. The rotational axis direction regulating portion30elastically deforms with the interference with the flange portion21, and thereafter, upon release of the interference with the flange portion21(part (b) ofFIG.67), it elastically restores to lock the flange portion21(resin material snap locking mechanism).
The mounting portion 8f of the developer receiving apparatus 8 is provided with a developer receiving portion 11 for receiving the developer discharged through the discharge opening (opening) 21a (part (b) of FIG. 68) of the developer supply container 1, which will be described hereinafter. Similarly to the above-described Embodiment 1 or Embodiment 2, the developer receiving portion 11 is movable (displaceable) in the vertical direction relative to the developer receiving apparatus 8. An upper end surface of the developer receiving portion 11 is provided with a main assembly seal 13 having a developer receiving port 11a in the central portion thereof. The main assembly seal 13 is made of an elastic member, a foam member or the like, and is closely contacted with an opening seal 3a5 (part (b) of FIG. 7) having a discharge opening 3a4 of the developer supply container 1, by which the developer discharged through the discharge opening 3a4 is prevented from leaking out of a developer feeding path including the developer receiving port 11a. Alternatively, it is closely contacted with the shutter 4 (part (a) of FIG. 25) having a shutter opening 4f to prevent leakage of the developer through the discharge opening 21a, the shutter opening 4f and the developer receiving port 11a. In order to prevent the contamination in the mounting portion 8f by the developer as much as possible, a diameter of the developer receiving port 11a is desirably substantially the same as or slightly larger than a diameter of the discharge opening 21a of the developer supply container 1. This is because if the diameter of the developer receiving port 11a is smaller than the diameter of the discharge opening 21a, the developer discharged from the developer supply container 1 is deposited on the upper surface of the developer receiving port 11a, and the deposited developer is transferred onto the lower surface of the developer supply container 1 during the dismounting operation of the developer supply container 1, with the result of contamination with the developer.
In addition, the developer transferred onto the developer supply container 1 may be scattered to the mounting portion 8f, with the result of contamination of the mounting portion 8f with the developer. On the contrary, if the diameter of the developer receiving port 11a is much larger than the diameter of the discharge opening 21a, an area in which the developer scattered from the developer receiving port 11a is deposited on the neighborhood of the discharge opening 21a is large. That is, the area of the developer supply container 1 contaminated by the developer is large, which is not preferable. Under the circumstances, the difference between the diameter of the developer receiving port 11a and the diameter of the discharge opening 21a is preferably substantially 0 to approx. 2 mm. In this example, the diameter of the discharge opening 21a of the developer supply container 1 is approx. φ2 mm (pin hole), and therefore, the diameter of the developer receiving port 11a is approx. φ3 mm. Further, the developer receiving portion 11 is urged downwardly by an urging member 12 (FIGS. 3 and 4). When the developer receiving portion 11 moves upwardly, it has to move against an urging force of the urging member 12. As shown in FIGS. 3 and 4, below the developer receiving apparatus 8, there is provided a sub-hopper 8c for temporarily storing the developer. In the sub-hopper 8c, there are provided a feeding screw 14 for feeding the developer into the developer hopper portion 201a, which is a part of the developing device 201, and an opening 8d which is in fluid communication with the developer hopper portion 201a. The developer receiving port 11a is closed so as to prevent foreign matter and/or dust from entering the sub-hopper 8c in a state in which the developer supply container 1 is not mounted. More specifically, the developer receiving port 11a is closed by a main assembly shutter 15 in the state in which the developer receiving portion 11 is spaced away from the developer supply container 1.
The developer receiving portion 11 moves upwardly (arrow E) from the position spaced from the developer supply container 1 toward the developer supply container 1. By this, the developer receiving port 11a and the main assembly shutter 15 are spaced from each other so that the developer receiving port 11a is open. In this open state, the developer discharged from the developer supply container 1 through the discharge opening 21a or the shutter and received by the developer receiving port 11a becomes movable to the sub-hopper 8c. A side surface of the developer receiving portion 11 is provided with an engaging portion 11b (FIGS. 3 and 4). The engaging portion 11b is directly engaged with engaging portions 3b2, 3b4 (FIG. 8 or 20) provided on the developer supply container 1, which will be described hereinafter, and is guided thereby so that the developer receiving portion 11 is raised toward the developer supply container 1. The mounting portion 8f of the developer receiving apparatus 8 is provided with an insertion guide 8e for guiding the developer supply container 1 in the mounting and demounting directions, and by the insertion guide 8e (FIGS. 3 and 4), the mounting direction of the developer supply container 1 is made along the arrow A. The dismounting direction of the developer supply container 1 is opposite (arrow B) to the direction of the arrow A. As shown in part (a) of FIG. 66, the developer receiving apparatus 8 is provided with a driving gear 9 functioning as a driving mechanism for driving the developer supply container 1. The driving gear 9 receives a rotational force from a driving motor 500 through a driving gear train, and functions to apply a rotational force to the developer supply container 1 which is set in the mounting portion 8f. As shown in FIG. 66, the driving motor 500 is controlled by a control device (CPU) 600. In this example, the driving gear 9 is rotatable unidirectionally to simplify the control of the driving motor 500.
The control device 600 controls only ON (operation) and OFF (non-operation) of the driving motor 500. This simplifies the driving mechanism of the developer replenishing apparatus 8 as compared with a structure in which forward and backward driving forces are provided by periodically rotating the driving motor 500 (driving gear 9) in the forward direction and the backward direction. (Developer Supply Container) Referring to FIGS. 67 and 68, the structure of the developer supply container 1, which is a constituent element of the developer supplying system, will be described. As shown in part (a) of FIG. 67, the developer supply container 1 includes a developer accommodating portion 20 (container body) having a hollow cylindrical inside space for accommodating the developer. In this example, a cylindrical portion 20k and the pump portion 20b function as the developer accommodating portion 20. Furthermore, the developer supply container 1 is provided with a flange portion 21 (non-rotatable portion) at one end of the developer accommodating portion 20 with respect to the longitudinal direction (developer feeding direction). The developer accommodating portion 20 is rotatable relative to the flange portion 21. In this example, as shown in part (d) of FIG. 68, a total length L1 of the cylindrical portion 20k functioning as the developer accommodating portion is approx. 300 mm, and an outer diameter R1 thereof is approx. 70 mm. A total length L2 of the pump portion 20b (in the state in which it is most expanded within the expansible range in use) is approx. 50 mm, and a length L3 of a region in which a gear portion 20a of the flange portion 21 is provided is approx. 20 mm. A length L4 of a region of a discharging portion 21h functioning as a developer discharging portion is approx. 25 mm. A maximum outer diameter R2 (in the state in which it is most expanded in the diametrical direction within the expansible range in use) of the pump portion 20b is approx.
65 mm, and a total volume capacity for accommodating the developer in the developer supply container 1 is 1250 cm³. In this example, the developer can be accommodated in the cylindrical portion 20k and the pump portion 20b and, in addition, the discharging portion 21h; that is, they function as a developer accommodating portion. As shown in FIGS. 67 and 68, in this example, in the state in which the developer supply container 1 is mounted to the developer receiving apparatus 8, the cylindrical portion 20k and the discharging portion 21h are substantially in line along a horizontal direction. That is, the cylindrical portion 20k has a sufficiently long length in the horizontal direction as compared with the length in the vertical direction, and one end part thereof with respect to the horizontal direction is connected with the discharging portion 21h. For this reason, the suction and discharging operations can be carried out smoothly as compared with the case in which the cylindrical portion 20k is above the discharging portion 21h in the state in which the developer supply container 1 is mounted to the developer receiving apparatus 8. This is because the amount of the toner existing above the discharge opening 21a is small, and therefore, the developer in the neighborhood of the discharge opening 21a is less compressed. As shown in part (b) of FIG. 67, the flange portion 21 is provided with a hollow discharging portion (developer discharging chamber) 21h for temporarily storing the developer having been fed from the inside of the developer accommodating portion (inside of the developer accommodating chamber) 20 (see parts (b) and (c) of FIG. 33 if necessary). A bottom portion of the discharging portion 21h is provided with the small discharge opening 21a for permitting discharge of the developer to the outside of the developer supply container 1, that is, for supplying the developer into the developer receiving apparatus 8. The size of the discharge opening 21a is as has been described hereinbefore.
An inner shape of the bottom portion of the inside of the discharging portion 21h (inside of the developer discharging chamber) is like a funnel converging toward the discharge opening 21a in order to reduce as much as possible the amount of the developer remaining therein (parts (b) and (c) of FIG. 68, if necessary). In addition, as shown in FIG. 67, the flange portion 21 is provided with engaging portions 3b2, 3b4 engageable with the developer receiving portion 11 displaceably provided in the developer receiving apparatus 8, similarly to the above-described Embodiment 1 or Embodiment 2. The structures of the engaging portions 3b2, 3b4 are similar to those of the above-described Embodiment 1 or Embodiment 2, and therefore, the description thereof is omitted. Further, the flange portion 21 is provided therein with the shutter 4 for opening and closing the discharge opening 21a, similarly to the above-described Embodiment 1 or Embodiment 2. The structure of the shutter 4 and the movement thereof in the mounting and demounting operations of the developer supply container 1 are similar to those of the above-described Embodiment 1 or Embodiment 2, and therefore, the description thereof is omitted. The flange portion 21 is constructed such that, when the developer supply container 1 is mounted to the mounting portion 8f of the developer receiving apparatus 8, it is substantially stationary. More particularly, as shown in part (c) of FIG. 67, the flange portion 21 is regulated (prevented) from rotating in the rotational direction about the rotational axis of the developer accommodating portion 20 by a rotational moving direction regulating portion 29 provided in the mounting portion 8f. In other words, the flange portion 21 is retained by the developer receiving apparatus 8 such that it is substantially non-rotatable (although rotation within the play is possible).
Furthermore, the flange portion 21 is locked by the rotational axis direction regulating portion 30 provided in the mounting portion 8f with the mounting operation of the developer supply container 1. More specifically, the flange portion 21 contacts the rotational axis direction regulating portion 30 in the process of the mounting operation of the developer supply container 1 to elastically deform the rotational axis direction regulating portion 30. Thereafter, the flange portion 21 abuts against an inner wall portion 28a (part (d) of FIG. 67), which is a stopper provided in the mounting portion 8f, by which the mounting step of the developer supply container 1 is completed. At this time, substantially simultaneously with the completion of the mounting, the interference by the flange portion 21 is released, so that the elastic deformation of the regulating portion 30 is released. As a result, as shown in part (d) of FIG. 67, the rotational axis direction regulating portion 30 is locked with the edge portion (functioning as a locking portion) of the flange portion 21 so that the movement in the rotational axis direction (the rotational axis direction of the developer accommodating portion 20) is substantially prevented (regulated). At this time, a slight negligible movement within the play is possible. As described in the foregoing, in this example, the flange portion 21 is retained by the rotational axis direction regulating portion 30 of the developer receiving apparatus 8 so that it does not move in the rotational axis direction of the developer accommodating portion 20. Furthermore, the flange portion 21 is retained by the rotational moving direction regulating portion 29 of the developer receiving apparatus 8 such that it does not rotate in the rotational moving direction of the developer accommodating portion 20.
When the operator takes the developer supply container 1 out of the mounting portion 8f, the rotational axis direction regulating portion 30 is elastically deformed by the flange portion 21 so as to be released from the flange portion 21. The rotational axis direction of the developer accommodating portion 20 is substantially coaxial with the rotational axis direction of the gear portion 20a (FIG. 68). Therefore, in the state in which the developer supply container 1 is mounted to the developer receiving apparatus 8, the discharging portion 21h provided in the flange portion 21 is substantially prevented from moving both in the rotational axis direction and in the rotational moving direction of the developer accommodating portion 20 (movement within the play is permitted). On the other hand, the developer accommodating portion 20 is not limited in the rotational moving direction by the developer receiving apparatus 8, and therefore, is rotatable in the developer supplying step. However, the movement of the developer accommodating portion 20 in the rotational axis direction is substantially prevented by the flange portion 21 (movement within the play is permitted). (Pump Portion) Referring to FIGS. 68 and 69, the description will be made as to the pump portion (reciprocable pump) 20b, the volume of which changes with reciprocation. Part (a) of FIG. 69 is a sectional view of the developer supply container 1 in which the pump portion 20b is expanded to the maximum extent in the operation of the developer supplying step, and part (b) of FIG. 69 is a sectional view of the developer supply container 1 in which the pump portion 20b is compressed to the maximum extent in the operation of the developer supplying step. The pump portion 20b of this example functions as a suction and discharging mechanism for repeating the sucking operation and the discharging operation alternately through the discharge opening 21a.
As shown in part (b) of FIG. 68, the pump portion 20b is provided between the discharging portion 21h and the cylindrical portion 20k, and is fixedly connected to the cylindrical portion 20k. Thus, the pump portion 20b is rotatable integrally with the cylindrical portion 20k. In the pump portion 20b of this example, the developer can be accommodated. The developer accommodating space in the pump portion 20b has a significant function of fluidizing the developer in the sucking operation, as will be described hereinafter. In this example, the pump portion 20b is a displacement type pump (bellows-like pump) of resin material, the volume of which changes with the reciprocation. More particularly, as shown in parts (a)-(b) of FIG. 68, the bellows-like pump includes crests and bottoms periodically and alternately. The pump portion 20b repeats the compression and the expansion alternately by the driving force received from the developer receiving apparatus 8. In this example, the volume change of the pump portion 20b by the expansion and contraction is 15 cm³ (cc). As shown in part (d) of FIG. 68, a total length L2 (in the most expanded state within the expansion and contraction range in operation) of the pump portion 20b is approx. 50 mm, and a maximum outer diameter (in the largest state within the expansion and contraction range in operation) R2 of the pump portion 20b is approx. 65 mm. With use of such a pump portion 20b, an internal pressure of the developer supply container 1 (developer accommodating portion 20 and discharging portion 21h) higher than the ambient pressure and an internal pressure lower than the ambient pressure are produced alternately and repeatedly at a predetermined cyclic period (approx. 0.9 sec in this example). The ambient pressure is the pressure of the ambient condition in which the developer supply container 1 is placed.
As a result, the developer in the discharging portion 21h can be discharged efficiently through the small diameter discharge opening 21a (diameter of approx. 2 mm). As shown in part (b) of FIG. 68, the pump portion 20b is connected to the discharging portion 21h rotatably relative thereto in the state in which a discharging portion 21h side end thereof is compressed against a ring-like sealing member 27 provided on an inner surface of the flange portion 21. By this, the pump portion 20b rotates while sliding on the sealing member 27, and therefore, the developer does not leak from the pump portion 20b, and the hermetical property is maintained, during rotation. Thus, the intake and exhaust of the air through the discharge opening 21a are carried out properly, and the internal pressure of the developer supply container 1 (pump portion 20b, developer accommodating portion 20 and discharging portion 21h) is changed properly, during the supply operation. (Drive Transmission Mechanism) The description will be made as to a drive receiving mechanism (drive inputting portion, driving force receiving portion) of the developer supply container 1 for receiving the rotational force for rotating the feeding portion 20c from the developer receiving apparatus 8. As shown in part (a) of FIG. 68, the developer supply container 1 is provided with a gear portion 20a which functions as a drive receiving mechanism (drive inputting portion, driving force receiving portion) engageable (in driving connection) with the driving gear 9 (functioning as a driving portion, driving mechanism) of the developer receiving apparatus 8. The gear portion 20a is fixed to one longitudinal end portion of the pump portion 20b. Thus, the gear portion 20a, the pump portion 20b, and the cylindrical portion 20k are integrally rotatable. Therefore, the rotational force inputted to the gear portion 20a from the driving gear 9 is transmitted to the cylindrical portion 20k (feeding portion 20c) through the pump portion 20b.
In other words, in this example, the pump portion 20b functions as a drive transmission mechanism for transmitting the rotational force inputted to the gear portion 20a to the feeding portion 20c of the developer accommodating portion 20. For this reason, the bellows-like pump portion 20b of this example is made of a resin material having a high resistance against torsion or twisting about the axis within a limit of not adversely affecting the expanding-and-contracting operation. In this example, the gear portion 20a is provided at one longitudinal end (with respect to the developer feeding direction) of the developer accommodating portion 20, that is, at the discharging portion 21h side end, but this is not inevitable, and for example, it may be provided at the other longitudinal end portion of the developer accommodating portion 20, that is, at the rearmost part. In such a case, the driving gear 9 is provided at a corresponding position. In this example, a gear mechanism is employed as the driving connection mechanism between the drive inputting portion of the developer supply container 1 and the driver of the developer receiving apparatus 8, but this is not inevitable, and a known coupling mechanism, for example, is usable. More particularly, in such a case, the structure may be such that a non-circular recess is provided in a bottom surface of one longitudinal end portion (the righthand side end surface of part (d) of FIG. 68) as the drive inputting portion, and correspondingly, a projection having a configuration corresponding to the recess is provided as the driver of the developer receiving apparatus 8, so that they are in driving connection with each other. (Drive Converting Mechanism) A drive converting mechanism (drive converting portion) of the developer supply container 1 will be described. The developer supply container 1 is provided with a cam mechanism for converting the rotational force for rotating the feeding portion 20c received by the gear portion 20a to a force in the reciprocating directions of the pump portion 20b.
That is, in this example, the description will be made as to an example using a cam mechanism as the drive converting mechanism, but the present invention is not limited to this example, and other structures, such as those of Embodiments 9 et seqq., are usable. In this example, one drive inputting portion (gear portion 20a) receives the driving force for driving the feeding portion 20c and the pump portion 20b, and the rotational force received by the gear portion 20a is converted to a reciprocation force on the developer supply container 1 side. Because of this structure, the structure of the drive inputting mechanism of the developer supply container 1 is simplified as compared with the case of providing the developer supply container 1 with two separate drive inputting portions. In addition, the drive is received from a single driving gear of the developer receiving apparatus 8, and therefore, the driving mechanism of the developer receiving apparatus 8 is also simplified. In the case that the reciprocation force is received from the developer receiving apparatus 8, there is a liability that the driving connection between the developer receiving apparatus 8 and the developer supply container 1 is not proper, and therefore, the pump portion 20b is not driven. More particularly, when the developer supply container 1 is taken out of the image forming apparatus 100 and then is mounted again, the pump portion 20b may not be properly reciprocated. For example, when the drive input to the pump portion 20b stops in a state in which the pump portion 20b is compressed from the normal length, the pump portion 20b restores spontaneously to the normal length when the developer supply container is taken out. In this case, the position of the drive inputting portion for the pump portion 20b changes when the developer supply container 1 is taken out, despite the fact that a stop position of the drive outputting portion on the image forming apparatus 100 side remains unchanged.
As a result, the driving connection is not properly established between the drive outputting portion on the image forming apparatus 100 side and the drive inputting portion for the pump portion 20b on the developer supply container 1 side, and therefore, the pump portion 20b cannot be reciprocated. Then, the developer supply is not carried out, and sooner or later, the image formation becomes impossible. Such a problem may similarly arise when the expansion and contraction state of the pump portion 20b is changed by the user while the developer supply container 1 is outside the apparatus. Such a problem similarly arises when the developer supply container 1 is exchanged with a new one. The structure of this example is substantially free of such a problem. This will be described in detail. As shown in FIGS. 68 and 69, the outer surface of the cylindrical portion 20k of the developer accommodating portion 20 is provided with a plurality of cam projections 20d functioning as a rotatable portion, substantially at regular intervals in the circumferential direction. More particularly, two cam projections 20d are disposed on the outer surface of the cylindrical portion 20k at diametrically opposite positions, that is, at approx. 180° opposing positions. The number of the cam projections 20d may be at least one. However, there is a liability that a moment is produced in the drive converting mechanism and so on by a drag at the time of expansion or contraction of the pump portion 20b, so that smooth reciprocation is disturbed, and therefore, it is preferable that a plurality of them are provided so that the relation with the configuration of the cam groove 21b, which will be described hereinafter, is maintained. On the other hand, a cam groove 21b engaged with the cam projections 20d is formed in an inner surface of the flange portion 21 over the entire circumference, and it functions as a follower portion. Referring to FIG. 70, the cam groove 21b will be described.
In FIG. 70, an arrow A indicates a rotational moving direction of the cylindrical portion 20k (moving direction of the cam projection 20d), an arrow B indicates a direction of expansion of the pump portion 20b, and an arrow C indicates a direction of compression of the pump portion 20b. Here, an angle α is formed between a cam groove 21c and the rotational moving direction A of the cylindrical portion 20k, and an angle β is formed between a cam groove 21d and the rotational moving direction A. In addition, an amplitude (=length of expansion and contraction of the pump portion 20b) of the cam groove in the expansion and contraction directions B, C is L. As shown in FIG. 70, illustrating the cam groove 21b in a developed view, a groove portion 21c inclining from the cylindrical portion 20k side toward the discharging portion 21h side and a groove portion 21d inclining from the discharging portion 21h side toward the cylindrical portion 20k side are connected alternately. In this example, the relation between the angles of the cam grooves 21c, 21d is α=β. Therefore, in this example, the cam projection 20d and the cam groove 21b function as a drive transmission mechanism to the pump portion 20b. More particularly, the cam projection 20d and the cam groove 21b function as a mechanism for converting the rotational force received by the gear portion 20a from the driving gear 9 to a force in the directions of reciprocal movement of the pump portion 20b (a force in the rotational axis direction of the cylindrical portion 20k) and for transmitting the force to the pump portion 20b.
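The conversion performed by the alternating inclined groove portions 21c, 21d can be illustrated numerically. The following sketch (illustrative only; the function name, the default amplitude value, and the assumption of two expansion-contraction cycles per revolution are not taken from the patent text itself, the latter matching the later verification experiment) models the axial position of the pump portion as a symmetric triangle wave of the rotation angle, as results from α=β:

```python
def pump_axial_position(theta_deg: float, amplitude_mm: float = 10.0,
                        cycles_per_rev: int = 2) -> float:
    """Axial displacement of the pump portion (0 .. amplitude_mm, i.e. L)
    as the cylindrical portion rotates by theta_deg; the alternating
    inclined grooves 21c/21d with equal angles (alpha == beta) yield a
    symmetric triangle wave."""
    # fraction of one expansion-contraction cycle completed
    phase = (theta_deg * cycles_per_rev / 360.0) % 1.0
    if phase < 0.5:            # groove portion 21c: pump expands
        return 2.0 * amplitude_mm * phase
    else:                      # groove portion 21d: pump contracts
        return 2.0 * amplitude_mm * (1.0 - phase)
```

With two cycles per revolution, the pump is fully expanded after a quarter turn and returns to the contracted state after a half turn, repeating twice per full rotation.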
More particularly, the cylindrical portion 20k is rotated together with the pump portion 20b by the rotational force inputted to the gear portion 20a from the driving gear 9, and the cam projections 20d are rotated by the rotation of the cylindrical portion 20k. Therefore, by the cam groove 21b engaged with the cam projections 20d, the pump portion 20b reciprocates in the rotational axis direction (X direction of FIG. 68) together with the cylindrical portion 20k. The arrow X direction is substantially parallel with the arrow M direction of FIGS. 66 and 67. In other words, the cam projection 20d and the cam groove 21b convert the rotational force inputted from the driving gear 9 so that the state in which the pump portion 20b is expanded (part (a) of FIG. 69) and the state in which the pump portion 20b is contracted (part (b) of FIG. 69) are repeated alternately. Thus, in this example, the pump portion 20b rotates with the cylindrical portion 20k, and therefore, when the developer in the cylindrical portion 20k moves into the pump portion 20b, the developer can be stirred (loosened) by the rotation of the pump portion 20b. In this example, the pump portion 20b is provided between the cylindrical portion 20k and the discharging portion 21h, and therefore, a stirring action can be imparted to the developer fed to the discharging portion 21h, which is further advantageous. Furthermore, as described above, in this example, the cylindrical portion 20k reciprocates together with the pump portion 20b, and therefore, the reciprocation of the cylindrical portion 20k can stir (loosen) the developer inside the cylindrical portion 20k. (Set Conditions of Drive Converting Mechanism) In this example, the drive converting mechanism effects the drive conversion such that an amount (per unit time) of developer fed to the discharging portion 21h by the rotation of the cylindrical portion 20k is larger than a discharging amount (per unit time) from the discharging portion 21h to the developer receiving apparatus 8 by the pump function.
This is because if the developer discharging power of the pump portion 20b is higher than the developer feeding power of the feeding portion 20c to the discharging portion 21h, the amount of the developer existing in the discharging portion 21h gradually decreases. In other words, this avoids the situation in which the time period required for supplying the developer from the developer supply container 1 to the developer receiving apparatus 8 is prolonged. In the drive converting mechanism of this example, the feeding amount of the developer by the feeding portion 20c to the discharging portion 21h is 2.0 g/s, and the discharge amount of the developer by the pump portion 20b is 1.2 g/s. In addition, in the drive converting mechanism of this example, the drive conversion is such that the pump portion 20b reciprocates a plurality of times per one full rotation of the cylindrical portion 20k. This is for the following reasons. In the case of the structure in which the cylindrical portion 20k is rotated inside the developer receiving apparatus 8, it is preferable that the driving motor 500 is set at an output required to rotate the cylindrical portion 20k stably at all times. However, from the standpoint of reducing the energy consumption in the image forming apparatus 100 as much as possible, it is preferable to minimize the output of the driving motor 500. The output required of the driving motor 500 is calculated from the rotational torque and the rotational frequency of the cylindrical portion 20k, and therefore, in order to reduce the output of the driving motor 500, the rotational frequency of the cylindrical portion 20k is minimized. However, in the case of this example, if the rotational frequency of the cylindrical portion 20k is reduced, the number of operations of the pump portion 20b per unit time decreases, and therefore, the amount of the developer (per unit time) discharged from the developer supply container 1 decreases.
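The set condition stated above amounts to a simple inequality between the two rates. The following check (an illustrative sketch; the constant and function names are not from the patent) uses the stated values of 2.0 g/s and 1.2 g/s:

```python
# Stated rates in this example (g/s)
FEED_RATE = 2.0       # feeding portion 20c -> discharging portion 21h
DISCHARGE_RATE = 1.2  # discharging portion 21h -> developer receiving apparatus 8

def discharging_portion_depletes(feed_g_per_s: float,
                                 discharge_g_per_s: float) -> bool:
    """True if the developer existing in the discharging portion would
    gradually decrease, i.e. the pump discharges faster than the
    rotation of the cylindrical portion replenishes."""
    return discharge_g_per_s > feed_g_per_s
```

With the stated values the condition is satisfied (2.0 g/s > 1.2 g/s), so the discharging portion is kept replenished during the supplying operation.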
In other words, there is a possibility that the developer amount discharged from the developer supply container 1 is insufficient to quickly meet the developer supply amount required by the main assembly of the image forming apparatus 100. If the amount of the volume change of the pump portion 20b is increased, the developer discharging amount per unit cyclic period of the pump portion 20b can be increased, and therefore, the requirement of the main assembly of the image forming apparatus 100 can be met, but doing so gives rise to the following problem. If the amount of the volume change of the pump portion 20b is increased, a peak value of the internal pressure (positive pressure) of the developer supply container 1 in the discharging step increases, and therefore, the load required for the reciprocation of the pump portion 20b increases. For this reason, in this example, the pump portion 20b operates a plurality of cyclic periods per one full rotation of the cylindrical portion 20k. By this, the developer discharge amount per unit time can be increased, as compared with the case in which the pump portion 20b operates one cyclic period per one full rotation of the cylindrical portion 20k, without increasing the volume change amount of the pump portion 20b. Corresponding to the increase of the discharge amount of the developer, the rotational frequency of the cylindrical portion 20k can be reduced. Verification experiments were carried out as to the effects of the plural cyclic operations per one full rotation of the cylindrical portion 20k. In the experiments, the developer is filled into the developer supply container 1, and a developer discharge amount and a rotational torque of the cylindrical portion 20k are measured. Then, the output (=rotational torque×rotational frequency) of the driving motor 500 required for rotating the cylindrical portion 20k is calculated from the rotational torque of the cylindrical portion 20k and the preset rotational frequency of the cylindrical portion 20k.
The experimental conditions are that the number of operations of the pump portion 20b per one full rotation of the cylindrical portion 20k is two, the rotational frequency of the cylindrical portion 20k is 30 rpm, and the volume change of the pump portion 20b is 15 cm³. As a result of the verification experiment, the developer discharging amount from the developer supply container 1 is approx. 1.2 g/s. The rotational torque of the cylindrical portion 20k (average torque in the normal state) is 0.64 N·m, and the output of the driving motor 500 is approx. 2 W (motor load (W)=0.1047×rotational torque (N·m)×rotational frequency (rpm), wherein 0.1047 is the unit conversion coefficient) as a result of the calculation. Comparative experiments were carried out in which the number of operations of the pump portion 20b per one full rotation of the cylindrical portion 20k was one, the rotational frequency of the cylindrical portion 20k was 60 rpm, and the other conditions were the same as in the above-described experiments. In other words, the developer discharge amount was made the same as in the above-described experiments, i.e. approx. 1.2 g/s. As a result of the comparative experiments, the rotational torque of the cylindrical portion 20k (average torque in the normal state) is 0.66 N·m, and the output of the driving motor 500 is approx. 4 W by the calculation. From these experiments, it has been confirmed that the pump portion 20b preferably carries out the cyclic operation a plurality of times per one full rotation of the cylindrical portion 20k. In other words, it has been confirmed that by doing so, the discharging performance of the developer supply container 1 can be maintained with a low rotational frequency of the cylindrical portion 20k. With the structure of this example, the required output of the driving motor 500 may be low, and therefore, the energy consumption of the main assembly of the image forming apparatus 100 can be reduced.
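The motor-load formula stated in the experiments can be checked directly; the coefficient 0.1047 is approximately 2π/60, converting rpm to rad/s. A short sketch reproducing the approx. 2 W and approx. 4 W results:

```python
# Motor output from rotational torque and speed, per the formula in the text:
# load (W) = 0.1047 x torque (N*m) x rotational frequency (rpm),
# where 0.1047 ~ 2*pi/60 converts rpm to rad/s.

def motor_output_watts(torque_nm: float, rpm: float) -> float:
    """Required driving motor output in watts."""
    return 0.1047 * torque_nm * rpm

# Two pump cycles per rotation at 30 rpm (this example's conditions):
out_two_cycles = motor_output_watts(0.64, 30)   # ~2 W
# One pump cycle per rotation at 60 rpm (comparative experiment):
out_one_cycle = motor_output_watts(0.66, 60)    # ~4 W
```

This confirms the text's point: halving the rotational frequency while doubling the pump cycles per rotation roughly halves the required motor output for the same discharge rate.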
(Position of Drive Converting Mechanism) As shown in FIGS. 68 and 69, in this example, the drive converting mechanism (cam mechanism constituted by the cam projection 20d and the cam groove 21b) is provided outside of the developer accommodating portion 20. More particularly, the drive converting mechanism is disposed at a position separated from the inside spaces of the cylindrical portion 20k, the pump portion 20b and the flange portion 21, so that the drive converting mechanism does not contact the developer accommodated inside the cylindrical portion 20k, the pump portion 20b and the flange portion 21. By this, a problem which may arise when the drive converting mechanism is provided in the inside space of the developer accommodating portion 20 can be avoided. More particularly, when the developer enters portions of the drive converting mechanism where sliding motions occur, the particles of the developer are subjected to heat and pressure and soften, so that they agglomerate into masses (coarse particles), or they enter into the converting mechanism with the result of a torque increase. This problem can be avoided. (Developer Discharging Principle by Pump Portion) Referring to FIG. 69, the developer supplying step by the pump portion will be described. In this example, as will be described hereinafter, the drive conversion of the rotational force is carried out by the drive converting mechanism so that the suction step (sucking operation through the discharge opening 21a) and the discharging step (discharging operation through the discharge opening 21a) are repeated alternately. The suction step and the discharging step will be described. (Suction Step) First, the suction step (sucking operation through the discharge opening 21a) will be described. As shown in part (a) of FIG. 69, the sucking operation is effected by the pump portion 20b being expanded in a direction indicated by an arrow ω by the above-described drive converting mechanism (cam mechanism).
More particularly, by the sucking operation, the volume of the portion of the developer supply container 1 (pump portion 20b, cylindrical portion 20k and flange portion 21) which can accommodate the developer increases. At this time, the developer supply container 1 is substantially hermetically sealed except for the discharge opening 21a, and the discharge opening 21a is plugged substantially by the developer T. Therefore, the internal pressure of the developer supply container 1 decreases with the increase of the volume of the portion of the developer supply container 1 capable of containing the developer T. At this time, the internal pressure of the developer supply container 1 is lower than the ambient pressure (external air pressure). For this reason, the air outside the developer supply container 1 enters the developer supply container 1 through the discharge opening 21a by the pressure difference between the inside and the outside of the developer supply container 1. At this time, the air is taken in from the outside of the developer supply container 1, and therefore, the developer T in the neighborhood of the discharge opening 21a can be loosened (fluidized). More particularly, by the air impregnated into the developer powder existing in the neighborhood of the discharge opening 21a, the bulk density of the developer powder T is reduced and the developer is fluidized. Since the air is taken into the developer supply container 1 through the discharge opening 21a as a result, the internal pressure of the developer supply container 1 changes in the neighborhood of the ambient pressure (external air pressure) despite the increase of the volume of the developer supply container 1. In this manner, by the fluidization of the developer T, the developer T does not pack or clog in the discharge opening 21a, so that the developer can be smoothly discharged through the discharge opening 21a in the discharging operation which will be described hereinafter.
Therefore, the amount of the developer T (per unit time) discharged through the discharge opening 21a can be maintained substantially at a constant level for a long term. (Discharging Step) As shown in part (b) of FIG. 69, the discharging operation is effected by the pump portion 20b being compressed in a direction indicated by an arrow γ by the above-described drive converting mechanism (cam mechanism). More particularly, by the discharging operation, the volume of the portion of the developer supply container 1 (pump portion 20b, cylindrical portion 20k and flange portion 21) which can accommodate the developer decreases. At this time, the developer supply container 1 is substantially hermetically sealed except for the discharge opening 21a, and the discharge opening 21a is plugged substantially by the developer T until the developer is discharged. Therefore, the internal pressure of the developer supply container 1 rises with the decrease of the volume of the portion of the developer supply container 1 capable of containing the developer T. Since the internal pressure of the developer supply container 1 is higher than the ambient pressure (the external air pressure), the developer T is pushed out by the pressure difference between the inside and the outside of the developer supply container 1, as shown in part (b) of FIG. 69. That is, the developer T is discharged from the developer supply container 1 into the developer receiving apparatus 8. Thereafter, the air in the developer supply container 1 is also discharged with the developer T, and therefore, the internal pressure of the developer supply container 1 decreases. As described in the foregoing, according to this example, the discharging of the developer can be effected efficiently using one reciprocation type pump, and therefore, the mechanism for the developer discharging can be simplified.
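The suction/discharge principle above rests on the container being substantially sealed except for the plugged discharge opening, so a volume change produces a pressure swing about ambient. A minimal illustrative sketch, assuming isothermal ideal-gas behavior while the opening is plugged (the container gas volume of 1000 cm³ is hypothetical; the 15 cm³ stroke is taken from the earlier experiment):

```python
# Illustrative pumping-principle sketch: while the discharge opening 21a is
# plugged by developer, treat the sealed air volume as an isothermal ideal
# gas (P * V = const). Expanding the pump (suction step) drops the internal
# pressure below ambient; compressing it (discharging step) raises it above.

AMBIENT_KPA = 101.3  # external air pressure

def internal_pressure_kpa(v0_cm3: float, dv_cm3: float) -> float:
    """Internal pressure after the pump changes the gas volume by dv_cm3."""
    return AMBIENT_KPA * v0_cm3 / (v0_cm3 + dv_cm3)

V0 = 1000.0  # hypothetical sealed gas volume in the container (cm^3)
p_expanded = internal_pressure_kpa(V0, +15.0)    # suction: below ambient
p_compressed = internal_pressure_kpa(V0, -15.0)  # discharge: above ambient

assert p_expanded < AMBIENT_KPA < p_compressed
```

The pressure difference in each direction is what draws loosening air in through the opening during suction and pushes developer out during discharge.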
(Set Condition of Cam Groove) Referring to FIGS. 71-76, modified examples of the set condition of the cam groove 21b will be described. FIGS. 71-76 are developed views of the cam groove 21b. Referring to the developed views of FIGS. 71-76, the description will be made as to the influence on the operational condition of the pump portion 20b when the configuration of the cam groove 21b is changed. Here, in each of FIGS. 71-76, an arrow A indicates a rotational moving direction of the developer accommodating portion 20 (moving direction of the cam projection 20d); an arrow B indicates the expansion direction of the pump portion 20b; and an arrow C indicates a compression direction of the pump portion 20b. In addition, a groove portion of the cam groove 21b for compressing the pump portion 20b is indicated as a cam groove 21c, and a groove portion for expanding the pump portion 20b is indicated as a cam groove 21d. Furthermore, an angle formed between the cam groove 21c and the rotational moving direction A of the developer accommodating portion 20 is α; an angle formed between the cam groove 21d and the rotational moving direction A is β; and an amplitude (expansion and contraction length of the pump portion 20b), in the expansion and contraction directions B, C of the pump portion 20b, of the cam groove is L. First, the description will be made as to the expansion and contraction length L of the pump portion 20b. When the expansion and contraction length L is shortened, for example, the volume change amount of the pump portion 20b decreases, and therefore, the pressure difference from the external air pressure is reduced. Then, the pressure imparted to the developer in the developer supply container 1 decreases, with the result that the amount of the developer discharged from the developer supply container 1 per one cyclic period (one reciprocation, that is, one expansion and contraction operation of the pump portion 20b) decreases.
From this consideration, as shown in FIG. 71, the amount of the developer discharged when the pump portion 20b is reciprocated once can be decreased as compared with the structure of FIG. 70, if an amplitude L′ is selected so as to satisfy L′<L under the condition that the angles α and β are constant. On the contrary, if L′>L, the developer discharge amount can be increased. As regards the angles α and β of the cam groove, when the angles are increased, for example, the movement distance of the cam projection 20d while the developer accommodating portion 20 rotates for a constant time increases if the rotational speed of the developer accommodating portion 20 is constant, and therefore, as a result, the expansion-and-contraction speed of the pump portion 20b increases. On the other hand, when the cam projection 20d moves in the cam groove 21b, the resistance received from the cam groove 21b is large, and therefore, the torque required for rotating the developer accommodating portion 20 increases as a result. For this reason, as shown in FIG. 72, if the angle α′ of the cam groove 21c and the angle β′ of the cam groove 21d are selected so as to satisfy α′>α and β′>β without changing the expansion and contraction length L, the expansion-and-contraction speed of the pump portion 20b can be increased as compared with the structure of FIG. 70. As a result, the number of expansion and contraction operations of the pump portion 20b per one rotation of the developer accommodating portion 20 can be increased. Furthermore, since the flow speed of the air entering the developer supply container 1 through the discharge opening 21a increases, the loosening effect on the developer existing in the neighborhood of the discharge opening 21a is enhanced. On the contrary, if the selection satisfies α′<α and β′<β, the rotational torque of the developer accommodating portion 20 can be decreased.
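The relation between cam-groove angle and pump speed described above can be sketched geometrically: in the developed view, the cam projection 20d moves circumferentially at a speed set by the rotation, and the axial (expansion/contraction) speed is that circumferential speed times the tangent of the groove angle measured from direction A. The cylinder diameter below is hypothetical, chosen only to make the sketch concrete:

```python
import math

def pump_speed_mm_s(rpm: float, diameter_mm: float, groove_angle_deg: float) -> float:
    """Axial expansion/contraction speed of the pump for a given groove angle.

    v_axial = v_circumferential * tan(angle), where v_circumferential is the
    surface speed of the rotating cylindrical portion in the developed view.
    """
    v_circumferential = math.pi * diameter_mm * rpm / 60.0
    return v_circumferential * math.tan(math.radians(groove_angle_deg))

# Larger groove angle -> faster expansion/contraction (and higher cam
# resistance, hence higher rotational torque). Diameter and angles are
# hypothetical illustration values, not taken from the patent.
slow = pump_speed_mm_s(30, 50.0, 20.0)
fast = pump_speed_mm_s(30, 50.0, 40.0)
assert fast > slow
```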
When a developer having a high flowability is used, for example, the expansion of the pump portion 20b tends to cause the air entering through the discharge opening 21a to blow out the developer existing in the neighborhood of the discharge opening 21a. As a result, there is a possibility that the developer cannot be accumulated sufficiently in the discharging portion 21h, and therefore, the developer discharge amount decreases. In this case, by decreasing the expanding speed of the pump portion 20b in accordance with this selection, the blowing-out of the developer can be suppressed, and therefore, the discharging power can be improved. If, as shown in FIG. 73, the angle of the cam groove 21b is selected so as to satisfy α<β, the expanding speed of the pump portion 20b can be increased as compared with the compressing speed. On the contrary, as shown in FIG. 70, if the angle α is larger than the angle β, the expanding speed of the pump portion 20b can be reduced as compared with the compressing speed. When the developer is in a highly packed state, for example, the operation force of the pump portion 20b is larger in the compression stroke of the pump portion 20b than in the expansion stroke thereof. As a result, the rotational torque for the developer accommodating portion 20 tends to be higher in the compression stroke of the pump portion 20b. However, in this case, if the cam groove 21b is constructed as shown in FIG. 73, the developer loosening effect in the expansion stroke of the pump portion 20b can be enhanced as compared with the structure of FIG. 70. In addition, the resistance received by the cam projection 20d from the cam groove 21b in the compression stroke is small, and therefore, the increase of the rotational torque in the compression of the pump portion 20b can be suppressed. As shown in FIG. 74, a cam groove 21e substantially parallel with the rotational moving direction (arrow A in the Figure) of the developer accommodating portion 20 may be provided between the cam grooves 21c, 21d.
In this case, the cam does not function while the cam projection 20d is moving in the cam groove 21e, and therefore, a step in which the pump portion 20b does not carry out the expanding-and-contracting operation can be provided. If a process in which the pump portion 20b is at rest in the expanded state is provided in this way, the developer loosening effect is improved, because in an initial stage of the discharging, in which the developer is always present in the neighborhood of the discharge opening 21a, the pressure reduction state in the developer supply container 1 is maintained during the rest period. On the other hand, in a last part of the discharging, the developer is not stored sufficiently in the discharging portion 21h, because the amount of the developer inside the developer supply container 1 is small and because the developer existing in the neighborhood of the discharge opening 21a is blown out by the air entering through the discharge opening 21a. In other words, the developer discharge amount tends to gradually decrease, but even in such a case, by continuing to feed the developer by rotating the developer accommodating portion 20 during the rest period in the expanded state, the discharging portion 21h can be filled sufficiently with the developer. Therefore, a stabilized developer discharge amount can be maintained until the developer supply container 1 becomes empty. In addition, in the structure of FIG. 70, by making the expansion and contraction length L of the cam groove longer, the developer discharging amount per one cyclic period of the pump portion 20b can be increased. However, in this case, the amount of the volume change of the pump portion 20b increases, and therefore, the pressure difference from the external air pressure also increases. For this reason, the driving force required for driving the pump portion 20b also increases, and therefore, there is a liability that the drive load required of the developer receiving apparatus 8 is excessively large.
Under the circumstances, in order to increase the developer discharge amount per one cyclic period of the pump portion 20b without giving rise to such a problem, the angle of the cam groove 21b is selected so as to satisfy α>β, by which the compressing speed of the pump portion 20b can be increased as compared with the expanding speed, as shown in FIG. 75. Verification experiments were carried out as to the structure of FIG. 75. In the experiments, the developer is filled in the developer supply container 1 having the cam groove 21b shown in FIG. 75; the volume change of the pump portion 20b is carried out in the order of the compressing operation and then the expanding operation to discharge the developer; and the discharge amounts are measured. The experimental conditions are that the amount of the volume change of the pump portion 20b is 50 cm³, the compressing speed of the pump portion 20b is 180 cm³/s, and the expanding speed of the pump portion 20b is 60 cm³/s. The cyclic period of the operation of the pump portion 20b is approx. 1.1 seconds. For comparison, the developer discharge amounts are also measured in the case of the structure of FIG. 70. In this case, the compressing speed and the expanding speed of the pump portion 20b are 90 cm³/s, and the amount of the volume change of the pump portion 20b and one cyclic period of the pump portion 20b are the same as in the example of FIG. 75. The results of the verification experiments will be described. Part (a) of FIG. 77 shows the change of the internal pressure of the developer supply container 1 in the volume change of the pump portion 20b. In part (a) of FIG. 77, the abscissa represents the time, and the ordinate represents a relative pressure in the developer supply container 1 (+ is the positive pressure side, − is the negative pressure side) relative to the ambient pressure (reference (0)).
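The stated experimental conditions can be cross-checked arithmetically: one pump cycle is the compression time plus the expansion time, each being the volume change divided by the corresponding speed. Both configurations come out to the same approx. 1.1 s period, as the text requires for a fair comparison:

```python
# Consistency check of the experimental conditions: one pump cycle =
# (volume change / compressing speed) + (volume change / expanding speed).

def cycle_period_s(dv_cm3: float, compress_cm3_s: float, expand_cm3_s: float) -> float:
    """Duration of one compression+expansion cycle of the pump portion 20b."""
    return dv_cm3 / compress_cm3_s + dv_cm3 / expand_cm3_s

t_fig75 = cycle_period_s(50, 180, 60)  # asymmetric speeds (FIG. 75 conditions)
t_fig70 = cycle_period_s(50, 90, 90)   # symmetric speeds (FIG. 70 conditions)
# Both are ~1.11 s, matching the stated "approx. 1.1 seconds" and confirming
# that the two structures share the same cyclic period.
```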
Solid lines and broken lines are for the developer supply container 1 having the cam groove 21b of FIG. 75, and that of FIG. 70, respectively. In the compressing operation of the pump portion 20b, the internal pressures rise with the elapse of time and reach their peaks upon completion of the compressing operation, in both examples. At this time, the pressure in the developer supply container 1 changes within a positive range relative to the ambient pressure (external air pressure), and therefore, the inside developer is pressurized, and the developer is discharged through the discharge opening 21a. Subsequently, in the expanding operation of the pump portion 20b, the volume of the pump portion 20b increases, and the internal pressures of the developer supply container 1 decrease, in both examples. At this time, the pressure in the developer supply container 1 changes from the positive pressure to the negative pressure relative to the ambient pressure (external air pressure), and the pressure continues to be applied to the inside developer until the air is taken in through the discharge opening 21a, and therefore, the developer is discharged through the discharge opening 21a. That is, in the volume change of the pump portion 20b, when the developer supply container 1 is in the positive pressure state, that is, when the inside developer is pressurized, the developer is discharged, and therefore, the developer discharge amount in the volume change of the pump portion 20b increases with the time-integration amount of the pressure. As shown in part (a) of FIG. 77, the peak pressure at the time of completion of the compressing operation of the pump portion 20b is 5.7 kPa with the structure of FIG. 75 and is 5.4 kPa with the structure of FIG. 70, and it is higher in the structure of FIG. 75 despite the fact that the volume change amounts of the pump portion 20b are the same.
This is because, by increasing the compressing speed of the pump portion 20b, the inside of the developer supply container 1 is pressurized abruptly, and the developer is concentrated toward the discharge opening 21a at once, with the result that the discharge resistance in the discharging of the developer through the discharge opening 21a becomes large. Since the discharge openings 21a have small diameters in both examples, the tendency is remarkable. Since the time required for one cyclic period of the pump portion is the same in both examples as shown in part (a) of FIG. 77, the time integration amount of the pressure is larger in the example of FIG. 75. The following Table 3 shows the measured data of the developer discharge amount per one cyclic period operation of the pump portion 20b.

TABLE 3
Amount of developer discharge (g)
FIG. 70: 3.4
FIG. 75: 3.7
FIG. 76: 4.5

As shown in Table 3, the developer discharge amount is 3.7 g in the structure of FIG. 75, and is 3.4 g in the structure of FIG. 70, that is, it is larger in the case of the FIG. 75 structure. From these results and the results of part (a) of FIG. 77, it has been confirmed that the developer discharge amount per one cyclic period of the pump portion 20b increases with the time integration amount of the pressure. From the foregoing, the developer discharging amount per one cyclic period of the pump portion 20b can be increased by making the compressing speed of the pump portion 20b higher as compared with the expansion speed and making the peak pressure in the compressing operation of the pump portion 20b higher, as shown in FIG. 75. The description will be made as to another method for increasing the developer discharging amount per one cyclic period of the pump portion 20b. With the cam groove 21b shown in FIG. 76, similarly to the case of FIG. 74, a cam groove 21e substantially parallel with the rotational moving direction of the developer accommodating portion 20 is provided between the cam groove 21c and the cam groove 21d.
However, in the case of the cam groove 21b shown in FIG. 76, the cam groove 21e is provided at such a position that, in a cyclic period of the pump portion 20b, the operation of the pump portion 20b stops in the state in which the pump portion 20b is compressed, after the compressing operation of the pump portion 20b. With the structure of FIG. 76, the developer discharge amount was measured similarly. In the verification experiments for this, the compressing speed and the expanding speed of the pump portion 20b are 180 cm³/s, and the other conditions are the same as in the FIG. 75 example. The results of the verification experiments will be described. Part (b) of FIG. 77 shows changes of the internal pressure of the developer supply container 1 in the expanding-and-contracting operation of the pump portion 20b. Solid lines and broken lines are for the developer supply container 1 having the cam groove 21b of FIG. 76, and that of FIG. 75, respectively. Also in the case of FIG. 76, the internal pressure rises with the elapse of time during the compressing operation of the pump portion 20b, and reaches the peak upon completion of the compressing operation. At this time, similarly to FIG. 75, the pressure in the developer supply container 1 changes within the positive range, and therefore, the inside developer is discharged. The compressing speed of the pump portion 20b in the example of FIG. 76 is the same as in the FIG. 75 example, and therefore, the peak pressure upon completion of the compressing operation of the pump portion 20b is 5.7 kPa, which is equivalent to the FIG. 75 example. Subsequently, when the pump portion 20b stops in the compression state, the internal pressure of the developer supply container 1 gradually decreases. This is because the pressure produced by the compressing operation of the pump portion 20b remains after the operation stop of the pump portion 20b, and the inside developer and the air are discharged by the pressure.
However, the internal pressure can be maintained at a level higher than in the case that the expanding operation is started immediately after completion of the compressing operation, and therefore, a larger amount of the developer is discharged during this period. When the expanding operation starts thereafter, similarly to the example of FIG. 75, the internal pressure of the developer supply container 1 decreases, and the developer is discharged until the pressure in the developer supply container 1 becomes negative, since the inside developer is pressed continuously. When the time integration values of the pressure are compared, as shown in part (b) of FIG. 77, the value is larger in the case of FIG. 76, because the high internal pressure is maintained during the rest period of the pump portion 20b under the condition that the time durations of the unit cyclic periods of the pump portion 20b in these examples are the same. As shown in Table 3, the measured developer discharge amount per one cyclic period of the pump portion 20b is 4.5 g in the case of FIG. 76, which is larger than in the case of FIG. 75 (3.7 g). From the results of Table 3 and the results shown in part (b) of FIG. 77, it has been confirmed that the developer discharge amount per one cyclic period of the pump portion 20b increases with the time integration amount of the pressure. Thus, in the example of FIG. 76, the operation of the pump portion 20b is stopped in the compressed state, after the compressing operation. For this reason, the peak pressure in the developer supply container 1 in the compressing operation of the pump portion 20b is high, and the pressure is maintained at a level as high as possible, by which the developer discharging amount per one cyclic period of the pump portion 20b can be further increased.
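The argument that the discharge amount tracks the time integration of the positive internal pressure can be illustrated numerically. The two sampled pressure traces below are hypothetical; only the 5.7 kPa peak, the equal cycle durations, and the qualitative shapes (immediate expansion vs. a rest period in the compressed state) follow the text:

```python
# Compare the time integration of the positive internal pressure for two
# equal-duration pump cycles: expanding immediately after compression
# (FIG. 75-like) vs. resting in the compressed state first (FIG. 76-like).

def positive_pressure_integral(trace_kpa, dt_s=0.1):
    """Integrate the positive part of a sampled pressure trace (kPa*s)."""
    return sum(max(p, 0.0) * dt_s for p in trace_kpa)

PEAK = 5.7  # kPa, peak at completion of the compressing operation (from text)
# Hypothetical traces with the same number of samples (same cyclic period):
no_dwell   = [0.0, 2.85, PEAK, 2.85, 0.0, 0.0, 0.0]  # expand right away
with_dwell = [0.0, 2.85, PEAK, 5.0, 4.5, 2.0, 0.0]   # hold compressed, then expand

assert positive_pressure_integral(with_dwell) > positive_pressure_integral(no_dwell)
```

Holding the pump compressed keeps the trace near its peak, so the integral, and with it the discharge per cycle, is larger even though the peak pressure and the cycle duration are unchanged.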
As described in the foregoing, by changing the configuration of the cam groove 21b, the discharging power of the developer supply container 1 can be adjusted, and therefore, the apparatus of this embodiment can respond to the developer amount required by the developer receiving apparatus 8 and to a property or the like of the developer to be used. In FIGS. 70-76, the discharging operation and the sucking operation of the pump portion 20b are alternately carried out, but the discharging operation and/or the sucking operation may be temporarily stopped partway and resumed after a predetermined time. For example, it is a possible alternative that the discharging operation of the pump portion 20b is not carried out monotonically, but the compressing operation of the pump portion is temporarily stopped partway, and then, the compressing operation is resumed to effect the discharge. The same applies to the sucking operation. Furthermore, the discharging operation and/or the sucking operation may be of a multi-step type, as long as the developer discharge amount and the discharging speed are satisfied. Thus, even when the discharging operation and/or the sucking operation are divided into multi-steps, the situation is still that the discharging operation and the sucking operation are alternately repeated. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened.
In addition, in this example, the driving force for rotating the feeding portion (helical projection 20c) and the driving force for reciprocating the pump portion (bellows-like pump portion 20b) are received by a single drive inputting portion (gear portion 20a). Therefore, the structure of the drive inputting mechanism of the developer supply container can be simplified. In addition, by the single driving mechanism (driving gear 300) provided in the developer receiving apparatus, the driving force is applied to the developer supply container, and therefore, the driving mechanism for the developer receiving apparatus can be simplified. Furthermore, a simple and easy mechanism can be employed for positioning the developer supply container relative to the developer receiving apparatus. With the structure of the example, the rotational force for rotating the feeding portion received from the developer receiving apparatus is converted by the drive converting mechanism of the developer supply container, by which the pump portion can be reciprocated properly. In other words, even though the developer supply container does not receive a reciprocating force from the developer receiving apparatus, the appropriate drive of the pump portion is assured. In addition, in this example, the flange portion 21 of the developer supply container 1 is provided with the engaging portions 3b2, 3b4 similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiments, the mechanism for connecting and spacing the developer receiving portion 11 of the developer receiving apparatus 8 relative to the developer supply container 1 by displacing the developer receiving portion 11 can be simplified.
More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or an increase in cost due to the increase of the number of parts can be avoided. The connection between the developer supply container 1 and the developer receiving apparatus 8 can be properly established using the mounting operation of the developer supply container 1 with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container 1, the spacing and resealing between the developer supply container 1 and the developer receiving apparatus 8 can be carried out with minimum contamination with the developer. Embodiment 9 Referring to FIG. 78 (parts (a) to (c)), structures of Embodiment 9 will be described. Part (a) of FIG. 78 is a schematic perspective view of the developer supply container 1, part (b) of FIG. 78 is a schematic sectional view illustrating a state in which the pump portion 20b expands, and part (c) is a schematic perspective view around the regulating member 56. In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. In this example, a drive converting mechanism (cam mechanism) is provided together with the pump portion 20b at a position dividing the cylindrical portion 20k with respect to a rotational axis direction of the developer supply container 1, which is significantly different from Embodiment 8. The other structures are substantially similar to the structures of Embodiment 8. As shown in part (a) of FIG. 78, in this example, the cylindrical portion 20k which feeds the developer toward the discharging portion 21h with rotation comprises a cylindrical portion 20k1 and a cylindrical portion 20k2.
The pump portion 20b is provided between the cylindrical portion 20k1 and the cylindrical portion 20k2. A cam flange portion 19 functioning as a drive converting mechanism is provided at a position corresponding to the pump portion 20b. An inner surface of the cam flange portion 19 is provided with a cam groove 19a extending over the entire circumference as in Embodiment 8. On the other hand, an outer surface of the cylindrical portion 20k2 is provided with a cam projection 20d functioning as a drive converting mechanism and locked with the cam groove 19a. In addition, the developer receiving apparatus 8 is provided with a portion similar to the rotational moving direction regulating portion 29 (FIG. 66), which functions as a holding portion for the cam flange portion 19 so as to prevent its rotation. Furthermore, the developer receiving apparatus 8 is provided with a portion similar to the rotational axis direction regulating portion 30 (FIG. 66), which functions as a holding portion for the cam flange portion 19 so as to prevent its movement in the rotational axis direction. Therefore, when a rotational force is inputted to the gear portion 20a, the pump portion 20b reciprocates together with the cylindrical portion 20k2 in the directions ω and γ. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in the case that the pump portion 20b is disposed at a position dividing the cylindrical portion, the pump portion 20b can be reciprocated by the rotational driving force received from the developer receiving apparatus 8, as in Embodiment 8.
Here, the structure of Embodiment 8 in which the pump portion20bis directly connected with the discharging portion21his preferable from the standpoint that the pumping action of the pump portion20bcan be efficiently applied to the developer stored in the discharging portion21h. In addition, this embodiment requires an additional cam flange portion (drive converting mechanism)19which has to be held substantially stationary by the developer receiving apparatus8. Furthermore, this embodiment requires an additional mechanism, in the developer receiving apparatus8, for limiting movement of the cam flange portion19in the rotational axis direction of the cylindrical portion20k. Therefore, in view of such a complication, the structure of Embodiment 8 using the flange portion21is preferable. This is because in Embodiment 8, the flange portion21is held by the developer receiving apparatus8in order to make the portion where the developer receiving apparatus side and the developer supply container side are directly connected (the portion corresponding to the developer receiving port11aand the shutter opening4fin Embodiment 2) substantially immovable, and one of the cam mechanisms constituting the drive converting mechanism is provided on the flange portion21. That is, the drive converting mechanism is simplified in this manner. In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiments, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified.
More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 10 Referring toFIG.79, a structure of Embodiment 10 will be described. In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. This example is significantly different from Embodiment 5 in that a drive converting mechanism (cam mechanism) is provided at an upstream end of the developer supply container1with respect to the feeding direction for the developer and in that the developer in the cylindrical portion20kis fed using a stirring member20m. The other structures are substantially similar to the structures of Embodiment 8. As shown inFIG.79, in this example, the stirring member20mis provided in the cylindrical portion20kas the feeding portion and rotates relative to the cylindrical portion20k. The stirring member20mis rotated, by the rotational force received by the gear portion20a, relative to the cylindrical portion20k, which is non-rotatably fixed to the developer receiving apparatus8, by which the developer is fed in a rotational axis direction toward the discharging portion21hwhile being stirred.
More particularly, the stirring member20mis provided with a shaft portion and a feeding blade portion fixed to the shaft portion. In this example, the gear portion20aas the drive inputting portion is provided at one longitudinal end portion of the developer supply container1(right-hand side inFIG.79), and the gear portion20ais connected co-axially with the stirring member20m. In addition, a hollow cam flange portion21iwhich is integral with the gear portion20ais provided at one longitudinal end portion of the developer supply container (right-hand side inFIG.79) so as to rotate co-axially with the gear portion20a. The cam flange portion21iis provided with a cam groove21bwhich extends in an inner surface over the entire inner circumference, and the cam groove21bis engaged with two cam projections20dprovided on an outer surface of the cylindrical portion20kat substantially diametrically opposite positions, respectively. One end portion (discharging portion21hside) of the cylindrical portion20kis fixed to the pump portion20b, and the pump portion20bis fixed to a flange portion21at one end portion (discharging portion21hside) thereof. They are fixed to each other by welding. Therefore, in the state that it is mounted to the developer receiving apparatus8, the pump portion20band the cylindrical portion20kare substantially non-rotatable relative to the flange portion21. Also in this example, similarly to Embodiment 8, when the developer supply container1is mounted to the developer receiving apparatus8, the flange portion21(discharging portion21h) is prevented from movement in the rotational moving direction and in the rotational axis direction by the developer receiving apparatus8. Therefore, when the rotational force is inputted from the developer receiving apparatus8to the gear portion20a, the cam flange portion21irotates together with the stirring member20m.
As a result, the cam projection20dis driven by the cam groove21bof the cam flange portion21iso that the cylindrical portion20kreciprocates in the rotational axis direction to expand and contract the pump portion20b. In this manner, by the rotation of the stirring member20m, the developer is fed to the discharging portion21h, and the developer in the discharging portion21his finally discharged through a discharge opening21aby the suction and discharging operation of the pump portion20b. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, in the structure of this example, similarly to Embodiments 8-9, both of the rotating operation of the stirring member20mprovided in the cylindrical portion20kand the reciprocation of the pump portion20bcan be performed by the rotational force received by the gear portion20afrom the developer receiving apparatus8. In the case of this example, the stress applied to the developer in the developer feeding step at the cylindrical portion20ktends to be relatively large, and the driving torque is relatively large, and from this standpoint, the structures of Embodiment 8 and Embodiment 6 are preferable.
In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiments, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 11 Referring toFIG.80(parts (a)-(d)), structures of Embodiment 11 will be described. Part (a) ofFIG.80is a schematic perspective view of a developer supply container1, (b) is an enlarged sectional view of the developer supply container1, and (c)-(d) are enlarged perspective views of the cam portions. In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. This example is substantially the same as Embodiment 8 except that the pump portion20bis made non-rotatable by the developer receiving apparatus8.
In this example, as shown in parts (a) and (b) ofFIG.80, a relaying portion20fis provided between a pump portion20band a cylindrical portion20kof a developer accommodating portion20. The relaying portion20fis provided with two cam projections20don the outer surface thereof at the positions substantially diametrically opposed to each other, and one end thereof (discharging portion21hside) is connected to and fixed to the pump portion20b(welding method). Another end (discharging portion21hside) of the pump portion20bis fixed to a flange portion21(welding method), and in the state that it is mounted to the developer receiving apparatus8, it is substantially non-rotatable. A sealing member27is compressed between the cylindrical portion20kand the relaying portion20f, and the cylindrical portion20kis unified so as to be rotatable relative to the relaying portion20f. The outer peripheral portion of the cylindrical portion20kis provided with a rotation receiving portion (projection)20gfor receiving a rotational force from a cam gear portion22, as will be described hereinafter. On the other hand, the cam gear portion22, which is cylindrical, is provided so as to cover the outer surface of the relaying portion20f. The cam gear portion22is engaged with the flange portion21so as to be substantially stationary (movement within the limit of play is permitted), and is rotatable relative to the flange portion21. As shown in part (c) ofFIG.80, the cam gear portion22is provided with a gear portion22aas a drive inputting portion for receiving the rotational force from the developer receiving apparatus8, and a cam groove22bengaged with the cam projection20d. In addition, as shown in part (d) ofFIG.80, the cam gear portion22is provided with a rotational engaging portion (recess)7cengaged with the rotation receiving portion20gto rotate together with the cylindrical portion20k.
Thus, by the above-described engaging relation, the rotational engaging portion (recess)7cis permitted to move relative to the rotation receiving portion20gin the rotational axis direction, but it can rotate integrally in the rotational moving direction. The description will be made as to a developer supplying step of the developer supply container1in this example. When the gear portion22areceives a rotational force from the driving gear9of the developer receiving apparatus8, the cam gear portion22rotates together with the cylindrical portion20kbecause of the engaging relation between the rotational engaging portion7cand the rotation receiving portion20g. That is, the rotational engaging portion7cand the rotation receiving portion20gfunction to transmit the rotational force which is received by the gear portion22afrom the developer receiving apparatus8, to the cylindrical portion20k(feeding portion20c). On the other hand, similarly to Embodiments 8-10, when the developer supply container1is mounted to the developer receiving apparatus8, the flange portion21is non-rotatably supported by the developer receiving apparatus8, and therefore, the pump portion20band the relaying portion20ffixed to the flange portion21are also non-rotatable. In addition, the movement of the flange portion21in the rotational axis direction is prevented by the developer receiving apparatus8. Therefore, when the cam gear portion22rotates, a cam function occurs between the cam groove22bof the cam gear portion22and the cam projection20dof the relaying portion20f. Thus, the rotational force inputted to the gear portion22afrom the developer receiving apparatus8is converted to the force reciprocating the relaying portion20fand the cylindrical portion20kin the rotational axis direction of the developer accommodating portion20.
As a result, the pump portion20bwhich is fixed to the flange portion21at one end position (left side in part (b) of theFIG.80) with respect to the reciprocating direction expands and contracts in interrelation with the reciprocation of the relaying portion20fand the cylindrical portion20k, thus effecting a pump operation. In this manner, with the rotation of the cylindrical portion20k, the developer is fed to the discharging portion21hby the feeding portion20c, and the developer in the discharging portion21his finally discharged through a discharge opening21aby the suction and discharging operation of the pump portion20b. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, in this example, the rotational force received from the developer receiving apparatus8is transmitted and converted simultaneously to the force rotating the cylindrical portion20kand to the force reciprocating (expanding-and-contracting operation) the pump portion20bin the rotational axis direction. Therefore, also in this example, similarly to Embodiments 8-10, by the rotational force received from the developer receiving apparatus8, both of the rotating operation of the cylindrical portion20k(feeding portion20c) and the reciprocation of the pump portion20bcan be effected. 
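The combined action just described, where a single rotational input simultaneously feeds developer into the discharging portion21hand strokes the pump portion20b, can be sketched as a simple mass balance. All rates below (grams per revolution, strokes per revolution) are invented for illustration and do not appear in this description.

```python
# Illustrative sketch (assumed numbers, not from the patent): per input
# revolution, the rotating cylindrical portion feeds developer into the
# discharging portion, while the cam-driven pump runs a fixed number of
# suction/discharge strokes, each expelling up to a fixed amount.

FEED_PER_REV_G = 2.0          # developer fed into the discharging portion per turn (assumed)
PUMP_CYCLES_PER_REV = 2       # pump strokes per turn, set by the cam shape (assumed)
DISCHARGE_PER_CYCLE_G = 0.8   # developer expelled per contraction stroke (assumed)

def simulate(revolutions):
    """Return (developer held in the discharging portion, total discharged)."""
    held = 0.0
    discharged = 0.0
    for _ in range(revolutions):
        held += FEED_PER_REV_G                       # feeding by rotation
        for _ in range(PUMP_CYCLES_PER_REV):         # pump strokes by the cam
            out = min(held, DISCHARGE_PER_CYCLE_G)
            held -= out
            discharged += out
    return held, discharged
```

With these assumed rates the feed (2.0 g/rev) slightly exceeds the pump capacity (1.6 g/rev), so developer accumulates in the discharging portion; matching the two rates is the kind of design balance the single-input drive conversion makes possible.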
In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 12 Referring to parts (a) and (b) of theFIG.81, Embodiment 12 will be described. Part (a) of theFIG.81is a schematic perspective view of a developer supply container1, part (b) is an enlarged sectional view of the developer supply container. In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. 
This example is significantly different from Embodiment 8 in that a rotational force received from a driving gear9of a developer receiving apparatus8is converted to a reciprocating force for reciprocating a pump portion20b, and then the reciprocating force is converted to a rotational force, by which a cylindrical portion20kis rotated. In this example, as shown in part (b) ofFIG.81, a relaying portion20fis provided between the pump portion20band the cylindrical portion20k. The relaying portion20fincludes two cam projections20dat substantially diametrically opposite positions, respectively, and one end side thereof (discharging portion21hside) is connected and fixed to the pump portion20bby welding. Another end (discharging portion21hside) of the pump portion20bis fixed to a flange portion21(welding method), and in the state that it is mounted to the developer receiving apparatus8, it is substantially non-rotatable. Between the one end portion of the cylindrical portion20kand the relaying portion20f, a sealing member27is compressed, and the cylindrical portion20kis unified such that it is rotatable relative to the relaying portion20f. An outer periphery portion of the cylindrical portion20kis provided with two cam projections20iat substantially diametrically opposite positions, respectively. On the other hand, a cylindrical cam gear portion22is provided so as to cover the outer surfaces of the pump portion20band the relaying portion20f. The cam gear portion22is engaged so that it is non-movable relative to the flange portion21in a rotational axis direction of the cylindrical portion20kbut it is rotatable relative thereto. The cam gear portion22is provided with a gear portion22aas a drive inputting portion for receiving the rotational force from the developer receiving apparatus8, and a cam groove22bengaged with the cam projection20d.
Furthermore, there is provided a cam flange portion19covering the outer surfaces of the relaying portion20fand the cylindrical portion20k. When the developer supply container1is mounted to a mounting portion8fof the developer receiving apparatus8, the cam flange portion19is substantially non-movable. The cam flange portion19is provided with a cam groove19awhich is engaged with the cam projection20i. A developer supplying step in this example will be described. The gear portion22areceives a rotational force from the driving gear9of the developer receiving apparatus8, by which the cam gear portion22rotates. Then, since the pump portion20band the relaying portion20fare held non-rotatably by the flange portion21, a cam function occurs between the cam groove22bof the cam gear portion22and the cam projection20dof the relaying portion20f. More particularly, the rotational force inputted to the gear portion22afrom the developer receiving apparatus8is converted to a force reciprocating the relaying portion20fin the rotational axis direction of the cylindrical portion20k. As a result, the pump portion20bwhich is fixed to the flange portion21at one end with respect to the reciprocating direction (the left side of part (b) ofFIG.81) expands and contracts in interrelation with the reciprocation of the relaying portion20f, thus effecting the pump operation. When the relaying portion20freciprocates, a cam function works between the cam groove19aof the cam flange portion19and the cam projection20i, by which the force in the rotational axis direction is converted to a force in the rotational moving direction, and the force is transmitted to the cylindrical portion20k. As a result, the cylindrical portion20k(feeding portion20c) rotates.
In this manner, with the rotation of the cylindrical portion20k, the developer is fed to the discharging portion21hby the feeding portion20c, and the developer in the discharging portion21his finally discharged through a discharge opening21aby the suction and discharging operation of the pump portion20b. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, in this example, the rotational force received from the developer receiving apparatus8is converted to the force reciprocating the pump portion20bin the rotational axis direction (expanding-and-contracting operation), and then the force is converted to a force rotating the cylindrical portion20kand is transmitted. Therefore, also in this example, similarly to Embodiment 11, by the rotational force received from the developer receiving apparatus8, both of the rotating operation of the cylindrical portion20k(feeding portion20c) and the reciprocation of the pump portion20bcan be effected. However, in this example, the rotational force inputted from the developer receiving apparatus8is converted to the reciprocating force and then is converted to the force in the rotational moving direction, with the result of a complicated structure of the drive converting mechanism, and therefore, Embodiments 8-11 in which the re-conversion is unnecessary are preferable.
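The two-stage conversion of this embodiment (rotation to reciprocation at the cam gear, then reciprocation back to rotation at the cam flange) can be sketched as a chain of two functions. The sinusoidal first-stage cam, the helical second-stage groove, and all numeric values are assumptions for illustration, not values from this description.

```python
import math

# Illustrative sketch (assumed cam shapes, not from the patent).
# Stage 1: the rotating cam gear's groove reciprocates the non-rotatable
# relaying portion (rotation -> axial motion).
# Stage 2: the axial motion of the cam projection20i in the stationary cam
# flange's groove19a turns the cylindrical portion (axial -> rotation).

def stage1_axial(input_deg, stroke=4.0, lobes=1):
    """Axial position (mm) of the relaying portion for the input gear angle."""
    return stroke / 2.0 * (1.0 - math.cos(math.radians(input_deg) * lobes))

def stage2_rotation(axial_mm, deg_per_mm=10.0):
    """Rotation (deg) of the cylindrical portion produced by the assumed helical groove."""
    return axial_mm * deg_per_mm

def cylinder_angle(input_deg):
    """Chain the two conversions: input rotation -> axial -> output rotation."""
    return stage2_rotation(stage1_axial(input_deg))
```

Because the output rotation is derived from the reciprocation rather than taken directly from the input, the chain adds parts and backlash at each stage, which is the complication the text cites when preferring Embodiments 8-11.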
In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiments, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 13 Referring to parts (a)-(b) ofFIG.82and parts (a)-(d) ofFIG.83, Embodiment 13 will be described. Part (a) ofFIG.82is a schematic perspective view of a developer supply container1, part (b) is an enlarged sectional view of the developer supply container1, and parts (a)-(d) ofFIG.83are enlarged views of a drive converting mechanism. In parts (a)-(d) ofFIG.83, a gear ring60and a rotational engaging portion60bare shown as always taking top positions for better illustration of the operations thereof.
In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. In this example, the drive converting mechanism employs a bevel gear, in contrast to the foregoing embodiments. As shown in part (b) ofFIG.82, a relaying portion20fis provided between a pump portion20band a cylindrical portion20k. The relaying portion20fis provided with an engaging projection20hengaged with a connecting portion62which will be described hereinafter. Another end (discharging portion21hside) of the pump portion20bis fixed to a flange portion21(welding method), and in the state that it is mounted to the developer receiving apparatus8, it is substantially non-rotatable. A sealing member27is compressed between the discharging portion21hside end of the cylindrical portion20kand the relaying portion20f, and the cylindrical portion20kis unified so as to be rotatable relative to the relaying portion20f. An outer periphery portion of the cylindrical portion20kis provided with a rotation receiving portion (projection)20gfor receiving a rotational force from the gear ring60which will be described hereinafter. On the other hand, a cylindrical gear ring60is provided so as to cover the outer surface of the cylindrical portion20k. The gear ring60is rotatable relative to the flange portion21. As shown in parts (a) and (b) ofFIG.82, the gear ring60includes a gear portion60afor transmitting the rotational force to the bevel gear61which will be described hereinafter and a rotational engaging portion (recess)60bfor engaging with the rotation receiving portion20gto rotate together with the cylindrical portion20k.
Thus, by the above-described engaging relation, the rotational engaging portion (recess)60bis permitted to move relative to the rotation receiving portion20gin the rotational axis direction, but it can rotate integrally in the rotational moving direction. On the outer surface of the flange portion21, the bevel gear61is provided so as to be rotatable relative to the flange portion21. Furthermore, the bevel gear61and the engaging projection20hare connected by a connecting portion62. A developer supplying step of the developer supply container1will be described. When the cylindrical portion20krotates by the gear portion20aof the developer accommodating portion20receiving the rotational force from the driving gear9of the developer receiving apparatus8, the gear ring60rotates together with the cylindrical portion20ksince the cylindrical portion20kis in engagement with the gear ring60by the rotation receiving portion20g. That is, the rotation receiving portion20gand the rotational engaging portion60bfunction to transmit, to the gear ring60, the rotational force inputted from the developer receiving apparatus8to the gear portion20a. On the other hand, when the gear ring60rotates, the rotational force is transmitted to the bevel gear61from the gear portion60aso that the bevel gear61rotates. The rotation of the bevel gear61is converted to reciprocating motion of the engaging projection20hthrough the connecting portion62, as shown in parts (a)-(d) ofFIG.83. By this, the relaying portion20fhaving the engaging projection20his reciprocated. As a result, the pump portion20bexpands and contracts in interrelation with the reciprocation of the relaying portion20fto effect a pump operation. In this manner, with the rotation of the cylindrical portion20k, the developer is fed to the discharging portion21hby the feeding portion20c, and the developer in the discharging portion21his finally discharged through a discharge opening21aby the suction and discharging operation of the pump portion20b.
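The bevel gear61and connecting portion62act much like a crank and connecting rod, turning the gear's rotation into the back-and-forth travel of the engaging projection20h. The following classic crank-slider sketch is an illustrative assumption about the linkage geometry; the 2.0 mm crank radius and 6.0 mm rod length are invented values.

```python
import math

# Illustrative sketch (assumed link geometry, not from the patent): the
# bevel gear's rotation swings the connecting portion, which pushes and
# pulls the engaging projection20h along the rotational axis, thereby
# reciprocating the relaying portion20f and the pump portion20b.

def slider_position(crank_deg, crank_r=2.0, rod_len=6.0):
    """Axial position of the projection (mm) for a crank angle (crank-slider kinematics)."""
    th = math.radians(crank_deg)
    # crank contribution plus the projection of the connecting rod onto the stroke axis
    return crank_r * math.cos(th) + math.sqrt(rod_len**2 - (crank_r * math.sin(th))**2)

def stroke(crank_r=2.0, rod_len=6.0):
    """Full pump travel per half revolution of the bevel gear (= 2 * crank radius)."""
    return slider_position(0, crank_r, rod_len) - slider_position(180, crank_r, rod_len)
```

One revolution of the bevel gear yields one complete suction-and-discharge cycle of the pump, with the stroke fixed by the crank radius of the linkage.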
As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in this example, similarly to the Embodiment 8-Embodiment 12, both of the reciprocation of the pump portion20band the rotating operation of the cylindrical portion20k(feeding portion20c) are effected by the rotational force received from the developer receiving apparatus8. However, in the case of using the bevel gear, the number of parts is large, and Embodiment 8-Embodiment 12 are preferable from this standpoint. In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. 
Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 14 Referring toFIG.84(parts (a)-(c)), structures of Embodiment 14 will be described. Part (a) ofFIG.84is an enlarged perspective view of a drive converting mechanism, and parts (b) and (c) are enlarged views thereof as seen from the top. In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. In parts (b) and (c) ofFIG.84, a gear ring60and a rotational engaging portion60bare schematically shown as being at the top for the convenience of illustration of the operation. In this embodiment, the drive converting mechanism includes a magnet (magnetic field generating means), which is a significant difference from the foregoing embodiments. As shown inFIG.84(FIG.83, if necessary), the bevel gear61is provided with a rectangular parallelepiped shape magnet63, and an engaging projection20hof a relaying portion20fis provided with a bar-like magnet64having a magnetic pole directed to the magnet63. The rectangular parallelepiped shape magnet63has an N pole at one longitudinal end thereof and an S pole at the other end, and the orientation thereof changes with the rotation of the bevel gear61. The bar-like magnet64has an S pole at one longitudinal end adjacent an outside of the container and an N pole at the other end, and it is movable in the rotational axis direction. The magnet64is non-rotatable by an elongated guide groove formed in the outer peripheral surface of the flange portion21.
With such a structure, when the magnet63is rotated by the rotation of the bevel gear61, the magnetic pole of the magnet63facing the magnet64exchanges, and therefore, attraction and repulsion between the magnet63and the magnet64are repeated alternately. As a result, a pump portion20bfixed to the relaying portion20fis reciprocated in the rotational axis direction. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in the structure of this example, similarly to Embodiment 8-Embodiment 13, both of the reciprocation of the pump portion20band the rotating operation of the feeding portion20c(cylindrical portion20k) can be effected by the rotational force received from the developer receiving apparatus8. In this example, the bevel gear61is provided with the magnet, but this is not inevitable, and another way of using magnetic force (a magnetic field) is applicable. From the standpoint of certainty of the drive conversion, Embodiments 8-13 are preferable. In the case that the developer accommodated in the developer supply container1is a magnetic developer (one component magnetic toner, two component magnetic carrier), there is a liability that the developer is trapped in an inner wall portion of the container adjacent to the magnet. Then, an amount of the developer remaining in the developer supply container1may be large, and from this standpoint, the structures of Embodiments 5-10 are preferable.
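The alternation of attraction and repulsion described above can be captured in a sign-only sketch. The pole orientations (which pole of magnet63faces magnet64at zero degrees, and which direction of force expands the pump) are assumptions for illustration; the patent text leaves that geometry open.

```python
import math

# Illustrative sign-only model (not from the patent): the bar magnet64 is
# guided so it can only slide axially. As the bevel gear rotates the block
# magnet63, the pole presented to magnet64 alternates every half turn, so
# the axial force flips between attraction and repulsion, reciprocating
# the relaying portion and hence the pump portion20b.

def facing_pole(gear_deg):
    """Pole of magnet63 presented toward magnet64 (assumed N at 0 degrees)."""
    return "N" if math.cos(math.radians(gear_deg)) >= 0 else "S"

def pump_motion(gear_deg):
    """Assumed geometry: magnet64 presents its N pole, so like poles repel
    (pushing the relaying portion, contracting the pump) and unlike poles
    attract (pulling it back, expanding the pump)."""
    return "contract" if facing_pole(gear_deg) == "N" else "expand"
```

Each full revolution of the bevel gear thus produces one expand-and-contract pump cycle with no cam or link in contact, at the cost of a less positively controlled stroke, which matches the text's preference for Embodiments 8-13 on certainty of the drive conversion.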
In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 15 Referring to parts (a)-(c) ofFIG.85and parts (a)-(c) ofFIG.86, Embodiment 15 will be described. Part (a) ofFIG.85is a schematic view illustrating an inside of a developer supply container1, (b) is a sectional view in a state that the pump portion20bis expanded to the maximum in the developer supplying step, and (c) is a sectional view of the developer supply container1in a state that the pump portion20bis compressed to the maximum in the developer supplying step.
Part (a) ofFIG.86is a schematic view illustrating an inside of the developer supply container1, (b) is a perspective view of a rear end portion of the cylindrical portion20k, and (c) is a schematic perspective view around a regulating member56. In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. This embodiment is significantly different from the structures of the above-described embodiments in that the pump portion20bis provided at a leading end portion of the developer supply container1and in that the pump portion20bdoes not have the function of transmitting the rotational force received from the driving gear9to the cylindrical portion20k. More particularly, the pump portion20bis provided outside a drive conversion path of the drive converting mechanism, that is, outside a drive transmission path extending from the coupling portion20s(part (b) ofFIG.86), which receives the rotational force from the driving gear9(FIG.66), to the cam groove20n. This structure is employed in consideration of the fact that with the structure of Embodiment 8, after the rotational force inputted from the driving gear9is transmitted to the cylindrical portion20kthrough the pump portion20b, it is converted to the reciprocation force, and therefore, the pump portion20balways receives a force in the rotational moving direction during the developer supplying operation. Therefore, there is a liability that in the developer supplying step the pump portion20bis twisted in the rotational moving direction with the result of deterioration of the pump function. This will be described in detail.
As shown in part (a) ofFIG.85, an opening portion of one end portion (discharging portion21hside) of the pump portion20bis fixed to a flange portion21(by a welding method), and when the container is mounted to the developer receiving apparatus8, the pump portion20bis substantially non-rotatable together with the flange portion21. On the other hand, a cam flange portion19is provided covering the outer surface of the flange portion21and/or the cylindrical portion20k, and the cam flange portion19functions as a drive converting mechanism. As shown inFIG.85, the inner surface of the cam flange portion19is provided with two cam projections19aat diametrically opposite positions, respectively. In addition, the cam flange portion19is fixed to the closed side (opposite the discharging portion21hside) of the pump portion20b. On the other hand, the outer surface of the cylindrical portion20kis provided with a cam groove20nfunctioning as the drive converting mechanism, the cam groove20nextending over the entire circumference, and the cam projection19ais engaged with the cam groove20n. Furthermore, in this embodiment, as is different from Embodiment 8, as shown in part (b) ofFIG.86, one end surface of the cylindrical portion20k(upstream side with respect to the feeding direction of the developer) is provided with a non-circular (rectangular in this example) male coupling portion20sfunctioning as the drive inputting portion. On the other hand, the developer receiving apparatus8includes a non-circular (rectangular) female coupling portion for driving connection with the male coupling portion20sto apply a rotational force. The female coupling portion, similarly to Embodiment 8, is driven by a driving motor500. In addition, the flange portion21is prevented, similarly to Embodiment 5, from moving in the rotational axis direction and in the rotational moving direction by the developer receiving apparatus8.
On the other hand, the cylindrical portion20kis connected with the flange portion21through a sealing member27, and the cylindrical portion20kis rotatable relative to the flange portion21. The sealing member27is a sliding type seal which prevents incoming and outgoing leakage of air (developer) between the cylindrical portion20kand the flange portion21within a range not influential to the developer supply using the pump portion20band which permits rotation of the cylindrical portion20k. A developer supplying step of the developer supply container1will be described. The developer supply container1is mounted to the developer receiving apparatus8, and then the cylindrical portion20kreceives the rotational force from the female coupling portion of the developer receiving apparatus8, by which the cam groove20nrotates. Therefore, the cam flange portion19reciprocates in the rotational axis direction relative to the flange portion21and the cylindrical portion20kby the cam projection19aengaged with the cam groove20n, while the cylindrical portion20kand the flange portion21are prevented from movement in the rotational axis direction by the developer receiving apparatus8. Since the cam flange portion19and the pump portion20bare fixed to each other, the pump portion20breciprocates with the cam flange portion19(arrow ω direction and arrow γ direction). As a result, as shown in parts (b) and (c) ofFIG.85, the pump portion20bexpands and contracts in interrelation with the reciprocation of the cam flange portion19, thus effecting a pumping operation. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified.
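The conversion from rotation of the cam groove20nto axial reciprocation of the cam flange portion19can be sketched as follows; a single-period sinusoidal groove profile and a 10 mm full stroke are assumed for illustration and are not values given in the embodiment.

```python
import math

def flange_displacement(theta, stroke=10.0):
    """Axial position (mm) of the cam flange (19) as its projection (19a)
    rides a cam groove (20n) cut around the full circumference of the
    rotating cylindrical portion (20k). A single-period sinusoidal groove
    gives one expand/contract cycle of the pump per revolution.
    `stroke` is a hypothetical full stroke, not a dimension from the text."""
    return 0.5 * stroke * (1.0 - math.cos(theta))
```

At theta = 0 the pump portion20bis fully contracted, at half a revolution it is fully expanded, and after one revolution it has returned, completing one suction/discharge cycle.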
In addition, by the sucking operation through the discharge opening21a, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in this example, similarly to the above-described Embodiments 8-14, the rotational force received from the developer receiving apparatus8is converted, in the developer supply container1, into a force for operating the pump portion20b, so that the pump portion20bcan be operated properly. In addition, the rotational force received from the developer receiving apparatus8is converted to the reciprocation force without using the pump portion20b, by which the pump portion20bis prevented from being damaged due to the torsion in the rotational moving direction. Therefore, it is unnecessary to increase the strength of the pump portion20b, and the thickness of the pump portion20bmay be small, and the material thereof may be an inexpensive one. Further, with the structure of this example, the pump portion20bis not provided between the discharging portion21hand the cylindrical portion20kas in Embodiment 8-Embodiment 14, but is provided at a position of the discharging portion21haway from the cylindrical portion20k, and therefore, the developer amount remaining in the developer supply container1can be reduced. As shown in (a) ofFIG.86, it is a usable alternative that the internal space of the pump portion20bis not used as a developer accommodating space, with the filter65partitioning between the pump portion20band the discharging portion21h. Here, the filter65has such a property that air passes through it easily, but the toner does not pass substantially. With such a structure, when the pump portion20bis compressed, the developer in the recessed portion of the bellow portion is not stressed.
However, the structure of parts (a)-(c) ofFIG.85is preferable from the standpoint that in the expanding stroke of the pump portion20b, an additional developer accommodating space can be formed, that is, an additional space through which the developer can move is provided, so that the developer is easily loosened. In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 16 Referring toFIG.87(parts (a)-(c)), structures of the Embodiment 16 will be described. Parts (a)-(c) ofFIG.87are enlarged sectional views of a developer supply container1. In parts (a)-(c) ofFIG.87, the structures except for the pump are substantially the same as the structures shown inFIGS.85and86, and therefore, the detailed description thereof is omitted.
In this example, the pump does not have the alternating peak folding portions and bottom folding portions, but it has a film-like pump portion38capable of expansion and contraction substantially without a folding portion, as shown inFIG.87. In this embodiment, the film-like pump portion38is made of rubber, but this is not inevitable, and a flexible material such as a resin film is usable. With such a structure, when the cam flange portion19reciprocates in the rotational axis direction, the film-like pump portion38reciprocates together with the cam flange portion19. As a result, as shown in parts (b) and (c) ofFIG.87, the film-like pump portion38expands and contracts in interrelation with the reciprocation of the cam flange portion19in the directions of arrow ω and arrow γ, thus effecting a pumping operation. As described in the foregoing, also in this embodiment, one pump38is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening21a, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in this example, similarly to the above-described Embodiments 8-15, the rotational force received from the developer receiving apparatus8is converted, in the developer supply container1, into a force for operating the pump portion38, so that the pump portion38can be operated properly.
In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 17 Referring toFIG.88(parts (a)-(e)), structures of the Embodiment 17 will be described. Part (a) ofFIG.88is a schematic perspective view of the developer supply container1, (b) is an enlarged sectional view of the developer supply container1, and (c)-(e) are schematic enlarged views of a drive converting mechanism. In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. In this example, the pump portion is reciprocated in a direction perpendicular to the rotational axis direction, as is contrasted with the foregoing embodiments.
(Drive Converting Mechanism) In this example, as shown in parts (a)-(e) ofFIG.88, at an upper portion of the flange portion21, that is, the discharging portion21h, a pump portion21fof the bellow type is connected. In addition, to a top end portion of the pump portion21f, a cam projection21gfunctioning as a drive converting portion is fixed by bonding. On the other hand, at one longitudinal end surface of the developer accommodating portion20, a cam groove20eengageable with the cam projection21gis formed, and it functions as a drive converting portion. As shown in part (b) ofFIG.88, the developer accommodating portion20is supported so as to be rotatable relative to the discharging portion21hin the state that the discharging portion21hside end thereof compresses a sealing member27provided on an inner surface of the flange portion21. Also in this example, with the mounting operation of the developer supply container1, both sides of the discharging portion21h(opposite end surfaces with respect to a direction perpendicular to the rotational axis direction X) are supported by the developer receiving apparatus8. Therefore, during the developer supply operation, the discharging portion21his substantially non-rotatable. Also in this example, the mounting portion8fof the developer receiving apparatus8is provided with a developer receiving portion11(FIG.40orFIG.66) for receiving the developer discharged from the developer supply container1through the discharge opening (opening)21awhich will be described hereinafter. The structure of the developer receiving portion11is similar to those of Embodiment 1 or Embodiment 2, and therefore, the description thereof is omitted. In addition, the flange portion21of the developer supply container is provided with engaging portions3b2and3b4engageable with the developer receiving portion11displaceably provided on the developer receiving apparatus8similarly to the above-described Embodiment 1 or Embodiment 2.
The structures of the engaging portions3b2,3b4are similar to those of the above-described Embodiment 1 or Embodiment 2, and therefore, the description is omitted. Here, the cam groove20ehas an elliptical configuration as shown in parts (c)-(e) ofFIG.88, and the cam projection21gmoving along the cam groove20echanges in the distance from the rotational axis of the developer accommodating portion20(minimum distance in the diametrical direction). As shown in part (b) ofFIG.88, a plate-like partition wall32is provided and is effective to feed, to the discharging portion21h, the developer fed by a helical projection (feeding portion)20cfrom the cylindrical portion20k. The partition wall32divides a part of the developer accommodating portion20substantially into two parts and is rotatable integrally with the developer accommodating portion20. The partition wall32is provided with an inclined projection32aslanted relative to the rotational axis direction of the developer supply container1. The inclined projection32ais connected with an inlet portion of the discharging portion21h. Therefore, the developer fed from the feeding portion20cis scooped up by the partition wall32in interrelation with the rotation of the cylindrical portion20k. Thereafter, with a further rotation of the cylindrical portion20k, the developer slides down on the surface of the partition wall32by gravity, and is fed to the discharging portion21hside by the inclined projection32a. The inclined projection32ais provided on each of the sides of the partition wall32so that the developer is fed into the discharging portion21hevery one half rotation of the cylindrical portion20k.
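Because the inclined projection32ais provided on each of the two sides of the partition wall32, delivery of the developer into the discharging portion21hoccurs once every half rotation, that is, twice per revolution of the cylindrical portion20k. A minimal sketch of this counting relation:

```python
def feed_count(revolutions):
    """Number of developer deliveries into the discharging portion (21h):
    the partition wall (32) carries an inclined projection (32a) on each
    of its two faces, so one delivery occurs every half rotation of the
    cylindrical portion (20k), i.e. two deliveries per revolution."""
    return int(revolutions * 2)
```

For example, after three revolutions of the cylindrical portion20k, six deliveries into the discharging portion21hhave occurred.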
(Developer Supplying Step) The description will be made as to the developer supplying step from the developer supply container1in this example. When the operator mounts the developer supply container1to the developer receiving apparatus8, the flange portion21(discharging portion21h) is prevented from movement in the rotational moving direction and in the rotational axis direction by the developer receiving apparatus8. In addition, the pump portion21fand the cam projection21gare fixed to the flange portion21, and are similarly prevented from movement in the rotational moving direction and in the rotational axis direction. And, by the rotational force inputted from a driving gear9(FIGS.67and68) to a gear portion20a, the developer accommodating portion20rotates, and therefore, the cam groove20ealso rotates. On the other hand, the cam projection21gwhich is fixed so as to be non-rotatable receives the force through the cam groove20e, so that the rotational force inputted to the gear portion20ais converted to a force reciprocating the pump portion21fsubstantially vertically. Here, part (d) ofFIG.88illustrates a state in which the pump portion21fis most expanded, that is, the cam projection21gis at the intersection between the ellipse of the cam groove20eand the major axis La (point Y in part (c) ofFIG.88). Part (e) ofFIG.88illustrates a state in which the pump portion21fis most contracted, that is, the cam projection21gis at the intersection between the ellipse of the cam groove20eand the minor axis Lb (point Z in part (c) ofFIG.88). The state of part (d) ofFIG.88and the state of part (e) ofFIG.88are repeated alternately at a predetermined cyclic period so that the pump portion21feffects the suction and discharging operation. That is, the developer is discharged smoothly.
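The elliptical cam groove20egives the cam projection21ga distance from the rotational axis that oscillates between the semi-major axis (point Y, pump most expanded) and the semi-minor axis (point Z, pump most contracted). A minimal geometric sketch, with the axis lengths a and b chosen as hypothetical values rather than dimensions from the embodiment:

```python
import math

def cam_radius(theta, a=12.0, b=8.0):
    """Distance from the rotational axis to the elliptical cam groove (20e)
    at rotation angle theta. `a` and `b` are hypothetical semi-major (La/2)
    and semi-minor (Lb/2) axes. The cam projection (21g) follows this
    radius, so the pump (21f) is most expanded at theta = 0 (point Y) and
    most contracted at theta = pi/2 (point Z)."""
    return (a * b) / math.sqrt((b * math.cos(theta)) ** 2
                               + (a * math.sin(theta)) ** 2)
```

Since the ellipse is symmetric about both axes, the radius completes two full expansion/contraction cycles per revolution of the developer accommodating portion20.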
With such rotation of the cylindrical portion20k, the developer is fed to the discharging portion21hby the feeding portion20cand the inclined projection32a, and the developer in the discharging portion21his finally discharged through the discharge opening21aby the suction and discharging operation of the pump portion21f. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in this example, similarly to Embodiment 8-Embodiment 16, both of the reciprocation of the pump portion21fand the rotating operation of the feeding portion20c(cylindrical portion20k) can be effected by the gear portion20areceiving the rotational force from the developer receiving apparatus8. Since, in this example, the pump portion21fis provided at a top of the discharging portion21h(in the state that the developer supply container1is mounted to the developer receiving apparatus8), the amount of the developer unavoidably remaining in the pump portion21fcan be minimized as compared with Embodiment 8. In this example, the pump portion21fis a bellow-like pump, but it may be replaced with the film-like pump described in Embodiment 13. In this example, the cam projection21gas the drive transmitting portion is fixed by an adhesive material to the upper surface of the pump portion21f, but the cam projection21gis not necessarily fixed to the pump portion21f. For example, a known snap hook engagement is usable, or a round rod-like cam projection21gand a pump portion21fhaving a hole engageable with the cam projection21gmay be used in combination.
With such a structure, the similar advantageous effects can be provided. In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 18 Referring toFIGS.89-91, the description will be made as to structures of Embodiment 18. Part (a) ofFIG.89is a schematic perspective view of a developer supply container1, (b) is a schematic perspective view of a flange portion21, (c) is a schematic perspective view of a cylindrical portion20k, parts (a) and (b) ofFIG.90are enlarged sectional views of the developer supply container1, andFIG.91is a schematic view of a pump portion21f.
In this example, the same reference numerals as in the foregoing embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted. In this example, a rotational force is converted to a force for the forward operation of the pump portion21fwithout converting the rotational force to a force for the backward operation of the pump portion21f, as is contrasted with the foregoing embodiments. In this example, as shown inFIGS.89-91, a bellow type pump portion21fis provided at a side of the flange portion21adjacent the cylindrical portion20k. An outer surface of the cylindrical portion20kis provided with a gear portion20awhich extends on the full circumference. At an end of the cylindrical portion20kadjacent a discharging portion21h, two compressing projections201for compressing the pump portion21fby abutting thereto with the rotation of the cylindrical portion20kare provided at diametrically opposite positions, respectively. A configuration of the compressing projection201at the downstream side with respect to the rotational moving direction is slanted to gradually compress the pump portion21fso as to reduce the impact upon abutment to the pump portion21f. On the other hand, a configuration of the compressing projection201at the upstream side with respect to the rotational moving direction is a surface perpendicular to the end surface of the cylindrical portion20kto be substantially parallel with the rotational axis direction of the cylindrical portion20kso that the pump portion21finstantaneously expands by the restoring elastic force thereof. Similarly to Embodiment 13, the inside of the cylindrical portion20kis provided with a plate-like partition wall32for feeding the developer fed by a helical projection20cto the discharging portion21h.
Also in this example, the mounting portion8fof the developer receiving apparatus8is provided with a developer receiving portion11(FIG.40orFIG.66) for receiving the developer discharged from the developer supply container1through the discharge opening (opening)21awhich will be described hereinafter. The structure of the developer receiving portion11is similar to those of Embodiment 1 or Embodiment 2, and therefore, the description thereof is omitted. In addition, the flange portion21of the developer supply container is provided with engaging portions3b2and3b4engageable with the developer receiving portion11displaceably provided on the developer receiving apparatus8similarly to the above-described Embodiment 1 or Embodiment 2. The structures of the engaging portions3b2,3b4are similar to those of the above-described Embodiment 1 or Embodiment 2, and therefore, the description is omitted. In addition, also in this example, the flange portion21is substantially stationary (non-rotatable) when the developer supply container1is mounted to the mounting portion8fof the developer receiving apparatus8. Therefore, during the developer supply, the flange portion21does not substantially rotate. The description will be made as to the developer supplying step from the developer supply container1in this example. After the developer supply container1is mounted to the developer receiving apparatus8, the cylindrical portion20k, which is the developer accommodating portion20, rotates by the rotational force inputted from the driving gear300to the gear portion20a, so that the compressing projection201rotates. At this time, when the compressing projections201abut to the pump portion21f, the pump portion21fis compressed in the direction of an arrow γ, as shown in part (a) ofFIG.90, so that a discharging operation is effected.
On the other hand, when the rotation of the cylindrical portion20kcontinues until the pump portion21fis released from the compressing projection201, the pump portion21fexpands in the direction of an arrow ω by the self-restoring force, as shown in part (b) ofFIG.90, so that it restores to the original shape, by which the sucking operation is effected. The states shown in parts (a) and (b) ofFIG.90are alternately repeated, by which the pump portion21feffects the suction and discharging operations. That is, the developer is discharged smoothly. With the rotation of the cylindrical portion20kin this manner, the developer is fed to the discharging portion21hby the helical projection (feeding portion)20cand the inclined projection (feeding portion)32a(FIG.88). The developer in the discharging portion21his finally discharged through the discharge opening21aby the discharging operation of the pump portion21f. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in this example, similarly to Embodiment 8-Embodiment 17, both of the reciprocation of the pump portion21fand the rotating operation of the developer supply container1can be effected by the rotational force received from the developer receiving apparatus8. In this example, the pump portion21fis compressed by the contact with the compressing projection201, and expands by the self-restoring force of the pump portion21fwhen it is released from the compressing projection201, but the structure may be opposite.
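The asymmetric profile of the compressing projection201(slanted downstream face for gradual compression, perpendicular upstream face for instantaneous release) can be sketched as a compression-versus-angle function; the ramp and dwell angles below are assumed values for illustration, not dimensions from the embodiment.

```python
import math

def compression(theta, ramp=math.radians(60), dwell=math.radians(30)):
    """Fractional compression of the pump (21f) versus cylinder angle for
    two diametrically opposed compressing projections (201): the slanted
    leading face ramps compression up gradually, the pump stays fully
    compressed while riding the projection, and the perpendicular trailing
    face releases it instantaneously (self-restoring expansion).
    `ramp` and `dwell` are hypothetical angular widths."""
    phase = theta % math.pi          # two identical cycles per revolution
    if phase < ramp:
        return phase / ramp          # gradual compression along the slant
    if phase < ramp + dwell:
        return 1.0                   # fully compressed against the projection
    return 0.0                       # instantaneous release, pump expanded
```

With the two projections at diametrically opposite positions, the pump portion21fis compressed and released twice per revolution of the cylindrical portion20k.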
More particularly, when the pump portion21fis contacted by the compressing projection201, they are locked, and with the rotation of the cylindrical portion20k, the pump portion21fis forcedly expanded. With further rotation of the cylindrical portion20k, the pump portion21fis released, by which the pump portion21frestores to the original shape by the self-restoring force (restoring elastic force). Thus, the sucking operation and the discharging operation are alternately repeated. In the case of this example, the self-restoring force of the pump portion21fis likely to deteriorate by repetition of the expansion and contraction of the pump portion21ffor a long term, and from this standpoint, the structures of Embodiments 8-17 are preferable. Or, by employing the structure ofFIG.91, this likelihood can be avoided. As shown inFIG.91, a compression plate20qis fixed to an end surface of the pump portion21fadjacent the cylindrical portion20k. Between the outer surface of the flange portion21and the compression plate20q, a spring20rfunctioning as an urging member is provided covering the pump portion21f. The spring20rnormally urges the pump portion21fin the expanding direction. With such a structure, the self-restoration of the pump portion21fat the time when the contact between the compressing projection201and the pump portion21fis released can be assisted, so that the sucking operation can be carried out assuredly even when the expansion and contraction of the pump portion21fis repeated for a long term. In this example, two compressing projections201functioning as the drive converting mechanism are provided at the diametrically opposite positions, but this is not inevitable, and the number thereof may be one or three, for example. In addition, in place of the compressing projection, the following structure may be employed as the drive converting mechanism.
For example, the end surface of the cylindrical portion20kopposing the pump portion21fis not a surface perpendicular to the rotational axis of the cylindrical portion20kas in this example, but is a surface inclined relative to the rotational axis. In this case, the inclined surface acts on the pump portion21fso as to be equivalent to the compressing projection. In another alternative, a shaft portion is extended from the rotation axis at the end surface of the cylindrical portion20kopposed to the pump portion21ftoward the pump portion21fin the rotational axis direction, and a swash plate (disk) inclined relative to the rotational axis of the shaft portion is provided. In this case, the swash plate acts on the pump portion21f, and therefore, it is equivalent to the compressing projection. In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer.
Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 19 Referring toFIG.92(parts (a) and (b)), structures of Embodiment 19 will be described. Parts (a) and (b) ofFIG.92are sectional views schematically illustrating a developer supply container1. In this example, the pump portion21fis provided at the cylindrical portion20k, and the pump portion21frotates together with the cylindrical portion20k. In addition, in this example, the pump portion21fis provided with a weight20v, by which the pump portion21freciprocates with the rotation. The other structures of this example are similar to those of Embodiment 17 (FIG.88), and the detailed description thereof is omitted by assigning the same reference numerals to the corresponding elements. As shown in part (a) ofFIG.92, the cylindrical portion20k, the flange portion21and the pump portion21ffunction as a developer accommodating space of the developer supply container1. The pump portion21fis connected to an outer periphery portion of the cylindrical portion20k, and the action of the pump portion21fworks on the cylindrical portion20kand the discharging portion21h. A drive converting mechanism of this example will be described. One end surface of the cylindrical portion20kwith respect to the rotational axis direction is provided with a coupling portion (rectangular configuration projection)20sfunctioning as a drive inputting portion, and the coupling portion20sreceives a rotational force from the developer receiving apparatus8. On the top of one end of the pump portion21fwith respect to the reciprocating direction, the weight20vis fixed. In this example, the weight20vfunctions as the drive converting mechanism.
Thus, with the integral rotation of the cylindrical portion20kand the pump portion21f, the pump portion21fexpands and contracts in the up and down directions by the gravitation acting on the weight20v. More particularly, in the state of part (a) ofFIG.92, the weight20vtakes a position higher than the pump portion21f, and the pump portion21fis contracted by the weight20vin the direction of the gravitation (white arrow). At this time, the developer is discharged through the discharge opening21a(black arrow). On the other hand, in the state of part (b) ofFIG.92, the weight20vtakes a position lower than the pump portion21f, and the pump portion21fis expanded by the weight20vin the direction of the gravitation (white arrow). At this time, the sucking operation is effected through the discharge opening21a(black arrow), by which the developer is loosened. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in this example, similarly to Embodiments 8-18, both of the reciprocation of the pump portion21fand the rotating operation of the developer supply container1can be effected by the rotational force received from the developer receiving apparatus8. In this example, the pump portion21frotates about the cylindrical portion20k, and therefore, the space required by the mounting portion8fof the developer receiving apparatus8is relatively large with the result of upsizing of the device, and from this standpoint, the structures of Embodiments 8-18 are preferable.
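The weight-driven conversion described above ties the pump phase directly to the rotation angle of the container. A minimal sketch of that alternation (the angle convention — 0 degrees placing the weight20v directly above the pump portion21f — is an assumption for illustration, not stated in the text):

```python
import math

def pump_action(angle_deg: float) -> str:
    """Phase of the weight-driven pump 21f at a given rotation angle.

    Assumed convention: at 0 deg the weight 20v sits above the pump,
    so gravity compresses the pump (discharge, as in part (a) of
    FIG. 92); at 180 deg the weight hangs below, so gravity expands
    the pump (suction, as in part (b) of FIG. 92).
    """
    # Vertical component of the weight's position relative to the pump:
    # positive -> weight above pump -> compression -> discharge.
    vertical = math.cos(math.radians(angle_deg % 360))
    return "discharge (contract)" if vertical >= 0 else "suction (expand)"

print(pump_action(0))    # weight on top
print(pump_action(180))  # weight at bottom
```

One full turn of the cylindrical portion20k thus yields exactly one discharge stroke and one suction stroke, which is the alternation the timing description relies on.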
In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 20 Referring toFIGS.93-95, the description will be made as to structures of Embodiment 20. Part (a) ofFIG.93is a perspective view of a cylindrical portion20k, and (b) is a perspective view of a flange portion21. Parts (a) and (b) ofFIG.94are partially sectional perspective views of a developer supply container1, and (a) shows a state in which a rotatable shutter is open, and (b) shows a state in which the rotatable shutter is closed.FIG.95is a timing chart illustrating a relation between operation timing of the pump portion21fand timing of opening and closing of the rotatable shutter. 
InFIG.95, contraction indicates the discharging step of the pump portion21f, and expansion indicates the sucking step of the pump portion21f. In this example, a mechanism for separating between the discharging portion21hand the cylindrical portion20kduring the expanding-and-contracting operation of the pump portion21fis provided, as is contrasted to the foregoing embodiments. The inside of the discharging portion21hfunctions as a developer accommodating portion for receiving the developer fed from the cylindrical portion20kas will be described hereinafter. The structures of this example in the other respects are substantially the same as those of Embodiment 17 (FIG.88), and the description thereof is omitted by assigning the same reference numerals to the corresponding elements. As shown in part (a) ofFIG.93, one longitudinal end surface of the cylindrical portion20kfunctions as a rotatable shutter. More particularly, said one longitudinal end surface of the cylindrical portion20kis provided with a communication opening20ufor discharging the developer to the flange portion21, and is provided with a closing portion20h. The communication opening20uhas a sector shape. On the other hand, as shown in part (b) ofFIG.93, the flange portion21is provided with a communication opening21kfor receiving the developer from the cylindrical portion20k. The communication opening21khas a sector-shape configuration similar to the communication opening20u, and the portion other than that is closed to provide a closing portion21m. Parts (a) and (b) ofFIG.94illustrate a state in which the cylindrical portion20kshown in part (a) ofFIG.93and the flange portion21shown in part (b) ofFIG.93have been assembled.
The communication opening20uand the outer surface of the communication opening21kare connected with each other so as to compress the sealing member27, and the cylindrical portion20kis rotatable relative to the stationary flange portion21. With such a structure, when the cylindrical portion20kis rotated relatively by the rotational force received by the gear portion20a, the relation between the cylindrical portion20kand the flange portion21is alternately switched between the communication state and the non-communication state. That is, with the rotation of the cylindrical portion20k, the communication opening20uof the cylindrical portion20kbecomes aligned with the communication opening21kof the flange portion21(part (a) ofFIG.94). With a further rotation of the cylindrical portion20k, the communication opening20uof the cylindrical portion20kgoes out of alignment with the communication opening21k, so that the communication opening21kis closed, by which the situation is switched to the non-communication state (part (b) ofFIG.94) in which the flange portion21is separated and substantially sealed. Such a partitioning mechanism (rotatable shutter) for isolating the discharging portion21hat least in the expanding-and-contracting operation of the pump portion21fis provided for the following reasons. The discharging of the developer from the developer supply container1is effected by making the internal pressure of the developer supply container1higher than the ambient pressure by contracting the pump portion21f. Therefore, if the partitioning mechanism were not provided as in the foregoing Embodiments 8-18, the space of which the internal pressure is changed would not be limited to the inside space of the flange portion21but would include the inside space of the cylindrical portion20k, and therefore, the amount of volume change of the pump portion21fwould have to be made larger.
This is because a ratio of a volume of the inside space of the developer supply container1immediately after the pump portion21fis contracted to its end to the volume of the inside space of the developer supply container1immediately before the pump portion21fstarts the contraction determines the internal pressure. However, when the partitioning mechanism is provided, there is no movement of the air from the flange portion21to the cylindrical portion20k, and therefore, it is enough to change the pressure of the inside space of the flange portion21. That is, under the condition of the same internal pressure value, the amount of the volume change of the pump portion21fmay be smaller when the original volume of the inside space is smaller. In this example, more specifically, the volume of the discharging portion21hseparated by the rotatable shutter is 40 cm^3, and the volume change of the pump portion21f(reciprocation movement distance) is 2 cm^3 (it is 15 cm^3 in Embodiment 5). Even with such a small volume change, developer supply by a sufficient suction and discharging effect can be effected, similarly to Embodiment 5. As described in the foregoing, in this example, as compared with the structures of Embodiments 5-19, the volume change amount of the pump portion21fcan be minimized. As a result, the pump portion21fcan be downsized. In addition, the distance through which the pump portion21fis reciprocated (volume change amount) can be made smaller. The provision of such a partitioning mechanism is effective particularly in the case that the capacity of the cylindrical portion20kis large in order to make the filled amount of the developer in the developer supply container1large. Developer supplying steps in this example will be described.
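The volume arithmetic above can be checked numerically. Assuming the trapped air behaves isothermally (P1·V1 = P2·V2 — an assumption; the text only states that the volume ratio determines the internal pressure), the pressure rise produced by a given stroke, and the stroke a larger unpartitioned volume would need for the same rise, work out as follows (the 500 cm^3 whole-container volume is a hypothetical figure):

```python
def pressure_ratio(space_cm3: float, stroke_cm3: float) -> float:
    """Internal pressure after full contraction, relative to ambient,
    assuming isothermal compression: P2/P1 = V1/V2."""
    return space_cm3 / (space_cm3 - stroke_cm3)

# Isolated discharging portion of this embodiment: 40 cm^3 space,
# 2 cm^3 pump stroke.
r = pressure_ratio(40, 2)
print(round(r, 3))  # about a 5% pressure rise

# Without the partitioning mechanism the whole container volume would
# be pressurized; for a hypothetical 500 cm^3 container, the stroke
# needed for the same pressure ratio is far larger:
whole = 500.0
stroke_needed = whole * (1 - 1 / r)
print(round(stroke_needed, 1))  # 25.0 cm^3 instead of 2 cm^3
```

This illustrates why isolating the 40 cm^3 discharging portion lets the pump stroke shrink from 15 cm^3 (Embodiment 5) to 2 cm^3.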
In the state that the developer supply container1is mounted to the developer receiving apparatus8and the flange portion21is fixed, drive is inputted to the gear portion20afrom the driving gear300, by which the cylindrical portion20krotates, and the cam groove20erotates. On the other hand, the cam projection21g, which is fixed to the pump portion21fnon-rotatably supported by the developer receiving apparatus8together with the flange portion21, is moved by the cam groove20e. Therefore, with the rotation of the cylindrical portion20k, the pump portion21freciprocates in the up and down directions. Referring toFIG.95, the description will be made as to the timing of the pumping operation (sucking operation and discharging operation) of the pump portion21fand the timing of opening and closing of the rotatable shutter, in such a structure.FIG.95is a timing chart when the cylindrical portion20krotates one full turn. InFIG.95, contraction means the contracting operation of the pump portion21f(the discharging operation of the pump portion21f), and expansion means the expanding operation of the pump portion21f(the sucking operation of the pump portion21f). In addition, stop means a rest state of the pump portion21f. In addition, opening means the opening state of the rotatable shutter, and close means the closing state of the rotatable shutter. As shown inFIG.95, when the communication opening21kand the communication opening20uare aligned with each other, the drive converting mechanism converts the rotational force inputted to the gear portion20aso that the pumping operation of the pump portion21fstops. More specifically, in this example, the structure is such that when the communication opening21kand the communication opening20uare aligned with each other, a radius distance from the rotation axis of the cylindrical portion20kto the cam groove20eis constant so that the pump portion21fdoes not operate even when the cylindrical portion20krotates.
At this time, the rotatable shutter is in the opening position, and therefore, the developer is fed from the cylindrical portion20kto the flange portion21. More particularly, with the rotation of the cylindrical portion20k, the developer is scooped up by the partition wall32, and thereafter, it slides down on the inclined projection32aby the gravity, so that the developer moves via the communication opening20uand the communication opening21kto the flange portion21. As shown inFIG.95, when the non-communication state in which the communication opening21kand the communication opening20uare out of alignment is established, the drive converting mechanism converts the rotational force inputted to the gear portion20aso that the pumping operation of the pump portion21fis effected. That is, with further rotation of the cylindrical portion20k, the rotational phase relation between the communication opening21kand the communication opening20uchanges so that the communication opening21kis closed by the closing portion20hwith the result that the inside space of the flange portion21is isolated (non-communication state). At this time, with the rotation of the cylindrical portion20k, the pump portion21fis reciprocated in the state that the non-communication state is maintained (the rotatable shutter is in the closing position). More particularly, by the rotation of the cylindrical portion20k, the cam groove20erotates, and the radius distance from the rotation axis of the cylindrical portion20kto the cam groove20echanges. By this, the pump portion21feffects the pumping operation through the cam function. Thereafter, with further rotation of the cylindrical portion20k, the rotational phases are aligned again between the communication opening21kand the communication opening20u, so that the communication state is established in the flange portion21. The developer supplying step from the developer supply container1is carried out while repeating these operations.
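The interlock just described — shutter open while the cam radius is constant (pump at rest), pump reciprocating only while the shutter is closed — can be sketched as a function of the rotation angle over one turn. The angular spans below are assumptions for illustration; FIG.95 gives only the qualitative timing:

```python
def phase(angle_deg: float) -> tuple[str, str]:
    """(shutter state, pump action) over one turn of the cylindrical
    portion 20k. Assumed split: shutter open for the first 90 deg,
    where the cam radius is constant and the pump rests; shutter
    closed for the remaining 270 deg, during which the cam drives
    one contract/expand cycle."""
    a = angle_deg % 360
    if a < 90:
        return ("open", "stop")        # developer fed into flange 21
    elif a < 225:
        return ("closed", "contract")  # discharge through opening 21a
    else:
        return ("closed", "expand")    # suction loosens the developer

assert phase(45) == ("open", "stop")
assert phase(180) == ("closed", "contract")
assert phase(300) == ("closed", "expand")
```

Note that the pump never operates while the shutter is open, so the pressurized space is always limited to the flange portion21, which is what permits the small 2 cm^3 stroke.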
As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening21a, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in this example, by the gear portion20areceiving the rotational force from the developer receiving apparatus8, both of the rotating operation of the cylindrical portion20kand the suction and discharging operation of the pump portion21fcan be effected. Further, according to the structure of the example, the pump portion21fcan be downsized. Furthermore, the volume change amount (reciprocation movement distance) can be reduced, and as a result, the load required to reciprocate the pump portion21fcan be reduced. Moreover, in this example, no additional structure is used to receive the driving force for rotating the rotatable shutter from the developer receiving apparatus8, but the rotational force received for the feeding portion (cylindrical portion20k, helical projection20c) is used, and therefore, the partitioning mechanism is simplified. As described above, the volume change amount of the pump portion21fdoes not depend on the entire volume of the developer supply container1including the cylindrical portion20k, but is selectable depending on the inside volume of the flange portion21. Therefore, for example, in the case that the capacity (the diameter of the cylindrical portion20k) is changed when manufacturing developer supply containers having different developer filling capacities, a cost reduction effect can be expected. That is, the flange portion21including the pump portion21fmay be used as a common unit, which is assembled with different kinds of cylindrical portions20k.
By doing so, there is no need of increasing the number of kinds of the metal molds, thus reducing the manufacturing cost. In addition, in this example, during the non-communication state between the cylindrical portion20kand the flange portion21, the pump portion21fis reciprocated by one cyclic period, but similarly to Embodiment 8, the pump portion21fmay be reciprocated by a plurality of cyclic periods. Furthermore, in this example, throughout the contracting operation and the expanding operation of the pump portion, the discharging portion21his isolated, but this is not inevitable, and the following is an alternative. If the pump portion21fcan be downsized, and the volume change amount (reciprocation movement distance) of the pump portion21fcan be reduced, the discharging portion21hmay be opened slightly during the contracting operation and the expanding operation of the pump portion. In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer.
Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 21 Referring toFIGS.96-98, the description will be made as to structures of Embodiment 21.FIG.96is a partly sectional perspective view of a developer supply container1. Parts (a)-(c) ofFIG.97are partial sectional views illustrating an operation of a partitioning mechanism (stop valve35).FIG.98is a timing chart showing timing of a pumping operation (contracting operation and expanding operation) of the pump portion21fand opening and closing timing of the stop valve35which will be described hereinafter. InFIG.98, contraction means the contracting operation of the pump portion21f(the discharging operation of the pump portion21f), and expansion means the expanding operation of the pump portion21f(the sucking operation of the pump portion21f). In addition, stop means a rest state of the pump portion21f. In addition, opening means an open state of the stop valve35, and close means a state in which the stop valve35is closed. This example is significantly different from the above-described embodiments in that the stop valve35is employed as a mechanism for separating between a discharging portion21hand a cylindrical portion20kin an expansion and contraction stroke of the pump portion21f. The structures of this example in the other respects are substantially the same as those of Embodiment 15 (FIGS.85and86), and the description thereof is omitted by assigning the same reference numerals to the corresponding elements. In this example, as contrasted to the structure of Embodiment 15 shown inFIGS.85and86, a plate-like partition wall32of Embodiment 17 shown inFIG.88is provided.
In the above-described Embodiment 20, a partitioning mechanism (rotatable shutter) using a rotation of the cylindrical portion20kis employed, but in this example, a partitioning mechanism (stop valve) using reciprocation of the pump portion21fis employed. This will be described in detail. As shown inFIG.96, a discharging portion21his provided between the cylindrical portion20kand the pump portion21f. A wall portion33is provided at a cylindrical portion20kside of the discharging portion21h, and a discharge opening21ais provided at a lower left part of the wall portion33in the Figure. A stop valve35and an elastic member (seal)34as a partitioning mechanism for opening and closing a communication port33a(FIG.97) formed in the wall portion33are provided. The stop valve35is fixed to one internal end of the pump portion21f(opposite the discharging portion21h), and reciprocates in a rotational axis direction of the developer supply container1with expanding-and-contracting operations of the pump portion21f. The seal34is fixed to the stop valve35, and moves with the movement of the stop valve35. Referring to parts (a)-(c) ofFIG.97, operations of the stop valve35in a developer supplying step will be described. Part (a) ofFIG.97illustrates a maximum expanded state of the pump portion21fin which the stop valve35is spaced from the wall portion33provided between the discharging portion21hand the cylindrical portion20k. At this time, the developer in the cylindrical portion20kis fed into the discharging portion21hthrough the communication port33aby the inclined projection32awith the rotation of the cylindrical portion20k. Thereafter, when the pump portion21fcontracts, the state becomes as shown in part (b) ofFIG.97. At this time, the seal34is contacted to the wall portion33to close the communication port33a. That is, the discharging portion21hbecomes isolated from the cylindrical portion20k.
When the pump portion21fcontracts further, the pump portion21fbecomes most contracted as shown in part (c) ofFIG.97. During the period from the state shown in part (b) ofFIG.97to the state shown in part (c) ofFIG.97, the seal34remains in contact with the wall portion33, and therefore, the discharging portion21his pressurized to be higher than the ambient pressure (positive pressure) so that the developer is discharged through the discharge opening21a. Thereafter, during the expanding operation of the pump portion21ffrom the state shown in part (c) ofFIG.97to the state shown in part (b) ofFIG.97, the seal34remains in contact with the wall portion33, and therefore, the internal pressure of the discharging portion21his reduced to be lower than the ambient pressure (negative pressure). Thus, the sucking operation is effected through the discharge opening21a. When the pump portion21ffurther expands, it returns to the state shown in part (a) ofFIG.97. In this example, the foregoing operations are repeated to carry out the developer supplying step. In this manner, in this example, the stop valve35is moved using the reciprocation of the pump portion, and therefore, the stop valve35is open during an initial stage of the contracting operation (discharging operation) of the pump portion21fand in a final stage of the expanding operation (sucking operation) thereof. The seal34will be described in detail. The seal34is contacted to the wall portion33to assure the sealing property of the discharging portion21h, and is compressed with the contracting operation of the pump portion21f, and therefore, it preferably has both a sealing property and flexibility. In this example, as a sealing material having such properties, use is made of polyurethane foam available from Kabushiki Kaisha INOAC Corporation, Japan (trade name: MOLTOPREN SM-55, having a thickness of 5 mm).
The thickness of the sealing material in the maximum contraction state of the pump portion21fis 2 mm (the compression amount is 3 mm). As described in the foregoing, the volume variation (pump function) for the discharging portion21hby the pump portion21fis substantially limited to the duration after the seal34is contacted to the wall portion33until it is compressed by 3 mm, but the pump portion21fworks in the range limited by the stop valve35. Therefore, even when such a stop valve35is used, the developer can be stably discharged. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in this example, similarly to Embodiments 8-20, both of the suction and discharging operation of the pump portion21fand the rotating operation of the cylindrical portion20kcan be carried out by the gear portion20areceiving the rotational force from the developer receiving apparatus8. Furthermore, similarly to Embodiment 20, the pump portion21fcan be downsized, and the volume change amount of the pump portion21fcan be reduced. The cost reduction advantage by the common structure of the pump portion can be expected. In addition, in this example, the driving force for operating the stop valve35is not particularly received from the developer receiving apparatus8; rather, the reciprocation force of the pump portion21fis utilized, so that the partitioning mechanism can be simplified.
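The lost motion at the seal can be quantified from the figures given above: the seal34 is 5 mm thick when free and 2 mm at maximum contraction, so only the final 3 mm of travel — the portion of the stroke after the seal touches the wall portion33 — actually changes the pressure of the isolated discharging portion21h. A sketch (the 10 mm total stroke is an assumed figure for illustration):

```python
def effective_stroke(total_stroke_mm: float,
                     seal_free_mm: float = 5.0,
                     seal_compressed_mm: float = 2.0) -> float:
    """Portion of the pump stroke during which the seal 34 touches the
    wall 33, i.e. during which the discharging portion is sealed and
    the pump stroke actually changes its internal pressure."""
    sealing_travel = seal_free_mm - seal_compressed_mm  # 3 mm of compression
    # Travel before the seal meets the wall moves the valve without
    # sealing anything, so it does no pumping work.
    return min(total_stroke_mm, sealing_travel)

# With an assumed 10 mm total pump stroke, only 3 mm does pumping work.
print(effective_stroke(10.0))  # 3.0
```

This is why the text notes that the pump works "in the range limited by the stop valve35": the effective volume variation is bounded by the seal's 3 mm compression, yet that bounded range suffices for stable discharge.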
In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 22 Referring toFIG.99(parts (a)-(c)), structures of Embodiment 22 will be described. Part (a) ofFIG.99is a partially sectional perspective view of the developer supply container1, (b) is a perspective view of the flange portion21, and (c) is a sectional view of the developer supply container1. This example is significantly different from the foregoing embodiments in that a buffer portion23is provided as a mechanism for separating between the discharging portion21hand the cylindrical portion20k.
The structures of this example in the other respects are substantially the same as those of Embodiment 17 (FIG.88), and the description thereof is omitted by assigning the same reference numerals to the corresponding elements. As shown in part (b) ofFIG.99, a buffer portion23is fixed to the flange portion21non-rotatably. The buffer portion23is provided with a receiving port23awhich opens upward and a supply port23bwhich is in fluid communication with a discharging portion21h. As shown in parts (a) and (c) ofFIG.99, such a flange portion21is mounted to the cylindrical portion20ksuch that the buffer portion23is in the cylindrical portion20k. The cylindrical portion20kis connected to the flange portion21so as to be rotatable relative to the flange portion21, which is immovably supported by the developer receiving apparatus8. The connecting portion is provided with a ring seal to prevent leakage of air or developer. In addition, in this example, as shown in part (a) ofFIG.99, an inclined projection32ais provided on the partition wall32to feed the developer toward the receiving port23aof the buffer portion23. In this example, until the developer supplying operation of the developer supply container1is completed, the developer in the developer accommodating portion20is fed through the receiving port23ainto the buffer portion23by the partition wall32and the inclined projection32awith the rotation of the developer supply container1. Therefore, as shown in part (c) ofFIG.99, the inside space of the buffer portion23is maintained full of the developer. As a result, the developer filling the inside space of the buffer portion23substantially blocks the movement of the air from the cylindrical portion20ktoward the discharging portion21h, so that the buffer portion23functions as a partitioning mechanism.
Therefore, when the pump portion21freciprocates, at least the discharging portion21hcan be isolated from the cylindrical portion20k, and for this reason, the pump portion can be downsized, and the volume change of the pump portion can be reduced. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, also in this example, similarly to the Embodiment 8-Embodiment 21, both of the reciprocation of the pump portion21fand the rotating operation of the feeding portion20c(cylindrical portion20k) can be carried out by the rotational force received from the developer receiving apparatus8. Furthermore, similarly to the Embodiment 20-Embodiment 21, the pump portion can be downsized, and the volume change amount of the pump portion can be reduced. The cost reduction advantage by the common structure of the pump portion can be expected. Moreover, in this example, the developer is used as the partitioning mechanism, and therefore, the partitioning mechanism can be simplified. In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. 
More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Embodiment 23 Referring toFIGS.100-101, the description will be made as to structures of Embodiment 23. Part (a) ofFIG.100is a perspective view of a developer supply container1, and (b) is a sectional view of the developer supply container1, andFIG.101is a sectional perspective view of a nozzle portion47. In this example, the nozzle portion47is connected to the pump portion20b, and the developer once sucked in the nozzle portion47is discharged through the discharge opening21a, in contrast to the foregoing embodiments. In the other respects, the structures are substantially the same as in Embodiment 14, and the detailed description thereof is omitted by assigning the same reference numerals to the corresponding elements. As shown in part (a) ofFIG.100, the developer supply container1comprises a flange portion21and a developer accommodating portion20. The developer accommodating portion20comprises a cylindrical portion20k. In the cylindrical portion20k, as shown in (b) ofFIG.100, a partition wall32functioning as a feeding portion extends over the entire area in the rotational axis direction.
One end surface of the partition wall32is provided with a plurality of inclined projections32aat different positions in the rotational axis direction, and the developer is fed from one end with respect to the rotational axis direction to the other end (the side adjacent the flange portion21). The inclined projections32aare provided on the other end surface of the partition wall32similarly. In addition, between the adjacent inclined projections32a, a through-opening32bfor permitting passing of the developer is provided. The through-opening32bfunctions to stir the developer. The structure of the feeding portion may be a combination of the feeding portion (helical projection20c) in the cylindrical portion20kand a partition wall32for feeding the developer to the flange portion21, as in the foregoing embodiments. The flange portion21including the pump portion20bwill be described. The flange portion21is connected to the cylindrical portion20krotatably through a small diameter portion49and a sealing member48. In the state that the container is mounted to the developer receiving apparatus8, the flange portion21is immovably held by the developer receiving apparatus8(rotating operation and reciprocation are not permitted). In addition, as shown in part (a) ofFIG.66, in the flange portion21, there is provided a supply amount adjusting portion (flow rate adjusting portion)52which receives the developer fed from the cylindrical portion20k. In the supply amount adjusting portion52, there is provided a nozzle portion47which extends from the pump portion20btoward the discharge opening21a. In addition, the rotation driving force received by the gear portion20ais converted to a reciprocating force by a drive converting mechanism to vertically drive the pump portion20b. Therefore, with the volume change of the pump portion20b, the nozzle portion47sucks the developer in the supply amount adjusting portion52, and discharges it through the discharge opening21a.
The structure for drive transmission to the pump portion20bin this example will be described. As described in the foregoing, the cylindrical portion20krotates when the gear portion20aprovided on the cylindrical portion20kreceives the rotation force from the driving gear9. In addition, the rotation force is transmitted to the gear portion43through the gear portion42provided on the small diameter portion49of the cylindrical portion20k. Here, the gear portion43is provided with a shaft portion44integrally rotatable with the gear portion43. One end of shaft portion44is rotatably supported by the housing46. The shaft44is provided with an eccentric cam45at a position opposing the pump portion20b, and the eccentric cam45is rotated along a track with a changing distance from the rotation axis of the shaft44by the rotational force transmitted thereto, so that the pump portion20bis pushed down (reduced in the volume). By this, the developer in the nozzle portion47is discharged through the discharge opening21a. When the pump portion20bis released from the eccentric cam45, it restores to the original position by its restoring force (the volume expands). By the restoration of the pump portion (increase of the volume), sucking operation is effected through the discharge opening21a, and the developer existing in the neighborhood of the discharge opening21acan be loosened. By repeating the operations, the developer is efficiently discharged by the volume change of the pump portion20b. As described in the foregoing, the pump portion20bmay be provided with an urging member such as a spring to assist the restoration (or pushing down). The hollow conical nozzle portion47will be described. The nozzle portion47is provided with an opening53in an outer periphery thereof, and the nozzle portion47is provided at its free end with an ejection outlet54for ejecting the developer toward the discharge opening21a. 
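The stroke that such an eccentric cam imparts to the pump portion can be sketched numerically. The model below is purely illustrative and is not part of the embodiment: it assumes a circular cam of eccentricity e driving a flat-faced follower, for which the lift over one revolution is e(1 − cos θ).

```python
import math

def cam_lift(theta_deg, eccentricity_mm):
    """Lift of a circular eccentric cam with a flat-faced follower.

    Illustrative model only: the follower displacement varies between 0
    (theta = 0) and 2 * eccentricity (theta = 180 deg) per cam revolution,
    i.e. one compression stroke and one restoration stroke per turn.
    """
    theta = math.radians(theta_deg)
    return eccentricity_mm * (1.0 - math.cos(theta))

# One revolution of the cam gives one pump cycle: compression while the
# lift increases, restoration (volume recovery) while it decreases.
stroke = [round(cam_lift(a, 2.0), 3) for a in (0, 90, 180, 270, 360)]
print(stroke)  # [0.0, 2.0, 4.0, 2.0, 0.0]
```

Under this sketch the pump volume change per cycle is proportional to twice the eccentricity, which is consistent with the idea that a small cam yields a small, repeatable stroke.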
In the developer supplying step, at least the opening53of the nozzle portion47can be in the developer layer in the supply amount adjusting portion52, by which the pressure produced by the pump portion20bcan be efficiently applied to the developer in the supply amount adjusting portion52. That is, the developer in the supply amount adjusting portion52(around the nozzle47) functions as a partitioning mechanism relative to the cylindrical portion20k, so that the effect of the volume change of the pump portion20bis applied to the limited range, that is, within the supply amount adjusting portion52. With such structures, similarly to the partitioning mechanisms of Embodiments 20-22, the nozzle portion47can provide similar effects. As described in the foregoing, also in this embodiment, one pump is enough to effect the sucking operation and the discharging operation, and therefore, the structure of the developer discharging mechanism can be simplified. In addition, by the sucking operation through the discharge opening, a pressure reduction state (negative pressure state) can be provided in the developer supply container, and therefore, the developer can be efficiently loosened. In addition, in this example, similarly to Embodiments 5-19, by the rotational force received from the developer receiving apparatus8, both the rotating operation of the developer accommodating portion20(cylindrical portion20k) and the reciprocation of the pump portion20bare effected. Similarly to Embodiments 20-22, the pump portion20band/or the flange portion21may be made common, with the resulting cost reduction advantages. In this example, unlike Embodiments 20-21, the developer does not slide on the partitioning mechanism, and therefore damage to the developer can be avoided.
In addition, in this example, similarly to the foregoing embodiments, the flange portion21of the developer supply container1is provided with the engaging portions3b2,3b4similar to those of Embodiments 1 and 2, and therefore, similarly to the above-described embodiment, the mechanism for connecting and spacing the developer receiving portion11of the developer receiving apparatus8relative to the developer supply container1by displacing the developer receiving portion11can be simplified. More particularly, a driving source and/or a drive transmission mechanism for moving the entirety of the developing device upwardly is unnecessary, and therefore, a complication of the structure of the image forming apparatus side and/or the increase in cost due to increase of the number of parts can be avoided. The connection between the developer supply container1and the developer receiving apparatus8can be properly established using the mounting operation of the developer supply container1with minimum contamination with the developer. Similarly, utilizing the dismounting operation of the developer supply container1, the spacing and resealing between the developer supply container1and the developer receiving apparatus8can be carried out with minimum contamination with the developer. Comparison Example Referring toFIG.102, a comparison example will be described. Part (a) ofFIG.102is a sectional view illustrating a state in which the air is fed into a developer supply container150, and part (b) ofFIG.102is a sectional view illustrating a state in which the air (developer) is discharged from the developer supply container150. Part (c) ofFIG.102is a sectional view illustrating a state in which the developer is fed into a hopper8cfrom a storage portion123, and part (d) ofFIG.102is a sectional view illustrating a state in which the air is taken into the storage portion123from the hopper8c. 
In the description of this comparison example, the same reference numerals as in the foregoing Embodiments are assigned to the elements having the corresponding functions in this embodiment, and the detailed description thereof is omitted for simplicity. In this comparison example, the pump portion for effecting the suction and discharging, more specifically, a displacement type pump portion122is provided not on the side of the developer supply container150but on the side of the developer receiving apparatus180. The developer supply container150of the comparison example corresponds to the structure ofFIG.44(Embodiment 8) from which the pump portion5and the locking portion18are removed, and the upper surface of the container body1awhich is the connecting portion with the pump portion5is closed. That is, the developer supply container150is provided with the container body1a, a discharge opening1c, an upper flange portion1g, an opening seal (sealing member)3a5and a shutter4(omitted inFIG.102). In addition, the developer receiving apparatus180of this comparison example corresponds to the developer receiving apparatus8shown inFIGS.38and40(Embodiment 8) from which the locking member10and the mechanism for driving the locking member10are removed, and in place thereof, the pump portion, a storage portion and a valve mechanism or the like are added. More specifically, the developer receiving apparatus180includes the bellow-like pump portion122of a displacement type for effecting suction and discharging, and the storage portion123positioned between the developer supply container150and the hopper8cto temporarily store the developer having been discharged from the developer supply container150. To the storage portion123, there are connected a supply pipe portion126for connecting with the developer supply container150, and a supply pipe portion127for connecting with the hopper8c.
In addition, the pump portion122carries out the reciprocation (expanding-and-contracting operation) by a pump driving mechanism provided in the developer receiving apparatus180. Furthermore, the developer receiving apparatus180is provided with a valve125provided in a connecting portion between the storage portion123and the supply pipe portion126on the developer supply container150side, and a valve124provided in a connecting portion between the storage portion123and the hopper8cside supply pipe portion127. The valves124,125are solenoid valves which are opened and closed by a valve driving mechanism provided in the developer receiving apparatus180. Developer discharging steps in the structure of the comparison example including the pump portion122on the developer receiving apparatus180side in this manner will be described. As shown in part (a) ofFIG.102, the valve driving mechanism is operated to close the valve124and open the valve125. In this state, the pump portion122is contracted by the pump driving mechanism. At this time, the contracting operation of the pump portion122increases the internal pressure of the storage portion123so that the air is fed from the storage portion123into the developer supply container150. As a result, the developer adjacent to the discharge opening1cin the developer supply container150is loosened. Subsequently, as shown in part (b) ofFIG.102, the pump portion122is expanded by the pump driving mechanism, while the valve124is kept closed, and the valve125is kept opened. At this time, by the expanding operation of the pump portion122, the internal pressure of the storage portion123decreases, so that the pressure of the air layer inside the developer supply container150relatively rises. By a pressure difference between the storage portion123and the developer supply container150, the air in the developer supply container150is discharged into the storage portion123.
With the operation, the developer is discharged together with the air from the discharge opening1cof the developer supply container150and is stored in the storage portion123temporarily. Then, as shown in part (c) ofFIG.102, the valve driving mechanism is operated to open the valve124and close the valve125. In this state, the pump portion122is contracted by the pump driving mechanism. At this time, the contracting operation of the pump portion122increases the internal pressure of the storage portion123to feed and discharge the developer from the storage portion123into the hopper8c. Then, as shown in part (d) ofFIG.102, the pump portion122is expanded by the pump driving mechanism, while the valve124is kept opened, and the valve125is kept closed. At this time, by the expanding operation of the pump portion122, the internal pressure of the storage portion123decreases, so that the air is taken into the storage portion123from the hopper8c. By repeating the steps of parts (a)-(d) ofFIG.102, the developer in the developer supply container150can be discharged through the discharge opening1cof the developer supply container150while fluidizing the developer. However, with the structure of the comparison example, the valves124,125and the valve driving mechanism for controlling opening and closing of the valves as shown in parts (a)-(d) ofFIG.102are required. In other words, the comparison example requires the complicated opening and closing control of the valves. Furthermore, the developer may be bitten between the valve and the seat, with the result that the developer is stressed, which may lead to formation of agglomeration masses. If this occurs, the proper opening and closing operation of the valves is not carried out, with the result that long term stability of the developer discharging cannot be expected.
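The four-step cycle of parts (a)-(d) of FIG.102 can be summarized as a small state machine. The function and event strings below are hypothetical names introduced only for illustration; they track which of the four operations occurs for each combination of valve states and pump motion described above.

```python
# Illustrative state machine for the comparison example's pump/valve cycle
# (parts (a)-(d) of FIG.102). All names here are hypothetical, not from the
# patent; the logic mirrors the four steps described in the text.

def run_cycle():
    events = []

    def step(valve_124, valve_125, pump):
        # valve 124: storage portion 123 <-> hopper 8c side pipe 127
        # valve 125: storage portion 123 <-> container 150 side pipe 126
        if pump == "contract" and valve_125 == "open":
            events.append("air fed into container 150 (loosens developer)")
        elif pump == "expand" and valve_125 == "open":
            events.append("developer + air drawn into storage portion 123")
        elif pump == "contract" and valve_124 == "open":
            events.append("developer discharged into hopper 8c")
        elif pump == "expand" and valve_124 == "open":
            events.append("air taken back from hopper 8c")

    step("closed", "open", "contract")   # part (a)
    step("closed", "open", "expand")     # part (b)
    step("open", "closed", "contract")   # part (c)
    step("open", "closed", "expand")     # part (d)
    return events

for e in run_cycle():
    print(e)
```

The need to coordinate two valves with every pump stroke is exactly the control complexity the text criticizes in this comparison example.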
In addition, in the comparison example, by the supply of the air from the outside of the developer supply container150, the internal pressure of the developer supply container150is raised, tending to agglomerate the developer, and therefore, the loosening effect of the developer is very small as shown by the above-described verification experiment (comparison betweenFIG.55andFIG.56). Therefore, Embodiments 1-23 are preferable to the comparison example because the developer can be discharged from the developer supply container after it is sufficiently loosened. In addition, it may be considered to use a single shaft eccentric pump400in place of the pump122to effect the suction and discharging by the forward and backward rotations of the rotor401, as shown inFIG.103. However, in this case, the developer discharged from the developer supply container150may be stressed by sliding between the rotor401and a stator402of such a pump, with the result of production of agglomeration masses of the developer to such an extent that the image quality is deteriorated. The structures of the foregoing embodiments are preferable to the comparison example, because the developer discharging mechanism can be simplified. As compared with the comparison example ofFIG.103, the stress imparted to the developer can be decreased in the foregoing embodiments. While the invention has been described with reference to the structures disclosed herein, it is not confined to the details set forth, and this application is intended to cover such modifications or changes as may come within the purposes of the improvements or the scope of the following claims. INDUSTRIAL APPLICABILITY According to the present invention, the mechanism for connecting the developer receiving portion to the developer supply container by displacing the developer receiving portion can be simplified.
In addition, the connection state between the developer supply container and the developer receiving apparatus can be established properly using the mounting operation of the developer supply container.
11860570

DETAILED DESCRIPTION OF EMBODIMENTS Hereinafter, one or more embodiments of the present invention will be described. However, the scope of the invention is not limited to the disclosed embodiments. 1. Toner Set A toner set according to an embodiment of the present invention (hereinafter, also simply referred to as “toner set”) contains a plurality of toners, and the toner set contains a first toner containing a yellow colorant and a second toner containing a cyan colorant. The first toner and the second toner each contain a binder resin containing a vinyl-based resin and a crystalline resin, and a release agent containing a hydrocarbon wax, in which the content of the vinyl-based resin with respect to the total mass of the binder resin is 50% by mass or more. The first toner contains the crystalline resin having a moiety having a crystal structure and a crystal nucleating agent moiety having a crystal nucleating agent, and the second toner contains the cyan colorant that is a compound represented by the above general formula (1). The first toner may be any toner as long as the first toner contains a yellow colorant among the plurality of types of toners contained in the toner set, but is preferably a toner in which the content of the yellow colorant with respect to the total mass of the toner is the largest. The second toner may be any toner as long as the second toner contains a cyan colorant among the plurality of types of toners contained in the toner set, but is preferably a toner in which the content of the cyan colorant with respect to the total mass of the toner is the largest. For example, the toner set contains toners of four colors, yellow (Ye), magenta (Ma), cyan (Cy), and black (Bk). At this time, the first toner can be the yellow toner (Ye), and the second toner can be the cyan (Cy) toner. Note that the toner set may contain a toner of a color other than the above colors, such as blue, pink, green, or orange, or a clear toner.
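As a minimal numeric illustration of the composition window just stated (the vinyl-based resin accounting for at least 50% by mass of the binder resin), the hypothetical helper below checks a mass fraction; it is not part of the disclosure.

```python
def vinyl_content_ok(vinyl_mass_g, binder_mass_g):
    """Return (passes, percent): illustrative check that the vinyl-based
    resin accounts for at least 50% by mass of the total binder resin.
    Function name and sample masses are hypothetical."""
    pct = 100.0 * vinyl_mass_g / binder_mass_g
    return pct >= 50.0, pct

# 7 g of vinyl-based resin in 10 g of binder resin -> 70%, within the window.
ok, pct = vinyl_content_ok(7.0, 10.0)
print(ok, pct)  # True 70.0
```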
The toner set may contain a plurality of types of toners of the same color, and may contain, for example, a plurality of types of yellow toners having different contents of the yellow colorant. At this time, a yellow toner having a larger content of the yellow colorant with respect to the total mass of the toner only needs to be used as the first toner. Similarly, the toner set may contain a plurality of types of cyan toners having different contents of the cyan colorant. At this time, a cyan toner having a larger content of the cyan colorant with respect to the total mass of the toner only needs to be used as the second toner. The crystalline resin contained in the toner rapidly melts and largely reduces the viscosity thereof when the temperature thereof rises to a temperature higher than the melting point thereof during heating (sharp melt property). Therefore, it is said that the crystalline resin promotes softening of the toner during heating and makes fixing of the toner at a low temperature possible. However, according to findings of the present inventors, recrystallization of the crystalline resin after softening is less likely to proceed sufficiently in an image after the toner is fixed. Therefore, the toner containing the crystalline resin may maintain a state where elasticity is low after fixing until a recording medium to which the toner is fixed is loaded on a paper discharge unit. Then, when the recording medium is loaded on the paper discharge unit in a state where the elasticity of the toner is low, a tacking phenomenon occurs in which the image (toner) adheres to the adjacent recording medium. In addition, according to findings of the present inventors, in order to improve low-temperature fixability of the toner, an ester wax having a low melting point is often used as the release agent (wax) contained in the toner. 
In a case where an image is formed using a toner containing the ester wax, after the toner is fixed, a recording medium is easily conveyed to a paper discharge unit while the wax present on an image surface layer is melted during image conveyance. As a result, a tacking phenomenon derived from the ester wax may occur in the paper discharge unit, or the melted wax may come into contact with a member such as a conveyance roller and adhere to the member. At this time, if the adhering wax is cooled and fixed to cause conveyance failure or in-machine contamination, the wax adhering to the member or the like is transferred onto an image on a recording medium to be conveyed next, and gloss unevenness of the image also occurs disadvantageously. On the other hand, in the present invention, the first toner containing a yellow colorant, disposed on the outermost layer of a formed image, contains a crystalline resin having a moiety having a crystal structure and a crystal nucleating agent moiety having a crystal nucleating agent, and therefore recrystallization after the toner is fixed is accelerated. It is considered that this makes tacking less likely to occur. In addition, it is considered that the crystalline resin having a crystal nucleating agent moiety seeps to an image surface together with the wax, and therefore wax adhesion is also suppressed in the image formed such that the first toner is on the outermost layer. In addition, in the present invention, by using a hydrocarbon wax having a melting point higher than that of the ester wax as the release agent contained in each of the first toner and the second toner, a crystallization start temperature of the wax is high. Therefore, it is considered that wax adhesion to each unit in an image forming device can be suppressed, and tacking can also be suppressed by crystallization of the wax present on the outermost surface of an image. 
Furthermore, the first toner and the second toner each contain a binder resin containing a vinyl-based resin and a crystalline resin, in which the content of the vinyl-based resin with respect to the total mass of the binder resin is 50% by mass or more. As a result, the vinyl-based resin is located on a sea side of a sea-island structure, and domains of the crystalline resin can be finely dispersed. Therefore, when the toner is fixed to a base material, the entire toner can be melted rapidly, and rapid crystallization of domains of the crystalline resin is promoted. Therefore, it is considered that low-temperature fixability can be improved and tacking can be suppressed. In addition, in an image in which the second toner is located on a lower layer than the first toner, when the content of the vinyl-based resin with respect to the total mass of the binder resin contained in the second toner is 50% by mass, it is considered that excessive growth of domains of the crystalline resin can be suppressed and fold fixability is improved. For these reasons, it is considered that the toner set can suppress tacking even if the toner set contains the crystalline resin. 1-1. First Toner The first toner only needs to be a known toner containing a yellow colorant. For example, the first toner contains a binder resin, a release agent, and a colorant. 1-1-1. Binder Resin The binder resin only needs to contain a vinyl-based resin and a crystalline resin, and may contain another resin as long as the effects of the present invention are exhibited. [Amorphous Resin] <Vinyl-Based Resin> In the present invention, the vinyl-based resin refers to a polymer exhibiting amorphousness among polymers each formed by a monomer having a vinyl group (hereinafter, referred to as a vinyl monomer). In the present invention, an “amorphous” resin is defined as a resin that does not exhibit a clear endothermic peak when differential scanning calorimetry (DSC) is performed. 
The amorphous resin has a relatively high glass transition temperature. The glass transition point (Tg) of the vinyl-based resin is preferably 25° C. or higher and 55° C. or lower, and more preferably 30° C. or higher and 50° C. or lower from a viewpoint of achieving both sufficient low-temperature fixability and heat-resistant storage. The glass transition point (Tg) can be measured using a differential scanning calorimeter, for example, a diamond DSC (manufactured by Perkin Elmer Inc.). Specifically, 3.0 mg of a sample is sealed in an aluminum pan, and the temperature is changed in order of heating, cooling, and heating. The temperature is raised from room temperature (25° C.) at the time of first heating and from 0° C. at the time of second heating to 200° C. at a temperature rising rate of 10° C./min and maintained at 150° C. for five minutes. At the time of cooling, the temperature is lowered from 200° C. to 0° C. at a temperature falling rate of 10° C./min, and the temperature of 0° C. is maintained for five minutes. A baseline shift in a measurement curve obtained at the time of second heating is observed, and an intersection of an extension of the baseline before the shift and a tangent indicating a maximum slope of the shifted portion of the baseline is taken as a glass transition point (Tg). An empty aluminum pan is used as a reference. The vinyl-based resin is not particularly limited as long as the vinyl-based resin has the above characteristics, and a vinyl-based resin known in the present technological field can be used. Examples of the vinyl-based resin include a styrene-acrylic resin, a styrene resin, and an acrylic resin. A styrene-acrylic resin is preferable from a viewpoint of excellent heat resistance. Examples of the vinyl monomer include the following (1) to (7), and these monomers can be used singly or in combination of two or more types thereof. 
(1) Styrene-Based Monomer Examples of a styrene-based monomer include monomers each having a styrene structure, such as styrene, o-methylstyrene, m-methylstyrene, p-methylstyrene, α-methylstyrene, p-phenylstyrene, p-ethylstyrene, 2,4-dimethylstyrene, p-tert-butylstyrene, p-n-hexylstyrene, p-n-octylstyrene, p-n-nonylstyrene, p-n-decylstyrene, p-n-dodecylstyrene, and derivatives thereof. (2) (Meth)Acrylate-Based Monomer Examples of a (meth)acrylate-based monomer include monomers each having a (meth)acrylic group, such as methyl (meth)acrylate, ethyl (meth)acrylate, n-butyl (meth)acrylate, isopropyl (meth)acrylate, isobutyl (meth)acrylate, t-butyl (meth)acrylate, n-octyl (meth)acrylate, 2-ethylhexyl (meth)acrylate, stearyl (meth)acrylate, lauryl (meth)acrylate, phenyl (meth)acrylate, diethylaminoethyl (meth)acrylate, dimethylaminoethyl (meth)acrylate, and derivatives thereof. (3) Vinyl Ester Examples of a vinyl ester include vinyl propionate, vinyl acetate, and vinyl benzoate. (4) Vinyl Ether Examples of a vinyl ether include vinyl methyl ether and vinyl ethyl ether. (5) Vinyl Ketone Examples of a vinyl ketone include vinyl methyl ketone, vinyl ethyl ketone, and vinyl hexyl ketone. (6) N-Vinyl Compound Examples of an N-vinyl compound include N-vinylcarbazole, N-vinylindole, and N-vinylpyrrolidone. (7) Others Examples of other vinyl monomers include a vinyl compound such as vinylnaphthalene or vinylpyridine, an acrylic acid derivative such as acrylonitrile, methacrylonitrile, or acrylamide, and a methacrylic acid derivative. As the vinyl monomer, it is preferable to use a monomer having an ionic dissociation group such as a carboxy group, a sulfonic acid group, or a phosphoric acid group because affinity with the crystalline resin can be easily controlled. Examples of the monomer having a carboxy group include acrylic acid, methacrylic acid, maleic acid, itaconic acid, cinnamic acid, fumaric acid, a maleic acid monoalkyl ester, and an itaconic acid monoalkyl ester.
Examples of the monomer having a sulfonic acid group include styrene sulfonic acid, allyl sulfosuccinic acid, and 2-acrylamide-2-methylpropanesulfonic acid. Examples of the monomer having a phosphoric acid group include acidophosphoxyethyl methacrylate. Furthermore, by using a polyfunctional vinyl as the vinyl monomer, a polymer having a crosslinked structure can also be obtained. Examples of the polyfunctional vinyl include divinylbenzene, ethylene glycol dimethacrylate, ethylene glycol diacrylate, diethylene glycol dimethacrylate, diethylene glycol diacrylate, triethylene glycol dimethacrylate, triethylene glycol diacrylate, neopentyl glycol dimethacrylate, and neopentyl glycol diacrylate. The content of the vinyl-based resin with respect to the total mass of the binder resin is 50% by mass or more, preferably 50% by mass or more and 96% by mass or less, and more preferably 60% by mass or more and 90% by mass or less. When the content is 50% by mass or more, the vinyl-based resin is located on a sea side of a sea-island structure, and domains of the crystalline resin can be finely dispersed. When the content is 96% by mass or less, low-temperature fixability due to the crystalline resin can be ensured. <Amorphous Polyester Resin> In the present invention, the amorphous resin in the binder resin may contain an amorphous polyester resin. The amorphous polyester resin is a resin exhibiting amorphousness among polyester resins obtained by a polymerization reaction between a divalent or higher valent carboxylic acid (polycarboxylic acid) monomer and a divalent or higher valent alcohol (polyhydric alcohol) monomer. The amorphous polyester resin can be formed by polymerizing (esterifying) the polycarboxylic acid monomer and the polyhydric alcohol monomer using a known esterification catalyst. The polycarboxylic acid monomer is a compound having two or more carboxy groups in one molecule. 
Examples of the polycarboxylic acid monomer that can be used for synthesis of the amorphous polyester resin include phthalic acid, isophthalic acid, terephthalic acid, trimellitic acid, naphthalene-2,6-dicarboxylic acid, malonic acid, mesaconic acid, dimethyl isophthalate, fumaric acid, dodecenyl succinic acid, and 1,10-dodecanedicarboxylic acid. Among these monomers, dimethyl isophthalate, terephthalic acid, dodecenyl succinic acid, or trimellitic acid is preferable. The polyhydric alcohol monomer is a compound having two or more hydroxy groups in one molecule. Examples of the polyhydric alcohol monomer that can be used for synthesis of the amorphous polyester resin include, as a dihydric or trihydric alcohol, ethylene glycol, propylene glycol, 1,4-butanediol, 2,3-butanediol, diethylene glycol, triethylene glycol, 1,5-pentanediol, 1,6-hexanediol, neopentyl glycol, 1,4-cyclohexanedimethanol, dipropylene glycol, polyethylene glycol, polypropylene glycol, an ethylene oxide adduct of bisphenol A (BPA-EO), a propylene oxide adduct of bisphenol A (BPA-PO), glycerin, sorbitol, 1,4-sorbitan, and trimethylolpropane. Among these alcohols, an ethylene oxide adduct of bisphenol A and a propylene oxide adduct of bisphenol A are preferable. Examples of the esterification catalyst that can be used include an alkali metal compound of sodium or lithium; an alkaline earth metal compound of magnesium or calcium; a metal compound of aluminum, zinc, manganese, antimony, titanium, tin, zirconium, or germanium; a phosphorous acid compound; a phosphoric acid compound, and an amine compound. The polymerization temperature is not particularly limited, but is preferably 150° C. or higher and 250° C. or lower. The polymerization time is not particularly limited, but is preferably 0.5 hours or more and 10 hours or less. During polymerization, the pressure inside the reaction system may be reduced as necessary. 
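The glass transition point construction described earlier for the vinyl-based resin (and applied in a similar manner to the amorphous polyester resin) takes Tg as the intersection of the extended pre-shift baseline with the maximum-slope tangent of the DSC curve. That construction reduces to intersecting two straight lines; the sketch below uses made-up slope and intercept values purely for illustration.

```python
def glass_transition_point(baseline, tangent):
    """Intersection of the extended pre-shift baseline with the
    maximum-slope tangent of a DSC heat-flow curve; the x-coordinate of
    the intersection is taken as Tg. Each line is (slope, intercept)."""
    (a1, b1), (a2, b2) = baseline, tangent
    if a1 == a2:
        raise ValueError("baseline and tangent are parallel")
    # Solve a1*x + b1 = a2*x + b2 for x.
    return (b2 - b1) / (a1 - a2)

# Made-up example: a flat baseline at 0.2 mW and a tangent of slope
# 0.05 mW/deg with intercept -2.0 mW intersect at 44 deg C.
tg = glass_transition_point((0.0, 0.2), (0.05, -2.0))
print(round(tg, 1))  # 44.0
```

In practice the analysis software of the calorimeter performs this construction on the second-heating curve; the snippet only shows the final line-intersection arithmetic.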
The content of the amorphous polyester resin with respect to the total mass of the binder resin is preferably 5% by mass or more and 30% by mass or less, and more preferably 10% by mass or more and 20% by mass or less from a viewpoint of improving fold fixability. The glass transition point (Tg) of the amorphous polyester resin is preferably 25° C. or higher and 60° C. or lower, and more preferably 35° C. or higher and 55° C. or lower from a viewpoint of achieving both sufficient low-temperature fixability and heat-resistant storage stability. The glass transition point (Tg) can be measured using a differential scanning calorimeter, for example, a Diamond DSC (manufactured by Perkin Elmer Inc.), in a similar manner to the vinyl-based resin. The weight average molecular weight (Mw) of the amorphous polyester resin can usually be 5000 or more and 100000 or less, but is preferably 10000 or more and 60000 or less, and more preferably 12000 or more and 40000 or less. When the weight average molecular weight of the amorphous polyester resin is within the above range, both sufficient low-temperature fixability and heat-resistant storage stability can be achieved. The weight average molecular weight (Mw) of the amorphous polyester resin can be measured using gel permeation chromatography (GPC). [Crystalline Resin] In the present invention, the crystalline resin is not limited as long as it is a resin exhibiting crystallinity, and a known crystalline resin can be used. Exhibiting crystallinity means having, in an endothermic curve obtained by DSC during temperature rise, not a stepwise endothermic change but a clear endothermic peak at the melting point. The clear endothermic peak refers to a peak having an endothermic peak half-width of 15° C. or less when measurement is performed at a temperature rising rate of 10° C./min in differential scanning calorimetry (DSC). The melting point of the crystalline resin is preferably 50° C. or higher and 90° C.
or lower, more preferably 60° C. or higher and 85° C. or lower, and still more preferably 65° C. or higher and 80° C. or lower from a viewpoint of achieving low-temperature fixability. When the melting point is 50° C. or higher, melting of a part of the crystalline resin in the toner in a storage state can be suppressed. When the melting point is 90° C. or lower, the amount of thermal energy required for fixing is small. The melting point of the crystalline resin can be measured by performing differential scanning calorimetry (DSC) of the toner. For example, the melting point can be determined using a differential scanning calorimeter “Diamond DSC” (manufactured by Perkin Elmer Inc.). The measurement is performed according to measurement conditions (temperature rising/falling conditions) that go through, in this order, a first temperature rising process in which the temperature is raised from room temperature (25° C.) to 150° C. at a temperature rising rate of 10° C./min and isothermally maintained at 150° C. for five minutes, a temperature falling process in which the temperature is lowered from 150° C. to 0° C. at a temperature falling rate of 10° C./min and isothermally maintained at 0° C. for five minutes, and a second temperature rising process in which the temperature is raised from 0° C. to 150° C. at a temperature rising rate of 10° C./min. The above measurement is performed by enclosing 3.0 mg of the toner in an aluminum pan and setting the aluminum pan in a sample holder of the differential scanning calorimeter “Diamond DSC”. An empty aluminum pan is used as a reference. In the analysis, the endothermic curve obtained from the first temperature rising process is used, and the top temperature of the endothermic peak derived from the crystalline resin is defined as the melting point of the crystalline resin.
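The peak-top reading and the 15° C. half-width criterion described above are straightforward to evaluate numerically. The following sketch is not part of the patent; it is an illustrative Python routine (the function name and the synthetic Gaussian curve are assumptions) for extracting the peak-top temperature and half-width from a baseline-subtracted, endothermic-positive first-heating curve.

```python
import math

def endothermic_peak(temps, heat_flow):
    """Return (peak_top_temperature, half_width) of the largest peak.

    temps and heat_flow are parallel lists; heat_flow is assumed to be
    baseline-subtracted, with the endothermic direction positive.
    """
    i_peak = max(range(len(heat_flow)), key=lambda i: heat_flow[i])
    half = heat_flow[i_peak] / 2.0
    # Walk outward from the peak top to the half-height crossings.
    lo = i_peak
    while lo > 0 and heat_flow[lo] > half:
        lo -= 1
    hi = i_peak
    while hi < len(heat_flow) - 1 and heat_flow[hi] > half:
        hi += 1
    return temps[i_peak], temps[hi] - temps[lo]

# Synthetic first-heating curve: Gaussian endotherm centered at 70 deg C.
temps = [i * 0.1 for i in range(1501)]  # 0 to 150 deg C in 0.1 deg steps
curve = [math.exp(-((t - 70.0) ** 2) / (2 * 4.0 ** 2)) for t in temps]
top, width = endothermic_peak(temps, curve)
# A half-width of 15 deg C or less marks a "clear endothermic peak".
is_crystalline = width <= 15.0
```

For the synthetic curve above (standard deviation 4° C.), the measured half-width comes out near 9.4° C., so the peak would qualify as a clear endothermic peak under the criterion in the text.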
In the present invention, the content of the crystalline resin with respect to the total mass of the binder resin is preferably 4% by mass or more and 15% by mass or less, and more preferably 7% by mass or more and 12% by mass or less from a viewpoint of achieving both low-temperature fixability and tacking suppression. The weight average molecular weight (Mw) of the crystalline resin in the present invention can usually be 600 or more and 100000 or less, but is preferably 1000 or more and 29000 or less, more preferably 1000 or more and 20000 or less, and still more preferably 1000 or more and 15000 or less. When the weight average molecular weight is 1000 or more, crystallization proceeds without the crystalline resin being excessively compatible with the vinyl-based resin, which is excellent in terms of tacking suppression. In addition, wax adhesion caused by the crystalline resin dissolving in the wax and thereby failing to function as a nucleating agent can be suppressed. When the weight average molecular weight is 29000 or less, the crystalline resin is likely to be compatible with the vinyl-based resin when being melted, which is excellent in terms of low-temperature fixability. In addition, wax adhesion caused by reduced compatibility with the wax, which would prevent the crystalline resin from seeping to the image surface together with the wax, can be suppressed. The weight average molecular weight of the crystalline resin can be measured using gel permeation chromatography (GPC) in a similar manner to the vinyl-based resin. Note that the weight average molecular weight of the crystalline resin can be measured and calculated after the crystalline resin and the release agent in the toner are separated from the toner. Hereinafter, a specific separation method will be described.
The toner is dispersed in ethanol, which is a poor solvent for the toner, and the temperature is raised to a temperature exceeding the melting points of the crystalline resin and the wax to melt the crystalline resin and the wax. At this time, the pressure may be increased as necessary. Thereafter, a mixture of the crystalline polyester and the wax is collected from the toner by solid-liquid separation. By fractionating this mixture by molecular weight, the crystalline polyester and the wax are separated from each other. The crystalline resin contained in the first toner in the present invention has a moiety having a crystal structure and a crystal nucleating agent moiety derived from a crystal nucleating agent. <Moiety Having a Crystal Structure> The “moiety having a crystal structure” in the present invention refers to a moiety having the above-described structure exhibiting crystallinity in the crystalline resin. The moiety having a crystal structure is present around the crystal nucleating agent moiety described later. Therefore, when the crystal nucleating agent moiety is first crystallized, crystallization is promoted starting from this crystal nucleating agent moiety. In addition, the crystalline resin seeps to a surface of the toner at the time of fixing the toner, and the crystal nucleating agent moiety promotes crystallization of the hydrocarbon wax. Therefore, it is considered that wax adhesion (contamination) is less likely to occur. As the moiety having a crystal structure, a known crystalline resin (for example, a crystalline polyester resin or a crystalline polyurethane resin) is preferably used, and a crystalline polyester resin is particularly preferably used from a viewpoint of sharp-melt behavior at the time of melting and compatibility with the binder resin. That is, the moiety having a crystal structure preferably contains a crystalline polyester resin.
(Crystalline Polyester Resin) In the present invention, the crystalline polyester resin refers to a resin exhibiting crystallinity among polyester resins obtained by a polycondensation reaction between a divalent or higher valent alcohol (polyhydric alcohol component) and a divalent or higher valent carboxylic acid (polycarboxylic acid component). The polycarboxylic acid is a compound having two or more carboxy groups in one molecule. Specific examples thereof include: a saturated aliphatic dicarboxylic acid such as oxalic acid, malonic acid, succinic acid, adipic acid, sebacic acid, azelaic acid, n-dodecylsuccinic acid, nonanedicarboxylic acid, decanedicarboxylic acid, undecanedicarboxylic acid, dodecanedicarboxylic acid (dodecanedioic acid), or tetradecanedicarboxylic acid (tetradecanedioic acid); an alicyclic dicarboxylic acid such as cyclohexanedicarboxylic acid; an aromatic dicarboxylic acid such as phthalic acid, isophthalic acid, or terephthalic acid; a trivalent or higher valent polycarboxylic acid such as trimellitic acid or pyromellitic acid; and anhydrides of these carboxylic acid compounds or alkyl esters of these carboxylic acid compounds, each having 1 to 3 carbon atoms. These compounds may be used singly or in combination of two or more types thereof. The polyhydric alcohol is a compound having two or more hydroxy groups in one molecule. Specific examples thereof include: an aliphatic diol such as 1,2-propanediol, 1,3-propanediol, 1,4-butanediol, 1,5-pentanediol, 1,6-hexanediol, 1,7-heptanediol, 1,8-octanediol, 1,9-nonanediol, dodecanediol, neopentyl glycol, or 1,4-butenediol; and a trihydric or higher hydric polyhydric alcohol such as glycerin, pentaerythritol, trimethylolpropane, or sorbitol. These compounds may be used singly or in combination of two or more types thereof. 
A method for forming the crystalline polyester resin is not particularly limited, and the crystalline polyester resin can be formed by polycondensing (esterifying) the polyhydric alcohol component and the polycarboxylic acid component using a known esterification catalyst. As for a use ratio between the polyhydric alcohol component and the polycarboxylic acid component, the amount of hydroxy groups in the polyhydric alcohol component with respect to the amount of carboxy groups in the polycarboxylic acid component is preferably 1/1.5 or more and 1.5/1 or less, and more preferably 1/1.2 or more and 1.2/1 or less. Examples of a catalyst that can be used for manufacturing the crystalline polyester resin include an alkali metal compound of sodium or lithium; an alkaline earth metal compound of magnesium or calcium; a metal compound of aluminum, zinc, manganese, antimony, titanium, tin, zirconium, or germanium; a phosphorous acid compound; a phosphoric acid compound; and an amine compound. Specific examples of the tin compound include dibutyltin oxide, tin octylate, tin dioctylate, and salts thereof. Examples of the titanium compound include: a titanium alkoxide such as tetranormal butyl titanate, tetraisopropyl titanate, tetramethyl titanate, or tetrastearyl titanate; a titanium acylate such as polyhydroxytitanium stearate; and a titanium chelate such as titanium tetraacetylacetate, titanium tetrabutoxide, titanium lactate, or titanium triethanol aminate. Examples of the germanium compound include germanium dioxide. Examples of the aluminum compound include a hydroxide such as polyaluminum hydroxide, aluminum alkoxide, and tributylaluminate. These compounds may be used singly or in combination of two or more types thereof. The polymerization temperature and the polymerization time are not particularly limited, and the pressure inside the reaction system may be reduced as necessary during polymerization.
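The hydroxy-to-carboxy use ratio described above is simple mole arithmetic over the monomer charge. The sketch below is illustrative only and not part of the patent; the diol/diacid charge amounts are hypothetical numbers chosen to land inside the preferred windows.

```python
def oh_to_cooh_ratio(alcohols, acids):
    """Mole ratio of hydroxy groups to carboxy groups in a monomer charge.

    Each argument is a list of (moles, functional groups per molecule),
    e.g. a diol contributes 2 hydroxy groups per molecule.
    """
    oh = sum(moles * f for moles, f in alcohols)
    cooh = sum(moles * f for moles, f in acids)
    return oh / cooh

# Hypothetical charge: 1.05 mol of a diol against 1.00 mol of a diacid.
ratio = oh_to_cooh_ratio(alcohols=[(1.05, 2)], acids=[(1.00, 2)])
in_preferred = 1 / 1.5 <= ratio <= 1.5 / 1        # broader window from the text
in_more_preferred = 1 / 1.2 <= ratio <= 1.2 / 1   # narrower window from the text
```

With this charge the ratio is 1.05, which satisfies both the 1/1.5 to 1.5/1 and the 1/1.2 to 1.2/1 windows stated above.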
(Hybrid Crystalline Polyester Resin) In the present invention, inclusion of a hybrid crystalline polyester resin obtained by chemically bonding a crystalline polyester polymerization segment and a vinyl-based resin polymerization segment to each other as the crystalline resin is preferable because the hybrid crystalline polyester resin is more likely to be finely dispersed in the toner and excellent low-temperature fixability is achieved. That is, the moiety having a crystal structure preferably contains the hybrid crystalline polyester resin. A ratio of an aliphatic carboxylic acid monomer and an aliphatic alcohol monomer derived from the crystal nucleating agent moiety with respect to all the units derived from monomers constituting the crystalline polyester polymerization segment is preferably 0.1 mol % or more and 3 mol % or less, and more preferably 0.5 mol % or more and 1 mol % or less. When the ratio is 0.1 mol % or more, the effect of the crystal nucleating agent moiety, as a crystal nucleating agent, in suppressing fluctuations in fixability can be sufficiently exhibited. When the ratio is 3 mol % or less, the melting point of the crystal nucleating agent moiety is not too high, and low-temperature fixability is further improved. The crystalline polyester polymerization segment refers to a portion derived from the crystalline polyester resin, that is, a resin segment having not a stepwise endothermic change but a clear endothermic peak in differential scanning calorimetry (DSC) of the toner. The crystalline polyester polymerization segment is not particularly limited as long as it is as defined above.
Examples thereof include a resin having a structure in which a main chain formed by the crystalline polyester polymerization segment is copolymerized with another component, and a resin having a structure in which the crystalline polyester polymerization segment is copolymerized with a main chain containing another component. The crystalline polyester polymerization segment is generated by polycondensing a polycarboxylic acid monomer and a polyhydric alcohol monomer. As the polycarboxylic acid monomer and the polyhydric alcohol monomer, monomers similar to the polycarboxylic acid monomer and the polyhydric alcohol monomer that are raw materials of the above crystalline polyester resin can be used, respectively. A method for forming the crystalline polyester polymerization segment is not particularly limited, and the segment can be formed by polycondensing (esterifying) the polycarboxylic acid and the polyhydric alcohol using a known esterification catalyst. The crystalline polyester polymerization segment used in the present invention is preferably obtained by polymerizing a polyhydric alcohol monomer having 4 to 14 carbon atoms and a polycarboxylic acid monomer having 4 to 14 carbon atoms. When the number of carbon atoms is 4 or more, the number of hydrogen bonds derived from ester bonds is not too large, an excessive increase in the melting point of the crystalline polyester resin is suppressed, and low-temperature fixability is further improved. When the number of carbon atoms is 14 or less, the interaction between aliphatic groups is not too strong, an excessive increase in the melting point of the crystalline polyester resin is suppressed, and low-temperature fixability is further improved. The polymerization segment of the vinyl-based resin (also referred to as a vinyl-based polymerization segment) is synthesized from a vinyl monomer which is a raw material of the vinyl-based resin.
In the present invention, the content of the vinyl-based polymerization segment is preferably 3% by mass or more and 40% by mass or less, and more preferably 5% by mass or more and 20% by mass or less with respect to the total mass of the crystalline resin. This can increase low-temperature fixability. In particular, when the content is 3% by mass or more, stability of the interface between the crystalline resin and the vinyl-based resin which is the main binder is not excessively lowered, and sufficiently fine dispersion can be achieved. As a result, low-temperature fixability is further improved. When the content is 40% by mass or less, compatibility of the crystalline resin with the vinyl-based resin which is the main binder is not too high. As a result, heat-resistant storage stability can be ensured. Examples of a method for synthesizing the hybrid crystalline polyester resin include the following (1) to (3).
(1) A method for synthesizing a hybrid crystalline polyester resin, the method including polymerizing a vinyl-based polymerization segment, and then performing a polymerization reaction to form a crystalline polyester polymerization segment in the presence of the vinyl-based polymerization segment.
(2) A method for synthesizing a hybrid crystalline polyester resin, the method including forming a crystalline polyester polymerization segment and a vinyl-based polymerization segment, and then bonding the crystalline polyester polymerization segment and the vinyl-based polymerization segment to each other.
(3) A method for synthesizing a hybrid crystalline polyester resin, the method including polymerizing a crystalline polyester polymerization segment, and then performing a polymerization reaction to form a vinyl-based polymerization segment in the presence of the crystalline polyester polymerization segment.
Among the above methods (1) to (3), the method (1) is preferable from a viewpoint of easily forming a hybrid crystalline polyester resin having a structure in which a crystalline polyester resin chain is grafted on a vinyl-based resin chain, and of simplifying the manufacturing process. The acid value of the crystalline resin in the present invention is preferably 9 mgKOH/g or more and 30 mgKOH/g or less, and more preferably 15 mgKOH/g or more and 23 mgKOH/g or less from a viewpoint of low-temperature fixability and fold fixability. Note that the acid value of the crystalline polyester resin is the number of mg of potassium hydroxide (mgKOH/g) required for neutralizing the carboxy groups present in 1 g of the resin. Specifically, the acid value is determined by a method in accordance with JIS K0070-1992. <Crystal Nucleating Agent Moiety> In the present invention, the “crystal nucleating agent moiety” is a moiety having a higher crystallization rate than the moiety having a crystal structure. First, the crystal nucleating agent moiety rapidly generates a crystal nucleus, and crystallization of the moiety having a crystal structure is promoted starting from the crystal nucleus. The crystal nucleating agent moiety is not particularly limited as long as it is a compound having a higher crystallization rate than the moiety having a crystal structure. In addition, the crystal nucleating agent moiety is preferably a compound having a main chain containing a hydrocarbon-based moiety and having one or more functional groups that can react with an end of a polyester moiety from a viewpoint of a high crystallization rate. Furthermore, the crystal nucleating agent moiety is preferably a compound having a linear hydrocarbon-based moiety and having one or more functional groups that react with a polyester moiety.
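The acid value defined above (mg of KOH per g of resin, determined per JIS K0070) corresponds to the usual titration relation: acid value = 56.11 × c × V / m, where 56.11 g/mol is the molar mass of KOH, c the titrant concentration in mol/L, V the titre in mL, and m the sample mass in g. The following is a minimal sketch, not part of the patent; the titration numbers are hypothetical.

```python
KOH_MOLAR_MASS = 56.11  # g/mol

def acid_value(titre_ml, koh_mol_per_l, sample_g):
    """mg of KOH needed to neutralize the acid groups in 1 g of resin."""
    return KOH_MOLAR_MASS * koh_mol_per_l * titre_ml / sample_g

# Hypothetical titration: 1.00 g of resin consumes 3.20 mL of 0.1 mol/L KOH.
av = acid_value(titre_ml=3.20, koh_mol_per_l=0.1, sample_g=1.00)
# About 18 mgKOH/g, inside the more preferable 15-23 mgKOH/g window above.
```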
According to findings of the present inventors, when a crystal nucleating agent is introduced separately from a resin, if the molecular weight of the crystal nucleating agent is low, the crystal nucleating agent may bleed out of the toner during storage, deteriorating storage stability or changing chargeability. In order to suppress this disadvantage, the crystal nucleating agent is preferably contained in the resin. In addition, according to findings of the present inventors, when the crystal nucleating agent is contained in an amorphous resin, particularly an amorphous resin having a high affinity with the vinyl-based resin, the wax is less likely to seep to the image surface at the time of fixing, and separability may deteriorate or fold fixability may be reduced due to the wax remaining in the image. Therefore, the crystal nucleating agent is preferably contained in the crystalline resin. The crystal nucleating agent moiety is not particularly limited as long as it does not hinder the low-temperature fixability of the toner according to an embodiment of the present invention and causes crystallization more rapidly than a crystalline resin having no crystal nucleating agent moiety. However, the crystal nucleating agent moiety is preferably derived from at least one compound selected from the group consisting of an aliphatic monocarboxylic acid having 10 or more and 30 or less carbon atoms and an aliphatic monoalcohol having 10 or more and 30 or less carbon atoms from a viewpoint that the nucleation effect can be more stably exhibited and the effect of the present invention can be more preferably exhibited. The aliphatic compound may be an unsaturated compound, a saturated compound, a branched compound, or a linear compound without limitation, but is preferably a saturated linear compound having 10 or more and 20 or less carbon atoms from a viewpoint of achieving both fold fixability and tacking suppression.
The above crystal nucleating agent moiety may be bonded to any site of the above moiety having a crystal structure, but is preferably bonded to an end of a molecular chain, which easily promotes crystallization of the moiety having a crystal structure. Examples of the aliphatic monocarboxylic acid include stearic acid, lauric acid, arachidic acid, n-behenic acid, n-tetracosanoic acid, n-hexacosanoic acid, n-octacosanoic acid, and n-triacontanoic acid. Examples of the above aliphatic monoalcohol include stearyl alcohol, lauryl alcohol, behenyl alcohol, arachidyl alcohol, 1-octadecanol, 1-icosanol, 1-docosanol, 1-tetracosanol, 1-hexacosanol, 1-octacosanol, and 1-triacontanol. The content of the crystal nucleating agent moiety with respect to the total mass of the crystalline resin excluding the crystal nucleating agent is preferably 1% by mass or more and 15% by mass or less, and more preferably 3% by mass or more and 9% by mass or less from a viewpoint of a tacking suppression effect and fold fixability. When the content is 1% by mass or more, tacking can be suppressed and fold fixability can be improved. When the content is 15% by mass or less, an excessively strong interaction between the crystalline resin and the wax, which would make the two compatible with each other, can be suppressed. As a result, crystallization of the crystalline resin can be accelerated. The crystal nucleating agent moiety is formed by polycondensing a monomer which is a raw material of the crystalline resin, and then putting the crystal nucleating agent into the reaction container to cause a reaction. At this time, the reaction only needs to be one in which the crystalline polyester polymerization segment and the crystal nucleating agent can be chemically bonded to each other; examples thereof include, but are not limited to, causing the reaction by heating at 200° C. under normal pressure. 1-1-2.
Release Agent The first toner in the present invention contains a release agent containing a hydrocarbon wax. The hydrocarbon wax generally has a high melting point and a high crystallization start temperature, and therefore acts to suppress tacking. In addition, its low affinity with a fixing member suppresses adhesion to the member, in-machine contamination, and image unevenness. The melting point of the hydrocarbon wax used as the release agent according to an embodiment of the present invention is preferably 80° C. or higher and 92° C. or lower, and more preferably 80° C. or higher and 88° C. or lower from a viewpoint of tacking suppression and low-temperature fixability. The hydrocarbon wax is not particularly limited, and a known hydrocarbon wax can be used. Examples of the hydrocarbon wax include: an aliphatic hydrocarbon-based wax such as a low molecular weight polyethylene, a low molecular weight polypropylene, a microcrystalline wax, a paraffin wax, or a Fischer-Tropsch wax; an oxide of an aliphatic hydrocarbon wax such as a polyethylene oxide wax and a block copolymer thereof; and a wax obtained by grafting an aliphatic hydrocarbon wax with a vinyl-based monomer such as styrene or acrylic acid. The content of the release agent can be usually 1% by mass or more and 23% by mass or less, preferably 3% by mass or more and 16% by mass or less, and more preferably 5% by mass or more and 14% by mass or less with respect to the total mass of the toner. A content of the release agent within the above range achieves sufficient fixing separability. 1-1-3. Colorant The first toner contains a yellow colorant. The yellow colorant is not particularly limited, and a known yellow colorant can be used. Examples of the yellow colorant (pigment) include C.I. Pigment Yellow 12, C.I. Pigment Yellow 13, C.I. Pigment Yellow 14, C.I. Pigment Yellow 15, C.I. Pigment Yellow 17, C.I. Pigment Yellow 74, C.I. Pigment Yellow 93, C.I.
Pigment Yellow 94, C.I. Pigment Yellow 138, C.I. Pigment Yellow 155, C.I. Pigment Yellow 180, and C.I. Pigment Yellow 185. The first toner may contain a colorant other than the yellow colorant. Examples of the colorant other than the yellow colorant include a black colorant, an orange colorant, a magenta or red colorant, and a green or cyan colorant. Examples of the black colorant include carbon black such as furnace black, channel black, acetylene black, thermal black, or lamp black, magnetite, and ferrite. Examples of the orange colorant (pigment) include C.I. Pigment Orange 31 and C.I. Pigment Orange 43. Examples of the magenta or red colorant (pigment) include C.I. Pigment Red 2, C.I. Pigment Red 3, C.I. Pigment Red 5, C.I. Pigment Red 6, C.I. Pigment Red 7, C.I. Pigment Red 15, C.I. Pigment Red 16, C.I. Pigment Red 48:1, C.I. Pigment Red 53:1, C.I. Pigment Red 57:1, C.I. Pigment Red 122, C.I. Pigment Red 123, C.I. Pigment Red 139, C.I. Pigment Red 144, C.I. Pigment Red 149, C.I. Pigment Red 150, C.I. Pigment Red 166, C.I. Pigment Red 177, C.I. Pigment Red 178, C.I. Pigment Red 184, C.I. Pigment Red 222, and C.I. Pigment Red 269. Examples of the green or cyan colorant (pigment) include C.I. Pigment Blue 15, C.I. Pigment Blue 15:2, C.I. Pigment Blue 15:3, C.I. Pigment Blue 15:4, C.I. Pigment Blue 16, C.I. Pigment Blue 60, C.I. Pigment Blue 62, C.I. Pigment Blue 66, and C.I. Pigment Green 7. The content of the colorant is preferably 1% by mass or more and 10% by mass or less, and more preferably 2% by mass or more and 15% by mass or less with respect to the total mass of the toner. Within such a range, color reproducibility of an image can be ensured. The size of a colorant particle is not particularly limited, but a volume-based median diameter is preferably 10 nm or more and 1000 nm or less, and more preferably 50 nm or more and 500 nm or less from a viewpoint of obtaining high color reproducibility.
The median diameter of the particle can be measured using “Nanotrac Wave 2-EX150” (manufactured by Microtrack Bell Co., Ltd.). 1-1-4. Other Components Other components contained in the toner according to an embodiment of the present invention are not particularly limited and can be appropriately selected depending on a purpose. Examples thereof include various known external additives such as inorganic fine particles and organic fine particles, a charge control agent, and a developer. The inorganic fine particles are generally used for the purpose of improving fluidity of the toner. Examples of the inorganic fine particles include fine particles of silica, alumina, titanium oxide, barium titanate, magnesium titanate, calcium titanate, strontium titanate, zinc oxide, silica sand, clay, mica, wollastonite, diatomaceous earth, cerium chloride, red iron oxide, chromium oxide, cerium oxide, antimony trioxide, magnesium oxide, zirconium oxide, silicon carbide, and silicon nitride. Among these fine particles, silica fine particles are preferable, and hydrophobized silica fine particles are particularly preferable. The organic fine particles are used generally for the purpose of improving cleaning performance or transferability, and sometimes for the purpose of improving chargeability. Examples of the organic fine particles include fine particles of polystyrene, polymethylmethacrylate, polyvinylidene fluoride, and a polystyrene-acrylic copolymer. The content of the external additive is preferably 0.05% by mass or more and 5% by mass or less, and more preferably 0.1% by mass or more and 3% by mass or less with respect to the total mass of the toner particles. The charge control agent is generally used for the purpose of improving chargeability. 
Examples of the charge control agent include a known compound such as a nigrosine-based dye, a metal salt of naphthenic acid or a higher fatty acid, an alkoxylated amine, a quaternary ammonium salt, an azo-based metal complex, or a salicylic acid metal salt. The toner according to an embodiment of the present invention may be used as a magnetic or non-magnetic one-component developer, or may be mixed with a carrier to be used as a two-component developer. When the toner is used as a two-component developer, as the carrier, magnetic particles containing a conventionally known material, for example, iron, ferrite, or magnetite, or an alloy of such a material with a metal such as aluminum or lead, can be used. Ferrite particles are preferable because the saturation magnetization and the surface shape of a magnetic particle can be easily adjusted depending on the composition. The carrier may be a coated carrier obtained by coating surfaces of magnetic particles with a coating agent such as a resin, or may be a dispersion type carrier obtained by dispersing magnetic material fine powder in a binder resin. A volume-based median diameter of the carrier is preferably 20 μm or more and 100 μm or less, and more preferably 25 μm or more and 80 μm or less. When the median diameter is less than 20 μm, saturation magnetization may be small, and carrier adhesion may occur. When the median diameter is more than 100 μm, stacking or roughness of a halftone image may occur, and graininess may deteriorate. The volume-based median diameter (d50) of the carrier can be measured, for example, with a laser diffraction type particle size distribution measuring device “HELOS” (manufactured by Sympatec GmbH) equipped with a wet type disperser. 1-2. Second Toner The second toner only needs to contain a cyan colorant represented by general formula (1). The second toner is a toner containing a binder resin, a colorant, and a release agent like the first toner.
In general formula (1), M represents any one of a silicon atom, a germanium atom, a cobalt atom, and a zinc atom, A1, A2, A3, and A4 each independently represent an atomic group forming an aromatic ring which may have an electron-withdrawing substituent, and Zs each independently represent an aryloxy group having 6 or more and 18 or less carbon atoms, an alkoxy group having 1 or more and 22 or less carbon atoms, or a group represented by the following general formula (1-1). In general formula (1-1), R1, R2, and R3 each independently represent an alkyl group, an aryl group, or an alkoxy group having 1 or more and 6 or less carbon atoms. The cyan colorant represented by general formula (1) is a high chroma pigment of a phthalocyanine compound having a tetraazaporphyrin structure with an axial ligand. By inclusion of the colorant represented by general formula (1) in the second toner, the colorant is easily dispersed uniformly in a toner matrix particle and in a fixed image, and a high chroma image can be obtained. In addition, because the dispersion diameter of the colorant is small, the pigment is unlikely to function as a nucleating agent. Furthermore, when the vinyl-based resin which is the matrix is cooled, crystallization starts in a state of high elasticity. Therefore, it is considered that excessive growth of domains of the crystalline resin can be suppressed to improve fold fixability. 1-2-1. Binder Resin The binder resin contained in the second toner can be selected from resins similar to the binder resin contained in the first toner. However, in the second toner, the content of the crystal nucleating agent moiety with respect to the total mass of the crystalline resin excluding the crystal nucleating agent is preferably less than 3% by mass, and more preferably less than 1% by mass from a viewpoint of improving fold fixability and low-temperature fixability.
The binder resin contained in the second toner can be similar to that described for the first toner except for the content of the crystal nucleating agent, and therefore detailed description thereof will be omitted.
1-2-2. Release Agent
The second toner in the present invention contains a release agent containing a hydrocarbon wax. The release agent can be similar to that described for the first toner, and therefore detailed description thereof will be omitted.
1-2-3. Colorant
The second toner contains a cyan colorant represented by general formula (1). A toner using the compound represented by the above general formula (1) as a colorant exhibits better separability than a toner using a phthalocyanine compound having no axial ligand as a colorant. It is presumed that this is because the compound of general formula (1) has a bulky substituent (Z), and therefore the colorant is easily dispersed uniformly in a toner matrix particle and in a fixed image, the wax (release agent) in the toner is likely to seep to an image surface at the time of fixing, and the amount of the release agent remaining inside the image is reduced to improve fold fixability. In the above general formula (1), M represents any one of a silicon atom (Si), a germanium atom (Ge), a cobalt atom (Co), and a zinc atom (Zn). M is preferably a silicon atom (Si) from a viewpoint of excellent color of a formed image. In the above general formula (1), A1, A2, A3, and A4 each independently represent an atomic group forming an aromatic ring which may have an electron-withdrawing substituent. Examples of the electron-withdrawing substituent include a chloro group (—Cl), a halogenated methyl group (—CClX2), a trifluoromethyl group (—CF3), and a nitro group (—NO2). Note that "X" in the halogenated methyl group (—CClX2) represents a halogen atom.
In the above general formula (1), Zs each represent an aryloxy group having 6 or more and 18 or less carbon atoms, an alkoxy group having 1 or more and 22 or less carbon atoms, or a group represented by general formula (1-1). Z is preferably a group represented by general formula (1-1), more preferably an n-propyl group, an isopropyl group, an n-butyl group, an isobutyl group, or a t-butyl group from a viewpoint of ease of synthesis and increasing a bulk of a molecule. In the above general formula (1-1), R1, R2, and R3 each independently represent an alkyl group, an aryl group, or an alkoxy group having 1 or more and 6 or less carbon atoms. R1, R2, and R3 are each preferably an alkyl group, an aryl group, or an alkoxy group having 1 or more and 6 or less carbon atoms, and more preferably an n-propyl group, an isopropyl group, an n-butyl group, an isobutyl group, or a t-butyl group. As the colorant, the compound represented by general formula (1) may be used singly, or two or more of the compounds may be used in combination. A known colorant may be used in combination with the above colorant. In particular, the compound represented by general formula (1) has high molar absorptivity, and therefore exhibits favorable color reproducibility with a smaller amount of addition than in the prior art. The ratio of the compound represented by general formula (1) in the colorant is not particularly limited as long as the compound is blended to such an extent that the compound can exhibit its function. Colorants for obtaining toners of the respective colors can be used singly or in combination of two or more types thereof for each color. The second toner may contain a colorant other than the cyan colorant. Examples of the colorant other than the cyan colorant include a black colorant, an orange or yellow colorant, a magenta or red colorant, and a green colorant.
Examples of these colorants can be similar to those described for the first toner, and therefore detailed description thereof will be omitted. The content and particle size of the colorant contained in the second toner can be similar to those described for the first toner, and therefore detailed description thereof will be omitted.
1-2-4. Other Components
Other components contained in the second toner can be appropriately selected according to a purpose in a similar manner to those described for the other components contained in the first toner, and therefore detailed description thereof will be omitted.
1-3. Third Toner
The toner set according to an embodiment of the present invention may contain a third toner containing a magenta colorant. The third toner contains a binder resin containing a vinyl-based resin and a crystalline resin, and a release agent containing a hydrocarbon wax, in which the content of the vinyl-based resin with respect to the total mass of the binder resin is 50% by mass or more.
1-3-1. Binder Resin
The binder resin contained in the third toner can be selected from resins similar to the binder resin contained in the first toner. However, in the third toner, the content of the crystal nucleating agent moiety with respect to the total mass of the crystalline resin excluding the crystal nucleating agent is preferably less than 3% by mass, and more preferably less than 1% by mass from a viewpoint of improving fold fixability and low-temperature fixability. The binder resin contained in the third toner can be similar to that described for the first toner except for the content of the crystal nucleating agent, and therefore detailed description thereof will be omitted.
1-3-2. Release Agent
The third toner in the present invention contains a release agent containing a hydrocarbon wax. The release agent can be similar to that described for the first toner, and therefore detailed description thereof will be omitted.
1-3-3.
Colorant
The third toner contains a magenta colorant. The colorant contained in the third toner preferably contains a compound obtained by a reaction between a colorant represented by general formula (2) and a metal-containing compound represented by general formula (3) in an amount of 50% by mass or more with respect to the total mass of the magenta colorant from a viewpoint of improving low-temperature fixability and fold fixability. In general formula (2), Rx1 and Rx2 each independently represent a substituted or unsubstituted linear, branched, or cyclic alkyl group having 1 or more and 20 or less carbon atoms, Lx represents a hydrogen atom or a substituted or unsubstituted linear, branched, or cyclic alkyl group having 1 or more and 20 or less carbon atoms, Gx1 represents a substituted or unsubstituted linear, branched, or cyclic alkyl group having 2 or more and 20 or less carbon atoms, Gx2 represents a substituted or unsubstituted linear or branched alkyl group having 1 or more and 5 or less carbon atoms, Gx3 represents a hydrogen atom, a halogen atom, a group represented by Gx4-CO—NH—, or a group represented by Gx5-N(Gx6)—CO—, in which Gx4 represents a substituent, Gx5 and Gx6 each independently represent a hydrogen atom or a substituent, and Qx1, Qx2, Qx3, Qx4, and Qx5 each independently represent a hydrogen atom or a substituent. In general formula (3), R4 represents a substituted or unsubstituted linear, branched, or cyclic alkyl group having 1 or more and 20 or less carbon atoms, R5 represents a hydrogen atom, an alkoxycarbonyl group, an arylcarbonyl group, an aryloxycarbonyl group, a sulfamoyl group, a sulfinyl group, an alkylsulfonyl group, an arylsulfonyl group, an acyl group, a nitrophenyl group, a halogen atom, or a cyano group, R6 represents a substituted or unsubstituted aromatic hydrocarbon-containing group having 9 or more and 120 or less carbon atoms, and M represents a divalent metal element.
In the above general formula (2), Rx1and Rx2each independently represent a substituted or unsubstituted linear, branched, or cyclic alkyl group having 1 or more and 20 or less carbon atoms. Specific examples thereof include a methyl group, an ethyl group, an n-propyl group, an isopropyl group, a 2-methylpropyl group, an n-butyl group, an isobutyl group, a sec-butyl group, a tert-butyl group, an n-pentyl group, an iso-amyl group, a tert-pentyl group, a neopentyl group, an n-hexyl group, a 3-methylpentan-2-yl group, a 3-methylpentan-3-yl group, a 4-methylpentyl group, a 4-methylpentan-2-yl group, a 1,3-dimethylbutyl group, a 3,3-dimethylbutyl group, a 3,3-dimethylbutan-2-yl group, an n-heptyl group, a 1-methylhexyl group, a 3-methylhexyl group, a 4-methylhexyl group, a 5-methylhexyl group, a 1-ethylpentyl group, a 1-(n-propyl) butyl group, a 1,1-dimethylpentyl group, a 1,4-dimethylpentyl group, a 1,1-diethylpropyl group, a 1,3,3-trimethylbutyl group, a 1-ethyl-2,2-dimethylpropyl group, an n-octyl group, a 2-ethylhexyl group, a 2-methylhexan-2-yl group, a 2,4-dimethylpentan-3-yl group, a 1,1-dimethylpentan-1-yl group, a 2,2-dimethylhexan-3-yl group, a 2,3-dimethylhexan-2-yl group, a 2,5-dimethylhexan-2-yl group, a 2,5-dimethylhexan-3-yl group, a 3,4-dimethylhexan-3-yl group, a 3,5-dimethylhexan-3-yl group, a 1-methylheptyl group, a 2-methylheptyl group, a 5-methylheptyl group, a 2-methylheptan-2-yl group, a 3-methylheptan-3-yl group, a 4-methylheptan-3-yl group, a 4-methylheptan-4-yl group, a 1-ethylhexyl group, a 2-ethylhexyl group, a 1-propylpentyl group, a 2-propylpentyl group, a 1,1-dimethylhexyl group, a 1,4-dimethylhexyl group, a 1,5-dimethylhexyl group, a 1-ethyl-1-methylpentyl group, a 1-ethyl-4-methylpentyl group, a 1,1,4-trimethylpentyl group, a 2,4,4-trimethylpentyl group, a 1-isopropyl-1,2-dimethylpropyl group, a 1,1,3,3-tetramethylbutyl group, an n-nonyl group, a 1-methyloctyl group, a 6-methyloctyl group, a 1-ethylheptyl group, a 1-(n-butyl) pentyl 
group, a 4-methyl-1-(n-propyl) pentyl group, a 1,5,5-trimethylhexyl group, a 1,1,5-trimethylhexyl group, a 2-methyloctan-3-yl group, an n-decyl group, a 1-methylnonyl group, a 1-ethyloctyl group, a 1-(n-butyl) hexyl group, a 1,1-dimethyloctyl group, a 3,7-dimethyloctyl group, an n-undecyl group, a 1-methyldecyl group, a 1-ethylnonyl group, an n-dodecyl group, a 1-methylundecyl group, an n-tridecyl group, an n-tetradecyl group, a 1-methyltridecyl group, an n-pentadecyl group, an n-hexadecyl group, an n-heptadecyl group, an n-octadecyl group, an n-nonadecyl group, an n-eicosyl group, a cyclopropyl group, a cyclobutyl group, a cyclopentyl group, a cyclohexyl group, and a 4-tert-butyl-cyclohexyl group. One or more hydrogen atoms of each of the above alkyl groups may be replaced with substituents. Examples of the substituents include an alkenyl group (for example, a vinyl group or an allyl group), an alkynyl group (for example, an ethynyl group or a propargyl group), an aromatic hydrocarbon group (for example, a phenyl group or a naphthyl group), an aromatic heterocyclic group (for example, a furyl group, a thienyl group, a pyridyl group, a pyridazyl group, a pyrimidyl group, a pyrazyl group, a triazyl group, an imidazolyl group, a pyrazolyl group, a thiazolyl group, a benzimidazolyl group, a benzoxazolyl group, a quinazolyl group, or a phthalazyl group), a heterocyclic group (for example, a pyrrolidyl group, an imidazolidyl group, a morpholyl group, or an oxazolidyl group), an alkoxy group (for example, a methoxy group, an ethoxy group, a propyloxy group, a pentyloxy group, a hexyloxy group, an octyloxy group, or a dodecyloxy group), a cycloalkoxy group (for example, a cyclopentyloxy group or a cyclohexyloxy group), an aryloxy group (for example, a phenoxy group or a naphthyloxy group), an alkylthio group (for example, a methylthio group, an ethylthio group, a propylthio group, a pentylthio group, a hexylthio group, an octylthio group, or a dodecylthio group), a
cycloalkylthio group (for example, a cyclopentylthio group or a cyclohexylthio group), an arylthio group (for example, a phenylthio group or a naphthylthio group), an alkoxycarbonyl group (for example, a methyloxycarbonyl group, an ethyloxycarbonyl group, a butyloxycarbonyl group, an octyloxycarbonyl group, or a dodecyloxycarbonyl group), an alkoxyalkylene ether group (for example, a methoxyethylene ether group), an alkylaminocarbonyl group (for example, a diethylaminocarbonyl group), an aryloxycarbonyl group (for example, a phenyloxycarbonyl group or a naphthyloxycarbonyl group), a phosphoryl group (for example, a dimethoxyphosphoryl group or a diphenylphosphoryl group), a sulfamoyl group (for example, an aminosulfonyl group, a methylaminosulfonyl group, a dimethylaminosulfonyl group, a butylaminosulfonyl group, a hexylaminosulfonyl group, a cyclohexylaminosulfonyl group, an octylaminosulfonyl group, a dodecylaminosulfonyl group, a phenylaminosulfonyl group, a naphthylaminosulfonyl group, or a 2-pyridylaminosulfonyl group), an acyl group (for example, an acetyl group, an ethylcarbonyl group, a propylcarbonyl group, a pentylcarbonyl group, a cyclohexylcarbonyl group, an octylcarbonyl group, a 2-ethylhexylcarbonyl group, a dodecylcarbonyl group, a phenylcarbonyl group, a naphthylcarbonyl group, or a pyridylcarbonyl group), an acyloxy group (for example, an acetyloxy group, an ethylcarbonyloxy group, a butylcarbonyloxy group, an octylcarbonyloxy group, a dodecylcarbonyloxy group, or a phenylcarbonyloxy group), an amido group (for example, a methylcarbonylamino group, an ethylcarbonylamino group, a dimethylcarbonylamino group, a propylcarbonylamino group, a pentylcarbonylamino group, a cyclohexylcarbonylamino group, a 2-ethylhexylcarbonylamino group, an octylcarbonylamino group, a dodecylcarbonylamino group, a phenylcarbonylamino group, or a naphthylcarbonylamino group), a carbamoyl group (for example, an aminocarbonyl group, a methylaminocarbonyl group, a dimethylaminocarbonyl
group, a propylaminocarbonyl group, a pentylaminocarbonyl group, a cyclohexylaminocarbonyl group, an octylaminocarbonyl group, a 2-ethylhexylaminocarbonyl group, a dodecylaminocarbonyl group, a phenylaminocarbonyl group, a naphthylaminocarbonyl group, or a 2-pyridylaminocarbonyl group), a ureido group (for example, a methylureido group, an ethylureido group, a pentylureido group, a cyclohexylureido group, an octylureido group, a dodecylureido group, a phenylureido group, a naphthylureido group, or a 2-pyridylaminoureido group), a sulfinyl group (for example, a methylsulfinyl group, an ethylsulfinyl group, a butylsulfinyl group, a cyclohexylsulfinyl group, a 2-ethylhexylsulfinyl group, a dodecylsulfinyl group, a phenylsulfinyl group, a naphthylsulfinyl group, or a 2-pyridylsulfinyl group), an alkylsulfonyl group (for example, a methylsulfonyl group, an ethylsulfonyl group, a butylsulfonyl group, a cyclohexylsulfonyl group, a 2-ethylhexylsulfonyl group, or a dodecylsulfonyl group), an arylsulfonyl group (for example, a phenylsulfonyl group, a naphthylsulfonyl group, or a 2-pyridylsulfonyl group), an amino group (for example, an amino group, an ethylamino group, a dimethylamino group, a butylamino group, a dibutylamino group, a cyclopentylamino group, a 2-ethylhexylamino group, a dodecylamino group, an anilino group, a naphthylamino group, or a 2-pyridylamino group), an azo group (for example, a phenylazo group), an alkylsulfonyloxy group (for example, a methanesulfonyloxy group), a cyano group, a nitro group, a halogen atom (for example, a fluorine atom, a chlorine atom, or a bromine atom), and a hydroxy group. These groups may further have substituents.
Among these substituents, an aromatic hydrocarbon group (preferably having 6 or more and 20 or less carbon atoms), an alkoxy group (preferably having 1 or more and 10 or less carbon atoms), a cycloalkoxy group (preferably having 4 or more and 10 or less carbon atoms), a halogen atom, a hydroxy group, an alkoxyalkylene ether group (preferably an alkoxy group having 1 to 10 carbon atoms and an alkylene group having 1 or more and 10 or less carbon atoms), or an alkylaminocarbonyl group (preferably an alkyl group having 1 or more and 10 or less carbon atoms) is preferable. Rx1and Rx2are each independently preferably an unsubstituted alkyl group or an alkoxy group-substituted alkyl group, and more preferably an unsubstituted alkyl group. In addition, the total number of carbon atoms contained in the alkyl group used for Rx1and carbon atoms contained in the alkyl group used for Rx2is preferably 2 or more. In the above general formula (2), Gx1represents a substituted or unsubstituted linear, branched, or cyclic alkyl group having 2 or more and 20 or less carbon atoms. Specific examples of the alkyl group are similar to those of the alkyl groups used for Rx1and Rx2excluding a methyl group, and therefore detailed description thereof will be omitted here. In addition, specific examples of the substituents are similar to those of the substituents that can be used for Rx1and Rx2, and therefore detailed description thereof will be omitted here. Gx1is preferably a branched alkyl group, more preferably a tertiary alkyl group, still more preferably an isopropyl group or a t-butyl group, and particularly preferably a t-butyl group. In the above general formula (2), Gx2represents a substituted or unsubstituted linear or branched alkyl group having 1 or more and 5 or less carbon atoms. 
Specific examples of the alkyl group used for Gx2 include a methyl group, an ethyl group, an n-propyl group, an isopropyl group, an n-butyl group, an isobutyl group, a sec-butyl group, a tert-butyl group, an n-pentyl group, an iso-amyl group, a tert-pentyl group, and a neopentyl group. Specific examples of the substituents are similar to those of the substituents that can be used for Rx1 and Rx2, and therefore detailed description thereof will be omitted here. Gx2 is preferably a methyl group or an ethyl group from a viewpoint of obtaining the effects of the present invention more effectively. In the above general formula (2), Gx3 represents a hydrogen atom, a halogen atom, a group represented by Gx4-CO—NH—, or a group represented by Gx5-N(Gx6)—CO—, in which Gx4 represents a substituent, and Gx5 and Gx6 each independently represent a hydrogen atom or a substituent. Specific examples of the substituents used for Gx4, Gx5, and Gx6 include, in addition to the substituents that can be used for Rx1 and Rx2, a linear, branched, or cyclic alkyl group having 1 or more and 20 or less carbon atoms. Gx3 is preferably a hydrogen atom or a diethylaminocarbonyl group. In the above general formula (2), Qx1, Qx2, Qx3, Qx4, and Qx5 each independently represent a hydrogen atom or a substituent. Specific examples of the substituents used for Qx1, Qx2, Qx3, Qx4, and Qx5 include, in addition to the substituents that can be used for Rx1 and Rx2, a linear, branched, or cyclic alkyl group having 1 or more and 20 or less carbon atoms. Qx1, Qx2, Qx3, Qx4, and Qx5 are each independently preferably a hydrogen atom, an alkyl group, a halogen atom, an alkoxy group (preferably having 1 or more and 10 or less carbon atoms), or an aryl group, and are each more preferably a hydrogen atom. Examples of the colorant represented by the above general formula (2) include the following compounds.
The colorant represented by the above general formula (2) can be synthesized by referring to a conventionally known method described in JP 2016-130806 A. In the above general formula (3), R4 represents a substituted or unsubstituted linear, branched, or cyclic alkyl group having 1 or more and 20 or less carbon atoms. Specific examples of the alkyl group are similar to the alkyl groups used for Rx1 and Rx2 in general formula (2) above, and therefore detailed description thereof will be omitted here. One or more hydrogen atoms of each of the above alkyl groups may be replaced with substituents. Examples of the substituents include an alkenyl group (for example, a vinyl group or an allyl group), an alkynyl group (for example, an ethynyl group or a propargyl group), an aryl group (for example, a phenyl group, a naphthyl group, or a 4-octyloxyphenyl group), a heterocyclic aryl group (for example, a furyl group, a thienyl group, a pyridyl group, a pyridazyl group, a pyrimidyl group, a pyrazyl group, a triazyl group, an imidazolyl group, a pyrazolyl group, a thiazolyl group, a benzimidazolyl group, a benzoxazolyl group, a quinazolyl group, or a phthalazyl group), a heterocyclic group (for example, a pyrrolidyl group, an imidazolidyl group, a morpholyl group, or an oxazolidyl group), an alkoxy group (for example, a methoxy group, an ethoxy group, a propyloxy group, a pentyloxy group, a hexyloxy group, an octyloxy group, a dodecyloxy group, a cyclopentyloxy group, or a cyclohexyloxy group), an aryloxy group (for example, a phenoxy group or a naphthyloxy group), an alkylthio group (for example, a methylthio group, an ethylthio group, a propylthio group, a pentylthio group, a hexylthio group, an octylthio group, a dodecylthio group, a cyclopentylthio group, or a cyclohexylthio group), an arylthio group (for example, a phenylthio group or a naphthylthio group), an alkoxycarbonyl group (for example, a methyloxycarbonyl group, an ethyloxycarbonyl group, a
butyloxycarbonyl group, an octyloxycarbonyl group, or a dodecyloxycarbonyl group), an aryloxycarbonyl group (for example, a phenyloxycarbonyl group or a naphthyloxycarbonyl group), a sulfamoyl group (for example, an aminosulfonyl group, a methylaminosulfonyl group, a dimethylaminosulfonyl group, a butylaminosulfonyl group, a hexylaminosulfonyl group, a cyclohexylaminosulfonyl group, an octylaminosulfonyl group, a dodecylaminosulfonyl group, a phenylaminosulfonyl group, a naphthylaminosulfonyl group, or a 2-pyridylaminosulfonyl group), an acyl group (for example, an acetyl group, an ethylcarbonyl group, a propylcarbonyl group, a pentylcarbonyl group, a cyclohexylcarbonyl group, an octylcarbonyl group, a 2-ethylhexylcarbonyl group, a dodecylcarbonyl group, a phenylcarbonyl group, a naphthylcarbonyl group, or a pyridylcarbonyl group), an acyloxy group (for example, an acetyloxy group, an ethylcarbonyloxy group, a butylcarbonyloxy group, an octylcarbonyloxy group, a dodecylcarbonyloxy group, or a phenylcarbonyloxy group), an amido group (for example, a methylcarbonylamino group, an ethylcarbonylamino group, a dimethylcarbonylamino group, a propylcarbonylamino group, a pentylcarbonylamino group, a cyclohexylcarbonylamino group, a 2-ethylhexylcarbonylamino group, an octylcarbonylamino group, a dodecylcarbonylamino group, a phenylcarbonylamino group, or a naphthylcarbonylamino group), a carbamoyl group (for example, an aminocarbonyl group, a methylaminocarbonyl group, a dimethylaminocarbonyl group, a propylaminocarbonyl group, a pentylaminocarbonyl group, a cyclohexylaminocarbonyl group, an octylaminocarbonyl group, a 2-ethylhexylaminocarbonyl group, a dodecylaminocarbonyl group, a phenylaminocarbonyl group, a naphthylaminocarbonyl group, or a 2-pyridylaminocarbonyl group), a ureido group (for example, a methylureido group, an ethylureido group, a pentylureido group, a cyclohexylureido group, an octylureido group, a dodecylureido group, a phenylureido group, a
naphthylureido group, or a 2-pyridylaminoureido group), a sulfinyl group (for example, a methylsulfinyl group, an ethylsulfinyl group, a butylsulfinyl group, a cyclohexylsulfinyl group, a 2-ethylhexylsulfinyl group, a dodecylsulfinyl group, a phenylsulfinyl group, a naphthylsulfinyl group, or a 2-pyridylsulfinyl group), an alkylsulfonyl group (for example, a methylsulfonyl group, an ethylsulfonyl group, a butylsulfonyl group, a cyclohexylsulfonyl group, a 2-ethylhexylsulfonyl group, or a dodecylsulfonyl group), an arylsulfonyl group (for example, a phenylsulfonyl group, a naphthylsulfonyl group, or a 2-pyridylsulfonyl group), an amino group (for example, an amino group, an ethylamino group, a dimethylamino group, a butylamino group, a cyclopentylamino group, a 2-ethylhexylamino group, a dodecylamino group, an anilino group, a naphthylamino group, or a 2-pyridylamino group), a cyano group, a nitro group, and a halogen atom (for example, a chlorine atom, a bromine atom, a fluorine atom, or an iodine atom). These groups may further have similar substituents. R4 is preferably an alkyl group having 1 or more and 4 or less carbon atoms, more preferably a linear alkyl group having 1 or more and 4 or less carbon atoms, still more preferably a methyl group or an ethyl group, and particularly preferably a methyl group. In the above general formula (3), R5 represents a hydrogen atom, an alkoxycarbonyl group, an arylcarbonyl group, an aryloxycarbonyl group, a sulfamoyl group, a sulfinyl group, an alkylsulfonyl group, an arylsulfonyl group, an acyl group, a nitrophenyl group, a halogen atom, or a cyano group.
Specific examples thereof include: an alkoxycarbonyl group such as a methyloxycarbonyl group, an ethyloxycarbonyl group, a butyloxycarbonyl group, an octyloxycarbonyl group, or a dodecyloxycarbonyl group; an arylcarbonyl group such as a phenylcarbonyl group; an aryloxycarbonyl group such as a phenyloxycarbonyl group or a naphthyloxycarbonyl group; a sulfamoyl group of an alkylsulfonylamino group such as an aminosulfonyl group, a methylaminosulfonyl group, a dimethylaminosulfonyl group, a butylaminosulfonyl group, a hexylaminosulfonyl group, a cyclohexylaminosulfonyl group, an octylaminosulfonyl group, or a dodecylaminosulfonyl group, or an arylsulfonylamino group such as a phenylaminosulfonyl group, a 3-methyl-4-dodecyloxy-5-t-butylphenylaminosulfonyl group, a naphthylaminosulfonyl group, or a 2-pyridylaminosulfonyl group; a sulfinyl group such as a methylsulfinyl group, an ethylsulfinyl group, a butylsulfinyl group, a cyclohexylsulfinyl group, a 2-ethylhexylsulfinyl group, a dodecylsulfinyl group, a phenylsulfinyl group, a naphthylsulfinyl group, or a 2-pyridylsulfinyl group; an alkylsulfonyl group such as a methylsulfonyl group, an ethylsulfonyl group, a butylsulfonyl group, a cyclohexylsulfonyl group, a 2-ethylhexylsulfonyl group, or a dodecylsulfonyl group; an arylsulfonyl group such as a phenylsulfonyl group, a 4-methylphenylsulfonyl group, a naphthylsulfonyl group, or a 2-pyridylsulfonyl group; an acyl group such as an acetyl group, an ethylcarbonyl group, a propylcarbonyl group, a pentylcarbonyl group, a hexylcarbonyl group, a cyclohexylcarbonyl group, an octylcarbonyl group, a 2-ethylhexylcarbonyl group, a dodecylcarbonyl group, a phenylcarbonyl group, a naphthylcarbonyl group, or a pyridylcarbonyl group; a halogen atom (for example, a chlorine atom, a bromine atom, a fluorine atom, or an iodine atom); and a cyano group.
R5is preferably an alkoxycarbonyl group (preferably having 2 or more and 10 or less carbon atoms), an arylcarbonyl group (preferably having 2 or more and 10 or less carbon atoms), an alkylsulfonyl group (preferably having 7 or more and 10 or less carbon atoms), an arylsulfonyl group (preferably having 6 or more and 10 or less carbon atoms), an acyl group (preferably having 2 or more and 10 or less carbon atoms), or a cyano group, more preferably an alkoxycarbonyl group (preferably having 2 or more and 10 or less carbon atoms), an acyl group (preferably having 2 or more and 10 or less carbon atoms), or a cyano group, and still more preferably a cyano group. In the above general formula (3), R6represents a substituted or unsubstituted aromatic hydrocarbon-containing group having 9 or more and 120 or less carbon atoms. Here, the aromatic hydrocarbon-containing group having 9 or more and 120 or less carbon atoms refers to a group having 9 or more and 120 or less carbon atoms in R6and having an aromatic hydrocarbon structure at any position in R6. Examples of the aromatic hydrocarbon structure include an aryl group (for example, a phenyl group or a naphthyl group). For example, when the aromatic hydrocarbon structure is a phenyl group, the phenyl group forms R6together with any substituent having 3 or more carbon atoms. In this case, R6may have three or more substituents each having one carbon atom, or may have one or more substituents each having one carbon atom and one or more substituents each having two carbon atoms. The total number of carbon atoms in R6is preferably 9 or more and 40 or less, more preferably 12 or more and 40 or less, and still more preferably 14 or more and 30 or less. R6is preferably a group represented by the following formula (4). 
In the above general formula (4), L represents a group formed by combining a linear or branched alkylene group having 1 or more and 15 or less carbon atoms with one or more divalent linking groups selected from the group consisting of —SO2O—, —OSO2—, —SO2—, —CO—, —O—, —S—, —SO2NH—, —NHSO2—, —CONH—, —NHCO—, —COO—, and —OOC—. At *, R6is bonded to an oxygen atom adjacent to R6in general formula (2). Specific examples of the linear or branched alkylene group having 1 or more and 15 or less carbon atoms include a methylene group, an ethylene group, a trimethylene group, a tetramethylene group, a propylene group, an ethylethylene group, a pentamethylene group, a hexamethylene group, a 2,2,4-trimethylhexamethylene group, a heptamethylene group, an octamethylene group, a nonamethylene group, a decamethylene group, an undecamethylene group, a dodecamethylene group, a tridecamethylene group, a tetradecamethylene group, and a pentadecamethylene group. L may have a substituent, and examples of the substituent include a group similar to the substituent used for R4in the above general formula (3). The divalent linking group represented by L is preferably an alkylene group or a group containing an alkylene group. The group containing an alkylene group only needs to contain an alkylene group at any position in the divalent linking group represented by L, and specifically refers to a group formed by combining an alkylene group with one or more divalent linking groups selected from the group consisting of —SO2O—, —OSO2—, —SO2—, —CO—, —O—, —S—, —SO2NH—, —NHSO2—, —CONH—, —NHCO—, —COO—, and —OOC—. L is preferably an alkylene group having 1 or more and 10 or less carbon atoms, an —R9—O— group, an —R9—CO— group, an —R9—NHCO— group, an —R9—SO2— group, an —R9—COS— group, an —NH—SO2— group, an —NH—SO2—R9— group, or an —R9—O—R9—O—R9— group, in which —R9— represents an alkylene group having 1 or more and 10 or less carbon atoms. 
R7 represents an aryl group (for example, a phenyl group or a naphthyl group). Specific examples of the divalent linking group represented by L are described below, but the present invention is not limited thereto. At *, L is bonded to an oxygen atom adjacent to R6 in the above general formula (2) or to R7. R6 and R7 each may have a substituent, and examples of the substituent include a group similar to the substituent used for R4 in the above general formula (3). Preferable examples of the substituents for substitution in L, R6, and R7 include an alkyl group (preferably having 1 or more and 20 or less carbon atoms), an alkoxy group (preferably having 1 or more and 20 or less carbon atoms), an aryloxy group, an alkylthio group, an arylthio group, an alkoxycarbonyl group (preferably having 2 or more and 20 or less carbon atoms), an aryloxycarbonyl group, a sulfamoyl group, an acyl group, an acyloxy group, an amide group, an alkylaminocarbonyl group (preferably having 2 or more and 20 or less carbon atoms), a carbamoyl group, an alkylsulfonyl group, an arylsulfonyl group, an amino group, a cyano group, a nitro group, and a halogen atom. More preferable examples thereof include an alkyl group, an alkoxy group, an acyloxy group, an alkoxycarbonyl group, an aryloxycarbonyl group, a sulfamoyl group, an acyl group, an amide group, and a carbamoyl group. Still more preferable examples thereof include an alkyl group, an alkoxy group, an aryloxy group, an alkoxycarbonyl group, an acyloxy group, and an amide group. R7 is preferably a phenyl group, more preferably a phenyl group having a substituent, still more preferably a phenyl group having an alkyl group, an alkoxy group, an aryloxy group, an alkoxycarbonyl group, an acyloxy group, or an amide group, and particularly preferably a phenyl group having an alkyl group or an alkoxy group. R6 is more preferably a group represented by the following general formula (4-1) or (4-2).
In the general formulas (4-1) and (4-2), L and * represent groups synonymous with L and * in the above general formula (4), respectively, Xs each independently represent —O—, —NHCO—, or —COO—, R8 represents a linear or branched alkyl group having 1 or more and 30 or less carbon atoms, and n represents an integer of 0 to 3. R8 is preferably an alkyl group having 1 or more and 20 or less carbon atoms, and more preferably an alkyl group having 1 or more and 10 or less carbon atoms. R8 may have a substituent, and examples of the substituent include a group synonymous with the substituent used for R1 in the above general formula (3). R8 is preferably a linear alkyl group, and more preferably contains only a carbon atom and a hydrogen atom. n is preferably 0 or 1. L is preferably an alkylene group having 1 or more and 10 or less carbon atoms, an —R9—O— group, an —R9—CO— group, an —R9—NHCO— group, an —R9—SO2— group, an —R9—COS— group, an —NH—SO2— group, an —NH—SO2—R9— group, or an —R9—O—R9—O—R9— group, in which —R9— represents an alkylene group having 1 or more and 10 or less carbon atoms. More preferably, L is an alkylene group having 1 or more and 6 or less carbon atoms, or an —R9—O— group (in which R9 preferably has 1 or more and 5 or less carbon atoms). In the above general formula (3), M represents a divalent metal element. Examples of the divalent metal include iron, magnesium, nickel, cobalt, copper, palladium, zinc, vanadium, titanium, indium, and tin. M is preferably magnesium, copper, zinc, cobalt, nickel, iron, vanadium, titanium, or tin chloride (II), more preferably magnesium or copper, and still more preferably copper (Cu) from a viewpoint of reactivity with the colorant represented by formula (2). The metal-containing compound used in the present invention may have a neutral ligand depending on a central metal, and examples of a typical ligand include H2O and NH3. As the metal-containing compound, the following structure is exemplified.
[Table of exemplary compounds 3-1 to 3-47 (Chemical formulas 12 to 15), listing the R4, R5, and R6 substituents for each compound; the structural formulas shown in the original drawings cannot be reproduced in this text.] These metal-containing compounds may be used singly or in combination of two or more types thereof. The content of the metal-containing compound is preferably 1% by mass or more and 15% by mass or less, and more preferably 2% by mass or more and 10% by mass or less with respect to the entire color toner. The metal-containing compound is preferably synthesized by causing a raw material compound represented by the following formula (3-1) to react with a divalent metal compound. Examples of the divalent metal compound used include metal chloride (II), metal acetate (II), and metal perchlorate. R4, R5, and R6 in the following formula (3-1) are similar to R4, R5, and R6 in formula (3), respectively. Conditions for a reaction between the colorant represented by general formula (2) and the metal-containing compound represented by general formula (3) are not particularly limited. A reaction compound can be obtained by mixing the colorant represented by general formula (2) and the metal-containing compound represented by general formula (3) in a solvent, and stirring the mixture, for example, preferably at 50° C. or higher and 95° C. or lower, more preferably at 60° C. or higher and 90° C. or lower, and still more preferably at 75° C. or higher and 87° C.
or lower, preferably for five minutes or more and 250 minutes or less, more preferably for 10 minutes or more and 60 minutes or less, and still more preferably for 15 minutes or more and 30 minutes or less. Specifically, for example, when a dispersion of the metal-containing compound (metal-containing compound fine particle dispersion) is added to a dispersion of the colorant (colorant fine particle dispersion) or a dispersion containing the colorant, such as a resin fine particle dispersion containing the colorant or a wax-containing resin fine particle dispersion, the dispersion temporarily becomes cloudy, but the supernatant becomes transparent with continued stirring. From this, it is considered that the reaction between the colorant represented by general formula (2) and the metal-containing compound represented by general formula (3) is completed and the reaction compound is formed. As the solvent used in the reaction, a solvent used for preparing the toner is preferably used. The colorant represented by general formula (2) (hereinafter, also referred to as colorant (a)) and the metal-containing compound represented by general formula (3) (hereinafter, also referred to as metal-containing compound (b)) react with each other at a ratio of 1:1 (molar ratio) to form an ionic compound, but the molar ratio of colorant (a):metal-containing compound (b) is preferably 1:0.7 to 1.2. The magenta colorant contained in the third toner may contain, in addition to the compound obtained by the reaction between the colorant represented by general formula (2) and the metal-containing compound represented by general formula (3), another magenta colorant. Examples of the magenta colorant can be similar to those described for the first toner, and therefore detailed description thereof will be omitted. The colorant contained in the third toner may contain a colorant other than the magenta colorant.
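The 1:1 reaction stoichiometry and the preferred colorant (a):metal-containing compound (b) charge ratio of 1:0.7 to 1.2 described above can be expressed as a small numeric check. This is a minimal sketch; the function name and argument names are illustrative and not part of the patent.

```python
def molar_ratio_in_preferred_range(colorant_mol, metal_compound_mol):
    """Return True if the metal-containing compound (b) is charged at
    0.7 to 1.2 mol per 1 mol of the colorant (a), the preferred range
    stated above (the nominal reaction stoichiometry is 1:1)."""
    ratio = metal_compound_mol / colorant_mol
    return 0.7 <= ratio <= 1.2
```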
Examples of the colorant other than the magenta colorant include a black colorant, an orange or yellow colorant, a red colorant, and a cyan or green colorant. Examples of these colorants can be similar to those described for the first toner, and therefore detailed description thereof will be omitted. The content and particle size of the colorant contained in the third toner can be similar to those described for the first toner, and therefore detailed description thereof will be omitted.
1-4. Toner Form
[Core-Shell Structure]
For each toner contained in the toner set, a toner particle can be used as it is as the toner. However, each toner may be a toner particle having a multi-layer structure such as a core-shell structure including the toner particle as a core particle and a shell layer covering a surface of the core particle. The shell layer does not have to coat the entire surface of the core particle, and the core particle may be partially exposed. A cross section of the core-shell structure can be confirmed by a known observation means such as a transmission electron microscope (TEM) or a scanning probe microscope (SPM). In the case of the core-shell structure, properties such as the glass transition point, the melting point, and hardness can be made different between the core particle and the shell layer, and it is possible to design a toner particle according to a purpose. For example, on a surface of a core particle containing a binder resin, a colorant, a release agent, and the like and having a relatively low glass transition point (Tg), a resin having a relatively high glass transition point (Tg) is aggregated and fusion-bonded, and a shell layer can thereby be formed. The shell layer preferably contains an amorphous polyester resin.
[Particle Size of Toner Particle]
An average particle size of the toner particles is preferably 3 μm or more and 10 μm or less, and more preferably 5 μm or more and 8 μm or less in terms of a volume-based median diameter (d50).
Within the above range, high reproducibility can be obtained even with a very small dot image at the 1200 dpi level. Note that the average particle size of the toner particles can be controlled by the concentration of a flocculant used for manufacturing, the amount of an organic solvent added, the fusion-bonding time, the composition of the binder resin, and the like. The volume-based median diameter (d50) of the toner particles can be measured using a measuring device in which a computer system equipped with data processing software V 3.51 is connected to a "Multisizer 3" (manufactured by Beckman Coulter, Inc.). Specifically, a measurement sample (toner) is added to a surfactant solution (for the purpose of dispersing the toner particles, for example, a surfactant solution obtained by diluting a neutral detergent containing a surfactant component 10 times with pure water) and allowed to blend in. Thereafter, the resulting solution is subjected to ultrasonic dispersion to prepare a toner particle dispersion. This toner particle dispersion is injected with a pipette into a beaker containing "ISOTON II" (manufactured by Beckman Coulter, Inc.) set in the sample stand until the displayed concentration of the measuring device reaches 8%. Keeping the concentration at this level makes it possible to obtain reproducible measured values. Then, in the measuring device, the count number of measurement particles is set to 25000, the aperture diameter is set to 100 μm, the measurement range of 2 to 60 μm is divided into 256 parts, and a frequency value is calculated. The particle size at which the volume integration fraction, accumulated from the larger-particle side, reaches 50% is taken as the volume-based median diameter (d50). The average circularity of the toner particles is preferably 0.930 or more and 1.000 or less, and more preferably 0.950 or more and 0.995 or less from a viewpoint of enhancing stability of charging characteristics and low-temperature fixability.
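The d50 determination described above (divide the 2 to 60 μm range into channels, then take the particle size at which the volume integration fraction, accumulated from the large-diameter side, reaches 50%) can be sketched numerically. This is a minimal illustration only, assuming pre-binned channel data; the function name and the simplified, non-interpolated channel handling are not part of the Multisizer 3 procedure.

```python
def volume_median_d50(channels):
    """channels: list of (diameter_um, volume_fraction) pairs, e.g. the 256
    channels into which the 2-60 um range is divided in the procedure above.
    Returns the channel diameter at which the cumulative volume fraction,
    accumulated from the large-diameter side, first reaches 50%."""
    total = sum(v for _, v in channels)
    cumulative = 0.0
    # accumulate from the larger sizes, as described in the text
    for diameter, volume in sorted(channels, key=lambda c: c[0], reverse=True):
        cumulative += volume / total
        if cumulative >= 0.5:
            return diameter
    return None
```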
If the average circularity is within the above range, each toner particle is less likely to be crushed. This makes it possible to suppress contamination of a frictional charge imparting member, to stabilize chargeability of the toner, and to enhance image quality of a formed image. The average circularity of the toner particles can be measured using an "FPIA-3000" (manufactured by Sysmex Corporation). Specifically, a measurement sample (toner) is blended with an aqueous solution containing a surfactant, and is subjected to an ultrasonic dispersion treatment for one minute to be dispersed. Thereafter, the resulting solution is photographed using the "FPIA-3000" under the measurement conditions of the HPF (high magnification imaging) mode at an appropriate concentration of 3,000 to 10,000 HPF detection numbers. If the HPF detection number is within the above range, a reproducible measurement value can be obtained. The circularity of each toner particle is calculated from the photographed particle image according to the following formula (I); the circularities of the toner particles are summed up, and the resulting sum is divided by the total number of toner particles to obtain the average circularity.
Circularity = (peripheral length of circle having the same projected area as particle image)/(peripheral length of particle projected image) (I)
1-5. Method for Manufacturing Toner
A method for manufacturing the first toner and the second toner is not particularly limited, and examples thereof include known methods such as a kneading pulverization method, a suspension polymerization method, an emulsion aggregation method, a dissolution suspension method, a polyester elongation method, and a dispersion polymerization method. The emulsion aggregation method is preferably adopted from a viewpoint of uniformity of the particle size, controllability of the shape, and ease of forming the core-shell structure.
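Formula (I) and the averaging procedure described above can be sketched as follows. This is an illustrative computation, not part of the FPIA-3000 software: it uses the geometric fact that a circle of projected area A has a peripheral length of 2·sqrt(π·A); the function names and (area, perimeter) inputs are assumptions.

```python
import math

def circularity(projected_area, perimeter):
    """Formula (I): (peripheral length of the circle having the same
    projected area as the particle image) / (peripheral length of the
    particle projected image). A circle of area A has peripheral length
    2 * sqrt(pi * A), so a perfect circle gives 1.0."""
    return 2.0 * math.sqrt(math.pi * projected_area) / perimeter

def average_circularity(particles):
    """Sum the per-particle circularities and divide by the number of
    particles, as described above. particles: list of (area, perimeter)."""
    return sum(circularity(a, p) for a, p in particles) / len(particles)
```

For a circle the result is exactly 1.0; for a square of side 1 (area 1, perimeter 4) it is sqrt(π)/2 ≈ 0.886, illustrating how deviation from a circular projection lowers the value.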
In addition, the emulsion aggregation method is preferably used from a viewpoint that it is easy to control the crystalline polyester resin in the toner particles so as to be located at a desired position by utilizing the hydrophobicity of the crystalline polyester resin and the release agent.
1-5-1. Emulsion Aggregation Method
The emulsion aggregation method is a method for manufacturing toner particles that includes mixing a dispersion of crystalline resin particles, a dispersion of amorphous resin particles, and a dispersion of colorant particles, aggregating the particles until the particles each have a predetermined particle size, and then fusion-bonding the crystalline resin particles and the amorphous resin particles to each other to perform shape control. Specifically, toner particles are manufactured through the following steps (a) to (g).
(a) Preparation of Each Dispersion
(a-1) Preparation of Crystalline Resin Particle Dispersion
Examples of a method for preparing the crystalline resin particle dispersion include a method of dispersing the obtained crystalline resin in an aqueous medium without using a solvent, and a method of dissolving the crystalline resin in a solvent such as ethyl acetate or methyl ethyl ketone to prepare a solution, emulsifying and dispersing the solution in an aqueous medium using a disperser, and removing the solvent therefrom. In the present invention, the "aqueous medium" refers to a medium containing at least 50% by mass of water, and examples of a component other than water include an organic solvent that dissolves in water, such as methanol, ethanol, isopropanol, acetone, dimethylformamide, methyl cellosolve, or tetrahydrofuran. Among these solvents, it is preferable to use an alcohol-based organic solvent such as methanol, ethanol, or isopropanol, which is an organic solvent that does not dissolve a resin. Preferably, only water is used as the aqueous medium.
When the crystalline resin contains a crystalline polyester resin and its structure contains a carboxy group, ammonia, sodium hydroxide, or the like may be added in order to ion-dissociate the carboxy group and stably emulsify the resulting ions in an aqueous phase to smoothly promote emulsification. Furthermore, a dispersion stabilizer may be dissolved in the aqueous medium, and a surfactant, resin particles, or the like may be added for the purpose of improving dispersion stability of oil droplets. As the dispersion stabilizer, a known stabilizer can be used. For example, a stabilizer soluble in an acid or an alkali, such as tricalcium phosphate, is preferably used, or a stabilizer that can be decomposed by an enzyme is preferably used from an environmental point of view. As the surfactant, known anionic surfactants, cationic surfactants, nonionic surfactants, and amphoteric surfactants can be used. Examples of the resin particles for improving dispersion stability include polymethyl methacrylate resin particles, polystyrene resin particles, and polystyrene-acrylonitrile resin particles. The dispersion treatment described above can be performed by utilizing mechanical energy. The disperser is not particularly limited, and examples thereof include a homogenizer, a low-speed shear disperser, a high-speed shear disperser, a frictional disperser, a high-pressure jet type disperser, an ultrasonic disperser, a high-pressure impact type disperser (Ultimizer), and an emulsification disperser. During dispersion, the solution is preferably heated. Heating conditions are not particularly limited, but are usually 60° C. or higher and 200° C. or lower. The volume-based median diameter of the crystalline resin particles in the crystalline resin particle dispersion prepared in this way is preferably 60 nm or more and 1000 nm or less, and more preferably 80 nm or more and 500 nm or less.
Note that the median diameter can be controlled, for example, by the magnitude of mechanical energy during emulsification and dispersion. The content of the crystalline resin particles in the crystalline resin particle dispersion is preferably 10% by mass or more and 50% by mass or less, and more preferably 15% by mass or more and 40% by mass or less with respect to the entire dispersion. Within such a range, spread of the particle size distribution can be suppressed, and toner characteristics can be improved.
(a-2) Preparation of Amorphous Resin Particle Dispersion
For example, when emulsion polymerization is performed in an aqueous medium to obtain an amorphous resin, the liquid after the polymerization reaction can be used as it is as an amorphous resin particle dispersion. It is also possible to use a method of pulverizing the isolated amorphous resin as necessary, and then dispersing the amorphous resin in an aqueous medium using an ultrasonic disperser or the like in the presence of a surfactant. Examples of the aqueous medium and the surfactant are similar to those in (a-1) Preparation of Crystalline Resin Particle Dispersion above. The volume-based median diameter of the amorphous resin particles in the amorphous resin particle dispersion is preferably 60 nm or more and 1000 nm or less, and more preferably 80 nm or more and 500 nm or less. Note that the median diameter can be controlled, for example, by the magnitude of mechanical energy during polymerization. The content of the amorphous resin particles in the amorphous resin particle dispersion is preferably 10% by mass or more and 50% by mass or less, and more preferably 15% by mass or more and 40% by mass or less with respect to the entire dispersion. Within such a range, spread of the particle size distribution can be suppressed, and toner characteristics can be improved. In the present invention, the amorphous resin particle dispersion may contain a release agent.
The content of the release agent with respect to the amorphous resin particle dispersion is preferably 2% by mass or more and 20% by mass or less, and more preferably 5% by mass or more and 15% by mass or less.
(a-3) Preparation of Colorant Particle Dispersion
A colorant is dispersed in the form of fine particles in an aqueous medium to prepare a colorant particle dispersion. Examples of the aqueous medium used are similar to those in (a-1) Preparation of Crystalline Resin Particle Dispersion above. A surfactant, resin particles, or the like may be added to the aqueous medium for the purpose of improving dispersion stability. The colorant can be dispersed by a disperser using mechanical energy, and examples of the disperser are similar to those in (a-1) above. The volume-based median diameter of the colorant particles in the colorant particle dispersion is preferably 10 nm or more and 300 nm or less. The content of the colorant in the colorant particle dispersion with respect to the entire dispersion is preferably 5% by mass or more and 45% by mass or less, and more preferably 10% by mass or more and 30% by mass or less. Within such a range, there is an effect of ensuring color reproducibility.
(b) Aggregation and Fusion-Bonding
A dispersion of the crystalline resin particles, a dispersion of the amorphous resin particles, and a dispersion of the colorant particles are mixed in an aqueous medium, and these particles are dispersed in the aqueous medium. Thereafter, a flocculant is added to the resulting dispersion. The resulting mixture is heated at a temperature equal to or higher than the glass transition point of the amorphous resin particles to aggregate the particles and fusion-bond the particles to each other. The flocculant used in the present invention is not particularly limited, but is preferably selected from metal salts such as an alkali metal salt and a group 2 metal salt.
Examples of the metal salts include: a monovalent metal salt of sodium, potassium, or lithium; a divalent metal salt of calcium, magnesium, manganese, or copper; and a trivalent metal salt of iron or aluminum. Specific examples of the metal salt include sodium chloride, potassium chloride, lithium chloride, calcium chloride, magnesium chloride, zinc chloride, copper sulfate, magnesium sulfate, manganese sulfate, and aluminum sulfate. Among these metal salts, a divalent or trivalent metal salt is particularly preferably used because aggregation can be promoted with a smaller amount thereof. These flocculants can be used singly or in combination of two or more types thereof. The amount of the flocculant used is not particularly limited, but is preferably 2% by mass or more and 30% by mass or less, and more preferably 5% by mass or more and 20% by mass or less with respect to the solid content of the binder resin constituting the toner particles from a viewpoint of toner particle size controllability. In the aggregation step, the temperature is preferably raised rapidly by heating after the flocculant is added, and the temperature rising rate is preferably 0.05° C./min or more. An upper limit of the temperature rising rate is not particularly limited, but is preferably 15° C./min or less from a viewpoint of suppressing generation of coarse particles due to rapid progress of fusion-bonding. Furthermore, after the temperature of the aggregation dispersion reaches a desired temperature, it is important to maintain the temperature of the aggregation dispersion for a certain period of time, preferably until the volume-based median diameter reaches 4.5 to 7.0 μm, and to continue fusion-bonding.
(c) Aging
An aging treatment is performed by heating and stirring the system in which the associated particles are dispersed, and adjusting the heating temperature, stirring speed, heating time, and the like until the associated particles have a desired circularity.
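The preferred heating-rate window for the aggregation step given above (0.05 °C/min or more to raise the temperature rapidly, 15 °C/min or less to suppress coarse particles) can be checked with a one-line helper. A minimal sketch; the function name is illustrative and not part of the patent.

```python
def ramp_rate_ok(delta_temp_c, minutes):
    """Check the aggregation-step heating rate against the preferred window
    described above: 0.05 C/min or more and 15 C/min or less."""
    rate = delta_temp_c / minutes
    return 0.05 <= rate <= 15.0
```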
The aging treatment is performed as necessary.
(d) Cooling
The dispersion of toner particles is cooled. As a condition of the cooling treatment, cooling is preferably performed at a cooling rate of 1 to 20° C./min. A specific method for the cooling treatment is not particularly limited, and examples thereof include a method of introducing a refrigerant from the outside of the reaction container for cooling and a method of directly putting cold water into the reaction system for cooling.
(e) Filtration and Cleaning
The toner particles are separated from the cooled dispersion of the toner particles by solid-liquid separation. Deposits such as the surfactant or the flocculant are removed from the toner cake (an aggregate obtained by aggregating the wet toner particles in the form of a cake), and the residue is cleaned. A method for the solid-liquid separation is not particularly limited, but examples thereof include a centrifugal separation method, a reduced pressure filtration method using a Nutsche filter or the like, and a filtration method using a filter press or the like.
(f) Drying
The cleaned toner cake is dried. A drying method is not particularly limited, but examples thereof include drying methods using a spray dryer, a vacuum freeze dryer, or a reduced pressure dryer. A stationary shelf dryer, a mobile dryer, a fluidized bed dryer, a rotary dryer, a stirring dryer, or the like is preferably used from a viewpoint of manufacturing stability.
(g) Addition of External Additive
An external additive is added to the toner particles as necessary. As a device for mixing the external additive, a mechanical mixing device such as a Henschel mixer, a coffee mill, or a sample mill can be used.
2. Image Forming Method
The toner set according to an embodiment of the present invention can be used in a known electrophotographic image forming method. The first toner is preferably attached to a position farthest from a surface of a recording medium from a viewpoint of tacking suppression.
Note that an image formed by the image forming method according to an embodiment of the present invention may be an image in which the amount of adhesion of the first toner is smaller than the amount of adhesion of a toner other than the first toner, and may be, for example, a secondary color image formed by using the second toner, the third toner, and a small amount of the first toner. An image formed by the image forming method according to an embodiment of the present invention is more preferably an image in which the amount of adhesion of the first toner is larger than that of the above secondary color image from a viewpoint of tacking suppression and wax adhesion suppression, and is, for example, preferably a three-color image formed by using the first toner, the second toner, and the third toner.
2-1. Image Forming Device
The present invention provides an image forming device that forms a superimposed image of color toners, the image forming device including: a charging unit that charges an image carrier; an exposure unit that exposes the image carrier charged by the charging unit to form an electrostatic latent image; a developing unit that supplies a developer containing color toners to the electrostatic latent image formed by the exposure unit and performs development; a primary transfer unit that transfers the developer image formed by the developing unit onto a transfer belt; and a fixing unit that comes into contact with the developer image transferred from the transfer belt onto the transfer unit, heats the developer image, and fixes the developer image to a recording medium, in which the electrostatic latent image-developing toner set for forming the superimposed image is the above-described toner set.
For description of a general image forming device itself other than the color toner set, which is a characteristic of the present invention, for example, JP 2016-184164 A and JP 2017-207639 A are referred to, and the disclosure contents thereof are incorporated as a whole by reference.
Example
Hereinafter, the present invention will be described in more detail with reference to an Example, but these descriptions do not limit the scope of the present invention.
[Preparation of Amorphous Resin Particle Dispersion]
<Preparation of Vinyl-Based Resin Particle Dispersion [D-S1]>
(First Stage Polymerization)
In a 5 L reaction container equipped with a stirrer, a temperature sensor, a cooling tube, and a nitrogen introducing device, 8 parts by mass of sodium dodecyl sulfate and 3000 parts by mass of deionized water were put. While the resulting mixture was stirred at a stirring speed of 230 rpm under a nitrogen stream, the internal temperature was raised to 80° C. After the temperature was raised, a solution obtained by dissolving 10 parts by mass of potassium persulfate in 200 parts by mass of deionized water was added thereto, the liquid temperature was set to 80° C. again, and a mixed solution of the following monomers was added dropwise thereto over one hour.
Styrene: 480 parts by mass
n-Butyl acrylate: 250 parts by mass
Methacrylic acid: 68 parts by mass
After the above mixed solution was added dropwise, the resulting mixture was heated and stirred at 80° C. for two hours to polymerize the monomers, thus preparing a particle dispersion (a).
(Second Stage Polymerization)
In a 5 L reaction container equipped with a stirrer, a temperature sensor, a cooling tube, and a nitrogen introducing device, 1100 parts by mass of deionized water and 55 parts by mass (in terms of solid content) of the particle dispersion (a) prepared by the first stage polymerization were put, and the resulting mixture was heated to 87° C.
Thereafter, a mixed solution obtained by dissolving the following monomers, chain transfer agent, and release agent at 85° C. was mixed and dispersed for 10 minutes using a mechanical disperser "CLEARMIX" (manufactured by M Technique Co., Ltd.) having a circulation path, to prepare a dispersion containing emulsified particles (oil droplets). This dispersion was added to the 5 L reaction container. A polymerization initiator solution obtained by dissolving 5 parts by mass of potassium persulfate in 103 parts by mass of deionized water was added to the dispersion, and the system was heated and stirred at 87° C. for one hour to perform polymerization, thus preparing a particle dispersion (b).
Styrene: 257 parts by mass
2-Ethylhexyl acrylate: 95 parts by mass
Methacrylic acid: 38 parts by mass
Chain transfer agent: n-octyl-3-mercaptopropionate: 4 parts by mass
Release agent 1: HNP0190 (manufactured by Nippon Seiro Co., Ltd.): 131 parts by mass
(Third Stage Polymerization)
To the particle dispersion (b) obtained by the second stage polymerization, a solution obtained by dissolving 7 parts by mass of potassium persulfate in 158 parts by mass of deionized water was further added. Furthermore, under a temperature condition of 84° C., a mixed solution of the following monomers and chain transfer agent was added dropwise thereto over 90 minutes.
Styrene: 370 parts by mass
n-Butyl acrylate: 165 parts by mass
Methacrylic acid: 40 parts by mass
Methyl methacrylate: 47 parts by mass
n-Octyl-3-mercaptopropionate: 9 parts by mass
After completion of the dropwise addition, the resulting mixture was heated and stirred for two hours to perform polymerization, and then cooled to 28° C. to obtain a vinyl-based resin fine particle dispersion [D-S1].
<Preparation of Vinyl-Based Resin Fine Particle Dispersions [D-S2] to [D-S5]>
Vinyl-based resin fine particle dispersions [D-S2] to [D-S5] were obtained in a similar manner to <Preparation of Vinyl-Based Resin Particle Dispersion [D-S1]> above except that the type of release agent in the second stage polymerization was changed as described in Table 1. Note that release agent 4 was purified by subjecting release agent 2 to a centrifugal thin film distillation device (CEH-400BII, manufactured by ULVAC, Inc.).
TABLE 1
Dispersion | Release agent No. | Product name or substance name | Type of release agent | Melting point [° C.]
D-S1 | 1 | HNP0190 (manufactured by Nippon Seiro Co., Ltd.) | Hydrocarbon wax | 85.2
D-S2 | 2 | NCM9395 (manufactured by Sasol) | Hydrocarbon wax | 90.0
D-S3 | 3 | HiMic-1080 (manufactured by Nippon Seiro Co., Ltd.) | Hydrocarbon wax | 78.0
D-S4 | 4 | NCM9395 (manufactured by Sasol) (*) | Hydrocarbon wax | 92.1
D-S5 | 5 | Ethylene glycol distearate | Ester wax | 81.1
(*) Release agent 4 was manufactured by purifying release agent 2.
<Preparation of Amorphous Polyester Resin Particle Dispersion [D-P1]>
A mixed solution of the following vinyl-based resin monomers, bireactive monomer, and polymerization initiator was put in a dropping funnel.
Styrene: 80 parts by mass
n-Butyl acrylate: 20 parts by mass
Acrylic acid: 10 parts by mass
di-t-Butyl peroxide (polymerization initiator): 16 parts by mass
The following amorphous polyester resin monomers were put in a four-necked flask equipped with a nitrogen introduction tube, a dehydration tube, a stirrer, and a thermocouple, and heated to 170° C. to be dissolved.
2 mol adduct of bisphenol A ethylene oxide: 50 parts by mass
2 mol adduct of bisphenol A propylene oxide: 250 parts by mass
Terephthalic acid: 120 parts by mass
Dodecenyl succinic acid: 46 parts by mass
Under stirring, the mixed solution contained in the dropping funnel was added dropwise to the four-necked flask over 90 minutes, and the mixture was aged for 60 minutes. Thereafter, unreacted monomers were removed under reduced pressure (8 kPa).
Thereafter, 0.4 parts by mass of Ti(OBu)4 as an esterification catalyst was added thereto, and the temperature was raised to 235° C. A reaction was performed at normal pressure (101.3 kPa) for five hours and further performed under reduced pressure (8 kPa) for one hour. Subsequently, the resulting solution was cooled to 200° C., and a reaction was caused under reduced pressure (20 kPa). Thereafter, the solvent was removed to obtain an amorphous polyester resin [A1]. The amorphous polyester resin [A1] thus obtained had a weight average molecular weight (Mw) of 24000 and an acid value of 18.2 mgKOH/g. 108 parts by mass of the obtained amorphous polyester resin [A1] was stirred at 70° C. for 30 minutes in 64 parts by mass of methyl ethyl ketone and dissolved therein. Next, 3.4 parts by mass of a 25% by mass sodium hydroxide aqueous solution was added to this solution. This solution was put in a reaction container equipped with a stirrer, and 210 parts by mass of water warmed to 70° C. was added dropwise thereto and mixed over 70 minutes while being stirred. The liquid in the container became cloudy during the dropwise addition. After the entire amount of water was added dropwise, a uniformly emulsified state was obtained. As a result of measurement of the particle sizes of the oil droplets of this emulsion using "Nanotrack Wave" (manufactured by Microtrack Bell), the volume average particle size thereof was 90 nm. Subsequently, while the temperature of the emulsion was maintained at 70° C., the emulsion was stirred under a reduced pressure of 15 kPa (150 mbar) for three hours using a diaphragm type vacuum pump "V-700" (manufactured by BUCHI) to remove methyl ethyl ketone by distillation, thus preparing an amorphous polyester resin fine particle dispersion [D-A1] in which fine particles of the amorphous polyester resin were dispersed.
As a result of measurement using the above particle size distribution measuring device, the volume average particle size of the amorphous polyester resin in the amorphous polyester resin particle dispersion was 94 nm. The weight average molecular weight (Mw) of each resin in the present Example can be measured using an apparatus obtained by connecting a gel permeation chromatograph (HLC-8320GPC: manufactured by Tosoh Corporation), one "TSK gel guardcolumn Super HZ-L" column (manufactured by Tosoh Corporation), and three "TSK gel Super HZM-M" columns (manufactured by Tosoh Corporation). Specifically, the columns are stabilized at ° C., and tetrahydrofuran (THF) as a carrier solvent is caused to flow through the columns at this temperature at a flow rate of 0.35 mL/min. A THF solution of a measurement sample (resin) adjusted to a sample concentration of 1 mg/mL is treated with a roll mill at room temperature for 10 minutes and passed through a membrane filter having a pore size of 0.2 μm to obtain a sample solution. 10 μL of this sample solution is injected into the device together with the above carrier solvent, and the refractive index is detected using a refractive index detector (RI detector). Subsequently, based on a calibration curve created using polystyrene standard samples having monodispersed molecular weight distributions, a molecular weight distribution of the measurement sample is calculated. The calibration curve is created from 10 samples of the "polystyrene standard sample TSK standard" series manufactured by Tosoh Corporation: "A-500", "F-1", "F-10", "F-80", "F-380", "A-2500", "F-4", "F-40", "F-128", and "F-700". Note that the data collection interval in sample analysis is 300 ms. The acid value of each resin in the present Example was determined by the following methods (1) to (3) in accordance with JIS K0070-1992.
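The calibration-curve conversion described above (RI chromatogram plus polystyrene standards to a weight average molecular weight) can be sketched as follows. The calibration points and chromatogram values here are hypothetical placeholders, not actual data for the TSK standards named in the text; only the Mw = Σ(hᵢ·Mᵢ)/Σhᵢ bookkeeping is the point.

```python
import math

# Hypothetical polystyrene calibration points: (retention time [min], molecular weight).
# In practice these come from the 10 monodispersed TSK standard samples.
CALIBRATION = [(12.0, 700_000), (14.0, 80_000), (16.0, 4_000), (18.0, 500)]

def log_mw_at(t):
    """Linearly interpolate log10(MW) at retention time t from the calibration curve."""
    pts = sorted(CALIBRATION)
    for (t0, m0), (t1, m1) in zip(pts, pts[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return math.log10(m0) + f * (math.log10(m1) - math.log10(m0))
    raise ValueError("retention time outside calibration range")

def weight_average_mw(chromatogram):
    """Mw = sum(h_i * M_i) / sum(h_i) over RI detector heights h_i at times t_i."""
    num = den = 0.0
    for t, h in chromatogram:
        m = 10 ** log_mw_at(t)
        num += h * m
        den += h
    return num / den
```

A real workflow would also baseline-correct the chromatogram and report Mn and the polydispersity from the same sums.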
(1) Preparation of Reagent 1.0 g of phenolphthalein was dissolved in 90 mL of ethyl alcohol (95% by volume), and deionized water was added thereto to adjust the volume to 100 mL, thus preparing a phenolphthalein solution. Next, 7 g of JIS special grade potassium hydroxide was dissolved in 5 mL of deionized water, and ethyl alcohol (95% by volume) was added thereto to adjust the volume to 1 liter. The resulting solution was put in an alkali-resistant container so as not to come into contact with carbon dioxide, left for three days, and then filtered to prepare a potassium hydroxide solution. Standardization was performed in accordance with the description in JIS K0070-1966. (2) Main Test 2.0 g of the crushed sample was precisely weighed into a 200 mL Erlenmeyer flask, and 100 mL of a toluene:ethanol = 2:1 mixed solution was added thereto. The sample was dissolved therein over five hours. Subsequently, several drops of the phenolphthalein solution prepared above were added thereto as an indicator, and titration was performed with the prepared potassium hydroxide solution. Note that the end point of the titration was determined as the time point when the light red color of the indicator persisted for about 30 seconds. (3) Blank Test A similar operation to that in the main test is performed except that no sample is used (that is, only the toluene:ethanol = 2:1 mixed solution is used). An acid value was calculated by substituting the titration results of the main test and the blank test into the following formula (1).
A = [(B − C) × f × 5.6]/S   … formula (1)

A: Acid value (mgKOH/g)
B: Amount of potassium hydroxide solution added during the main test (mL)
C: Amount of potassium hydroxide solution added during the blank test (mL)
f: Factor of the 0.1 mol/L potassium hydroxide ethanol solution
S: Mass of the sample (g)

25 mL of 0.1 mol/L hydrochloric acid was put in an Erlenmeyer flask, several drops of the phenolphthalein solution were added thereto, and titration was performed with the potassium hydroxide solution. The factor of the potassium hydroxide ethanol solution was determined from the amount of the potassium hydroxide solution required for neutralization. Note that the hydrochloric acid used was prepared in accordance with JIS K8001-1998. [Preparation of Crystalline Resin Fine Particle Dispersion] <Synthesis of Crystalline Resin [C1]> The following raw material monomers of an addition polymerization-based resin (styrene acrylic resin: StAc) unit containing a bireactive monomer and the following polymerization initiator were put in a dropping funnel.

Styrene: 40 parts by mass
n-Butyl acrylate: 16 parts by mass
Acrylic acid: 3.5 parts by mass
Polymerization initiator (di-t-butyl peroxide): 8 parts by mass

In addition, the following raw material monomers of a polycondensation-based resin (crystalline polyester resin: CPEs) unit were put in a four-necked flask equipped with a nitrogen introduction tube, a dehydration tube, a stirrer, and a thermocouple, and heated to 170° C. to be dissolved therein.

Tetradecanedioic acid: 280 parts by mass
1,4-Butanediol: 105 parts by mass

Subsequently, the above monomers were put in a reaction container equipped with a stirrer, a thermometer, a cooling tube, and a nitrogen gas introduction tube, and the inside of the reaction container was replaced with dry nitrogen gas. To the obtained mixed solution, 0.4 parts by mass of Ti(O-n-Bu)4 was added, and the temperature was raised to 235° C.
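Formula (1) above is a direct arithmetic relation, so it can be captured as a small helper. The function name and the numbers in the assertion are illustrative, not measured values from this Example.

```python
def acid_value(b_ml, c_ml, factor, sample_g):
    """Acid value A (mgKOH/g) per formula (1): A = [(B - C) x f x 5.6] / S.

    b_ml:     amount B of KOH solution consumed in the main test (mL)
    c_ml:     amount C of KOH solution consumed in the blank test (mL)
    factor:   factor f of the 0.1 mol/L KOH ethanol solution
    sample_g: mass S of the sample (g)
    """
    return (b_ml - c_ml) * factor * 5.6 / sample_g
```

For example, with illustrative titration volumes B = 8.5 mL, C = 2.0 mL, f = 1.0, and S = 2.0 g, the formula gives 18.2 mgKOH/g.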
A reaction was caused at normal pressure (101.3 kPa) for five hours and further caused under reduced pressure (8 kPa) for one hour. Subsequently, the obtained reaction solution was cooled to 200° C., and then a reaction was caused under reduced pressure (20 kPa) such that an acid value calculated by the above measurement method was 20.0 mgKOH/g after introduction of the crystal nucleating agent. Subsequently, the pressure in the reaction container was gradually released to return to normal pressure. Thereafter, 20.4 parts by mass of stearic acid was added thereto as a crystal nucleating agent, and a reaction was caused under normal pressure at a temperature of 200° C. for 1.5 hours. Thereafter, the pressure in the reaction container was reduced to 5 kPa or less at 200° C., and a reaction was caused for 2.5 hours to obtain a crystalline resin [C1]. The obtained crystalline resin had a weight average molecular weight (Mw) of 11500 and an acid value of 20.0 mgKOH/g. <Preparation of Crystalline Resin Fine Particle Dispersion [D-C1]> 174 parts by mass of the crystalline resin obtained by the above method was put in 102 parts by mass of methyl ethyl ketone, stirred at 75° C. for 30 minutes, and dissolved therein. Next, 3.1 parts by mass of a 25% by mass sodium hydroxide aqueous solution was added to this solution. This solution was put in a reaction container equipped with a stirrer, and 375 parts by mass of water warmed to 70° C. was added dropwise thereto and mixed over 70 minutes while being stirred. The liquid in the container became cloudy during the dropwise addition. After the entire amount of water was added dropwise, a uniformly emulsified state was obtained. Subsequently, while the emulsion was maintained at 70° C., the emulsion was stirred under reduced pressure of 15 kPa (150 mbar) for three hours using a diaphragm type vacuum pump “V-700” (manufactured by BUCHI) to remove methyl ethyl ketone by distillation. 
Thereafter, the residue was cooled at a cooling rate of 6° C./min to prepare a crystalline resin fine particle dispersion [D-C1] in which fine particles of the crystalline resin were dispersed. As a result of measurement using the above particle size distribution measuring device, the volume average particle size of the crystalline resin fine particles in the crystalline resin fine particle dispersion [D-C1] was 202 nm. <Synthesis of Crystalline Resin [C2]> As compared with the above <Synthesis of Crystalline Resin [C1]>, the amount of 1,4-butanediol used was changed to 92 parts by mass, the crystal nucleating agent was changed to 19.8 parts by mass of stearyl alcohol, and the reaction time was appropriately changed such that, after the crystal nucleating agent was introduced and reacted completely (100%), the acid value of the crystalline resin was the value illustrated in Table 2. A crystalline resin [C2] was synthesized in a similar manner to <Synthesis of Crystalline Resin [C1]> except for the above changes. <Synthesis of Crystalline Resin [C4]> As compared with the above <Synthesis of Crystalline Resin [C1]>, the amount of 1,4-butanediol used was changed to 130 parts by mass, the amount of stearic acid used was changed to 21.7 parts by mass, and the reaction time was appropriately changed such that, after the crystal nucleating agent was introduced and reacted completely (100%), the acid value of the crystalline resin was the value illustrated in Table 2. A crystalline resin [C4] was synthesized in a similar manner to <Synthesis of Crystalline Resin [C1]> except for the above changes. <Synthesis of Crystalline Resin [C7]> The following monomers were put in a reaction container equipped with a stirrer, a thermometer, a condenser, and a nitrogen gas introduction tube, and the inside of the reaction container was replaced with dry nitrogen gas.
Tetradecanedioic acid: 280 parts by mass
1,4-Butanediol: 105 parts by mass

To the obtained mixed solution, 0.4 parts by mass of Ti(O-n-Bu)4 was added, and the temperature was raised to 235° C. A reaction was caused at normal pressure (101.3 kPa) for five hours and further caused under reduced pressure (8 kPa) for one hour. Subsequently, the obtained reaction solution was cooled to 200° C., and then a reaction was caused under reduced pressure (20 kPa) such that the acid value calculated by the above measurement method was 21.4 mgKOH/g after introduction of the crystal nucleating agent. Subsequently, the pressure in the reaction container was gradually released to return to normal pressure. Thereafter, 17.4 parts by mass of stearic acid was added thereto as a crystal nucleating agent, and a reaction was caused under normal pressure at a temperature of 200° C. for 1.5 hours. Thereafter, the pressure in the reaction container was reduced to 5 kPa or less at 200° C., and a reaction was caused for 2.5 hours to obtain a crystalline resin [C7]. The crystalline resin [C7] had a weight average molecular weight (Mw) of 10800 and an acid value of 21.4 mgKOH/g. <Synthesis of Crystalline Resin [C10]> In a reactor equipped with a stirrer and a thermometer, 1000 parts by mass of isophorone diisocyanate, 830 parts by mass of 1,4-butanediol adipate (a polyester diol formed from 1,4-butanediol and adipic acid), 96.3 parts by mass of stearic acid as a crystal nucleating agent, and 250 parts by mass of methyl ethyl ketone were put while nitrogen was introduced thereinto. Thereafter, a urethanization reaction was caused at 80° C. for six hours. Next, 2128 parts by mass of deionized water was added thereto while being stirred, and then the pressure inside the reaction system was reduced to remove the solvent, thus obtaining a fine particle dispersion of a crystalline resin [C10]. The crystalline resin [C10] had a weight average molecular weight (Mw) of 14100.
<Synthesis of Crystalline Resin [C15]> The following raw material monomers of an addition polymerization-based resin (styrene acrylic resin: StAc) unit containing a bireactive monomer and the following polymerization initiator were put in a dropping funnel.

Styrene: 40 parts by mass
n-Butyl acrylate: 16 parts by mass
Acrylic acid: 4 parts by mass
Polymerization initiator (di-t-butyl peroxide): 8 parts by mass

In addition, the following raw material monomers of a polycondensation-based resin (crystalline polyester resin: CPEs) unit were put in a four-necked flask equipped with a nitrogen introduction tube, a dehydration tube, a stirrer, and a thermocouple, and heated to 170° C. to be dissolved therein.

Tetradecanedioic acid: 280 parts by mass
1,4-Butanediol: 105 parts by mass

Subsequently, the above monomers were put in a reaction container equipped with a stirrer, a thermometer, a cooling tube, and a nitrogen gas introduction tube, and the inside of the reaction container was replaced with dry nitrogen gas. To the obtained mixed solution, 0.4 parts by mass of Ti(O-n-Bu)4 was added, and the temperature was raised to 235° C. A reaction was caused at normal pressure (101.3 kPa) for five hours and further caused under reduced pressure (8 kPa) for one hour. Subsequently, the obtained reaction solution was cooled to 200° C., and then a reaction was caused under reduced pressure (20 kPa) such that the acid value calculated by the above measurement method was 20.3 mgKOH/g, to obtain a crystalline resin [C15].
<Synthesis of Other Crystalline Resins> As compared with the above <Synthesis of Crystalline Resin [C1]>, the type of crystal nucleating agent and the amount of the crystal nucleating agent moiety with respect to the amount of the crystalline resin excluding the crystal nucleating agent were changed as illustrated in Table 2, and the reaction time was appropriately changed such that, after the crystal nucleating agent was introduced and reacted completely (100%), the acid value of the crystalline resin was the value illustrated in Table 2. Crystalline resins [C3], [C5], [C6], [C8], [C9], and [C11] to [C14] were prepared in a similar manner to <Synthesis of Crystalline Resin [C1]> except for the above changes.

TABLE 2
| Type | Crystal structure moiety | Type of crystal nucleating agent moiety | Amount of crystal nucleating agent moiety [% by mass] | HB | Mw | Acid value |
| C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 20.0 |
| C2 | Crystalline PES | Stearyl alcohol | 5 | Present | 12100 | 20.1 |
| C3 | Crystalline PES | Stearic acid | 5 | Present | 15700 | 15.9 |
| C4 | Crystalline PES | Stearic acid | 5 | Present | 830 | 27.1 |
| C5 | Crystalline PES | Stearic acid | 5 | Present | 22000 | 13.1 |
| C6 | Crystalline PES | Stearic acid | 5 | Present | 29200 | 11.2 |
| C7 | Crystalline PES | Stearic acid | 5 | Absent | 10800 | 21.4 |
| C8 | Crystalline PES | Stearic acid | 2 | Present | 13800 | 19.9 |
| C9 | Crystalline PES | Stearic acid | 10 | Present | 9600 | 20.5 |
| C10 | Crystalline polyurethane | Stearic acid | 5 | Present | 14100 | 21.4 |
| C11 | Crystalline PES | Octanoic acid | 5 | Present | 12000 | 20.1 |
| C12 | Crystalline PES | Dotriacontanoic acid | 5 | Present | 11300 | 19.7 |
| C13 | Crystalline PES | Stearic acid | 16 | Present | 6200 | 20.4 |
| C14 | Crystalline PES | Stearic acid | 0.5 | Present | 16500 | 20.1 |
| C15 | Crystalline PES | — | 0 | Present | 17500 | 20.3 |
* Crystalline PES indicates crystalline polyester.
* HB indicates hybrid crystalline polyester resin.
<Preparation of Crystalline Resin Fine Particle Dispersions [D-C2] to [D-C15]> Crystalline resin fine particle dispersions [D-C2] to [D-C15] were obtained in a similar manner to the above <Preparation of Crystalline Resin Fine Particle Dispersion [D-C1]> except that the crystalline resin [C1] was changed to the crystalline resins [C2] to [C15], respectively. [Preparation of Yellow Colorant Fine Particle Dispersion [PY]] While a solution obtained by adding 90 parts by mass of sodium dodecyl sulfate to 1600 parts by mass of deionized water was stirred, 420 parts by mass of C.I. Pigment Yellow 74 was gradually added thereto. The resulting solution was dispersed using a stirrer CLEARMIX (manufactured by M Technique Co., Ltd.) to prepare a yellow colorant fine particle dispersion [PY]. The colorant particles in the dispersion had a volume-based median diameter of 200 nm. [Preparation of Cyan Colorant Fine Particle Dispersion] <Preparation of Cyan Colorant Fine Particle Dispersion [PC1]> While a solution obtained by adding 226 parts by mass of sodium dodecyl sulfate to 1600 parts by mass of deionized water was stirred, 420 parts by mass of silicon phthalocyanine (the compound represented by general formula (1-A)) was gradually added thereto. The resulting solution was dispersed using a stirrer CLEARMIX (manufactured by M Technique Co., Ltd.) to prepare a cyan colorant fine particle dispersion [PC1]. The colorant particles in the dispersion had a volume-based median diameter of 160 nm. <Preparation of Cyan Colorant Fine Particle Dispersion [PC2]> A cyan colorant fine particle dispersion [PC2] was prepared in a similar manner to the above <Preparation of Cyan Colorant Fine Particle Dispersion [PC1]> except that the colorant added was changed from silicon phthalocyanine to copper phthalocyanine (C.I. Pigment Blue 15:3). The colorant particles in the dispersion had a volume-based median diameter of 110 nm.
[Preparation of Magenta Colorant Fine Particle Dispersion] <Preparation of Magenta Colorant Fine Particle Dispersion [PM1]> A magenta colorant fine particle dispersion [PM1] was prepared in a similar manner to the [Preparation of yellow colorant fine particle dispersion [PY]] except that the colorant added was changed from C.I. Pigment Yellow 74 to C.I. Pigment Red 269. The colorant particles in the dispersion had a volume-based median diameter of 250 nm. <Preparation of Magenta Colorant Fine Particle Dispersion [PM2]> A magenta colorant fine particle dispersion [PM2] was prepared in a similar manner to the above [Preparation of yellow colorant fine particle dispersion [PY]] except that the colorant added was changed from C.I. Pigment Yellow 74 to C.I. Pigment Red 122. The colorant particles in the dispersion had a volume-based median diameter of 240 nm. <Preparation of Magenta Colorant Fine Particle Dispersion [PM3]> A magenta colorant fine particle dispersion [PM3] was prepared in a similar manner to the above [Preparation of yellow colorant fine particle dispersion [PY]] except that the colorant added was changed from C.I. Pigment Yellow 74 to a compound obtained by a reaction between the colorant (a) represented by general formula (2) and the metal-containing compound (b) represented by general formula (3) at a molar ratio of 1:1. The colorant particles in the dispersion had a volume-based median diameter of 240 nm. Note that a method for synthesizing the colorant represented by general formula (2) and a method for synthesizing the metal-containing compound (b) represented by general formula (3) are described below. (Synthesis of Colorant (a)) Hereinafter, the colorant (a) was synthesized according to the reaction formula represented by general formula (5). To 1.93 parts by mass of an intermediate 1, 1.53 parts by mass of an intermediate 2 was added, and 50 ml of toluene and 0.53 parts by mass of morpholine were further added thereto while being stirred. 
The resulting mixture was heated under reflux, and a reaction was caused for eight hours while the mixture was dehydrated with an ester tube. After completion of the reaction, the reaction solution was concentrated, purified by column chromatography, and recrystallized from a mixed solvent of ethyl acetate/hexane to obtain 2.71 parts by mass of DX-1. DX-1 was identified by mass spectrometry, 1H-NMR, and IR spectra, and was confirmed to be the target product. The purity of the obtained colorant (a) was 98% as a result of analysis by 1H-NMR. A visible absorption spectrum of the colorant (a) was measured (solvent: ethyl acetate). As a result, the maximum absorption wavelength was 535 nm, and the molar absorption coefficient was 71000 (L/mol·cm). (Synthesis of Metal-Containing Compound (b) (Metal: Cu)) Hereinafter, the metal-containing compound (b) was synthesized according to the reaction formula represented by general formula (6). —Synthesis of Compound B— To a 500 ml three-necked flask, 90.0 parts by mass of compound A, 21.5 parts by mass of cyanoacetic acid, 1.31 parts by mass of p-toluenesulfonic acid monohydrate, and 300 ml of toluene were added. The resulting mixture was heated under reflux using an ester tube for two hours while being dehydrated. The solvent was distilled off under reduced pressure, and then 500 ml of acetone was added thereto to perform recrystallization, thus obtaining 94.4 parts by mass of compound B. —Synthesis of Compound C— To a 100 ml three-necked flask, 5 parts by mass of compound B, 25 ml of toluene, 3.30 parts by mass of triethylamine, and 2.42 parts by mass of calcium chloride were added, and the resulting mixture was heated to 80° C. and stirred. After the internal temperature reached 80° C., 2.10 parts by mass of acetyl chloride was added dropwise thereto over one hour. After completion of the dropwise addition, the resulting mixture was cooled, and liquid separation was performed with dilute hydrochloric acid.
Thereafter, the mixture was washed with pure water until neutral, and the solvent was distilled off. 50 ml of toluene and 50 ml of ethyl acetate were added thereto to perform recrystallization, thus obtaining 4.30 parts by mass of compound C. —Synthesis of Metal-Containing Compound (b)— To a 200 ml three-necked flask, 2.00 parts by mass of compound C and 80 ml of acetone were added, and the resulting mixture was heated and stirred until the internal temperature reached 55° C. Thereafter, a solution obtained by dissolving 0.55 g of copper acetate monohydrate in 5 ml of a MeOH/water = 5/1 solvent was added dropwise thereto over 30 minutes. After completion of the dropwise addition, the precipitated solid was filtered off to obtain 1.40 parts by mass of the metal-containing compound (b) (metal: Cu). The obtained metal-containing compound (b) (metal: Cu) had a transmittance of 98% at 500 nm (solvent: THF) and a purity of 98%. [Manufacturing of Toner] <Manufacturing of Yellow Toner [Ye1]> Into a reaction container equipped with a stirrer, a temperature sensor, and a cooling tube, 480 parts by mass (in terms of solid content) of the vinyl-based resin fine particle dispersion [D-S1] and 350 parts by mass of deionized water were put. The pH was adjusted to 10 by adding a 5 mol/liter sodium hydroxide aqueous solution at room temperature (25° C.). Furthermore, 36.4 parts by mass (in terms of solid content) of the yellow colorant fine particle dispersion [PY] was added thereto, and 80 parts by mass of a 50% by mass magnesium chloride aqueous solution was added thereto over 10 minutes at 30° C. while being stirred. The obtained dispersion was allowed to stand for five minutes. Thereafter, the temperature thereof was raised to 80° C. over 60 minutes, and after reaching 80° C., 60 parts by mass (in terms of solid content) of the crystalline resin fine particle dispersion [D-C1] was added thereto over 20 minutes.
A stirring speed was adjusted such that the growth rate of the particle size was 0.01 μm/min, and the particles were grown until the volume-based median diameter measured by a Multisizer 3 (manufactured by Beckman Coulter) reached 6.0 μm. Subsequently, 60 parts by mass (in terms of solid content) of the amorphous polyester resin fine particle dispersion [D-A1] was added thereto over 30 minutes. When the supernatant of the reaction liquid became transparent, an aqueous solution obtained by dissolving 80 parts by mass of sodium chloride in 300 parts by mass of deionized water was added thereto to stop the growth of the particle size. Subsequently, the solution was stirred at 80° C., and fusion-bonding of the particles was caused to proceed until the average circularity of the toner particles reached 0.970. Thereafter, the solution was cooled at a temperature falling rate of 0.5° C./min or more to lower the liquid temperature to 30° C. or lower. Subsequently, an operation of performing solid-liquid separation, redispersing the dehydrated toner cake in deionized water, and performing solid-liquid separation again was repeated three times for cleaning. After cleaning, the resulting product was dried at 35° C. for 24 hours to obtain toner matrix particles. To 100 parts by mass of the obtained toner matrix particles, 0.6 parts by mass of hydrophobic silica particles (number average primary particle size: 12 nm, hydrophobicity: 68), 1.0 part by mass of hydrophobic titanium oxide particles (number average primary particle size: 20 nm, hydrophobicity: 63), and 1.0 part by mass of sol-gel silica (number average primary particle size: 110 nm) were added and mixed at 32° C. for 20 minutes at a rotating blade peripheral speed of 35 mm/sec using a Henschel mixer (manufactured by Nippon Coke & Engineering Co., Ltd.). After mixing, coarse particles were removed using a sieve with a 45 μm opening to obtain a yellow toner [Ye1].
The obtained yellow toner [Ye1] was mixed with a ferrite carrier having a volume average particle size of 32 μm coated with an acrylic resin so as to have a toner particle concentration of 6% by mass to be used as a two-component developer. <Manufacturing of Yellow Toners [Ye2] to [Ye24]> Yellow toners [Ye2] to [Ye24] were obtained in a similar manner to the above <Manufacturing of Yellow Toner [Ye1]> except that the amounts of the vinyl-based resin fine particle dispersion, the amorphous polyester resin fine particle dispersion, and the crystalline resin fine particle dispersion (each in terms of solid content) added and the type of crystalline resin were changed as illustrated in Table 3. Note that the % by mass of each of the vinyl-based resin, the amorphous polyester resin, and the crystalline resin illustrated in Table 3 represents the % by mass with respect to 600 parts by mass of the binder resin. Each of the obtained yellow toners [Ye2] to [Ye24] was mixed with a ferrite carrier having a volume average particle size of 32 μm coated with an acrylic resin so as to have a toner particle concentration of 6% by mass to be used as a two-component developer.
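Since the table compositions are given in % by mass relative to 600 parts by mass of binder resin, the conversion back to parts by mass (in terms of solid content) is simple arithmetic; a minimal sketch, with the function name chosen here for illustration:

```python
TOTAL_BINDER_PARTS = 600  # parts by mass of binder resin, per the note for Table 3

def binder_parts(vinyl_pct, amorphous_pct, crystalline_pct):
    """Convert a binder composition in % by mass to parts by mass (solid content)."""
    assert vinyl_pct + amorphous_pct + crystalline_pct == 100
    return tuple(TOTAL_BINDER_PARTS * p / 100
                 for p in (vinyl_pct, amorphous_pct, crystalline_pct))
```

For the 80/10/10 composition of [Ye1] this gives 480/60/60 parts by mass, matching the amounts of [D-S1], [D-A1], and [D-C1] used in the procedure above.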
TABLE 3
| Yellow toner | Vinyl resin | Type | Crystal structure moiety | Nucleating agent moiety | Amount of nucleating agent moiety [% by mass] | HB | Mw | Vinyl-based resin | Amorphous PES | Crystalline resin |
| Ye1 | S1 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Ye2 | S1 | C2 | Crystalline PES | Stearyl alcohol | 5 | Present | 12100 | 80 | 10 | 10 |
| Ye3 | S2 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Ye4 | S3 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Ye5 | S4 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Ye6 | S1 | C3 | Crystalline PES | Stearic acid | 5 | Present | 15700 | 80 | 10 | 10 |
| Ye7 | S1 | C4 | Crystalline PES | Stearic acid | 5 | Present | 830 | 80 | 10 | 10 |
| Ye8 | S1 | C5 | Crystalline PES | Stearic acid | 5 | Present | 22000 | 80 | 10 | 10 |
| Ye9 | S1 | C6 | Crystalline PES | Stearic acid | 5 | Present | 29200 | 80 | 10 | 10 |
| Ye10 | S1 | C7 | Crystalline PES | Stearic acid | 5 | Absent | 10800 | 80 | 10 | 10 |
| Ye11 | S1 | C8 | Crystalline PES | Stearic acid | 2 | Present | 13800 | 80 | 10 | 10 |
| Ye12 | S1 | C9 | Crystalline PES | Stearic acid | 10 | Present | 9600 | 80 | 10 | 10 |
| Ye13 | S1 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 17 | 3 |
| Ye14 | S1 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 4 | 16 |
| Ye15 | S1 | C10 | Crystalline polyurethane | Stearic acid | 5 | Present | 14100 | 80 | 10 | 10 |
| Ye16 | S1 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 58 | 32 | 10 |
| Ye17 | S1 | C11 | Crystalline PES | Octanoic acid | 5 | Present | 12000 | 80 | 10 | 10 |
| Ye18 | S1 | C12 | Crystalline PES | Dotriacontanoic acid | 5 | Present | 11300 | 80 | 10 | 10 |
| Ye19 | S1 | C13 | Crystalline PES | Stearic acid | 16 | Present | 6200 | 80 | 10 | 10 |
| Ye20 | S1 | C14 | Crystalline PES | Stearic acid | 0.5 | Present | 16500 | 80 | 10 | 10 |
| Ye21 | S5 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Ye22 | S1 | — | — | — | — | — | — | 80 | 20 | 0 |
| Ye23 | S1 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 48 | 42 | 10 |
| Ye24 | S1 | C15 | Crystalline PES | — | 0 | Present | 17500 | 80 | 10 | 10 |
* The last three columns indicate the composition ratio of the binder resin [% by mass].
* S1 to S5 use D-S1 to D-S5, respectively.
* Crystalline PES and Amorphous PES indicate crystalline polyester and amorphous polyester, respectively.
* HB indicates hybrid crystalline polyester resin.
<Manufacturing of Cyan Toners [Cy1] to [Cy9]> Cyan toners [Cy1] to [Cy9] were obtained in a similar manner to the above <Manufacturing of yellow toner [Ye1]> except that the amounts of the vinyl-based resin fine particle dispersion, the amorphous polyester resin fine particle dispersion, and the crystalline resin fine particle dispersion (each in terms of solid content) added, the type of crystalline resin, and the type of colorant fine particle dispersion were changed as illustrated in Table 4. Note that the % by mass of each of the vinyl-based resin, the amorphous polyester resin, and the crystalline resin illustrated in Table 4 represents the % by mass with respect to 600 parts by mass of the binder resin. Each of the obtained cyan toners [Cy1] to [Cy9] was mixed with a ferrite carrier having a volume average particle size of 32 μm coated with an acrylic resin so as to have a toner particle concentration of 6% by mass to be used as a two-component developer. <Manufacturing of Magenta Toners [Ma1] to [Ma10]> Magenta toners [Ma1] to [Ma10] were obtained in a similar manner to the above <Manufacturing of yellow toner [Ye1]> except that the amounts of the vinyl-based resin fine particle dispersion, the amorphous polyester resin fine particle dispersion, and the crystalline resin fine particle dispersion (each in terms of solid content) added, the type of crystalline resin, and the type of colorant fine particle dispersion were changed as illustrated in Table 5. Note that the % by mass of each of the vinyl-based resin, the amorphous polyester resin, and the crystalline resin illustrated in Table 5 represents the % by mass with respect to 600 parts by mass of the binder resin. The content of each of PM1 to PM3 in a magenta colorant illustrated in Table 5 represents % by mass with respect to 36.4 parts by mass (in terms of solid content) of the magenta colorant. 
Each of the obtained magenta toners [Ma1] to [Ma10] was mixed with a ferrite carrier having a volume average particle size of 32 μm coated with an acrylic resin so as to have a toner particle concentration of 6% by mass to be used as a two-component developer. Note that the yellow toner [Ye], the cyan toner [Cy], and the magenta toner [Ma] in the present Example are the first toner, the second toner, and the third toner in the present invention, respectively.

TABLE 4
| Cyan toner | Vinyl resin | Type of cyan colorant | Type | Crystal structure moiety | Nucleating agent moiety | Amount of nucleating agent moiety [% by mass] | HB | Mw | Vinyl-based resin | Amorphous PES | Crystalline resin |
| Cy1 | S1 | PC1 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Cy2 | S1 | PC1 | C8 | Crystalline PES | Stearic acid | 2 | Present | 13800 | 80 | 10 | 10 |
| Cy3 | S1 | PC1 | C15 | Crystalline PES | — | 0 | Present | 17500 | 80 | 10 | 10 |
| Cy4 | S1 | PC2 | C15 | Crystalline PES | — | 0 | Present | 17500 | 80 | 10 | 10 |
| Cy5 | S1 | PC2 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Cy6 | S5 | PC1 | C15 | Crystalline PES | — | 0 | Present | 17500 | 80 | 10 | 10 |
| Cy7 | S5 | PC1 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Cy8 | S1 | PC1 | — | — | — | — | — | — | 80 | 20 | 0 |
| Cy9 | S1 | PC1 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 48 | 42 | 10 |
* The last three columns indicate the composition ratio of the binder resin [% by mass].
* S1 and S5 use D-S1 and D-S5, respectively.
* Crystalline PES and Amorphous PES indicate crystalline polyester and amorphous polyester, respectively.
* HB indicates hybrid crystalline polyester resin.
TABLE 5
| Magenta toner | Vinyl resin | PM1 | PM2 | PM3 | Type | Crystal structure moiety | Nucleating agent moiety | Amount of nucleating agent moiety [% by mass] | HB | Mw | Vinyl-based resin | Amorphous PES | Crystalline resin |
| Ma1 | S1 | 0 | 30 | 70 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Ma2 | S1 | 0 | 30 | 70 | C8 | Crystalline PES | Stearic acid | 2 | Present | 13800 | 80 | 10 | 10 |
| Ma3 | S1 | 0 | 30 | 70 | C15 | Crystalline PES | — | 0 | Present | 17500 | 80 | 10 | 10 |
| Ma4 | S1 | 0 | 0 | 100 | C15 | Crystalline PES | — | 0 | Present | 17500 | 80 | 10 | 10 |
| Ma5 | S1 | 0 | 60 | 40 | C15 | Crystalline PES | — | 0 | Present | 17500 | 80 | 10 | 10 |
| Ma6 | S1 | 50 | 50 | 0 | C15 | Crystalline PES | — | 0 | Present | 17500 | 80 | 10 | 10 |
| Ma7 | S5 | 0 | 30 | 70 | C15 | Crystalline PES | — | 0 | Present | 17500 | 80 | 10 | 10 |
| Ma8 | S5 | 0 | 30 | 70 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 80 | 10 | 10 |
| Ma9 | S1 | 0 | 30 | 70 | — | — | — | — | — | — | 80 | 20 | 0 |
| Ma10 | S1 | 0 | 30 | 70 | C1 | Crystalline PES | Stearic acid | 5 | Present | 11500 | 48 | 42 | 10 |
* The PM1 to PM3 columns indicate % by mass in the magenta colorant; the last three columns indicate the composition ratio of the binder resin [% by mass].
* S1 and S5 use D-S1 and D-S5, respectively.
* Crystalline PES and Amorphous PES indicate crystalline polyester and amorphous polyester, respectively.
* HB indicates hybrid crystalline polyester resin.

[Evaluation Method] (i) Tacking Evaluation (i-a) Three Color Evaluation Onto a device obtained by modifying a fixing device of a multifunction device "bizhub PRESS (registered trademark) C1070" (manufactured by Konica Minolta Inc.) so as to be able to change the surface temperatures of a fixing upper belt and a fixing lower roller, two-component developers containing a yellow toner, a cyan toner, and a magenta toner, respectively, were sequentially loaded. Using the above device, a fixing experiment of outputting a solid image with a toner adhesion amount of 10.2 g/m2 on A4 size coated paper "OK Top Coat+ (157.0 g/m2)" (manufactured by Oji Paper Co., Ltd.) such that the yellow toner was on the outermost layer in an environment of normal temperature and humidity (temperature 20° C., humidity 50% RH) was performed at a fixing temperature of 180° C. for 800 sheets.
In order to record a paper surface temperature, out of the images on the discharged sheets, to each of the images on the first, 100th, 200th, 300th, 400th, 500th, 600th, and 700th sheets, a thermocouple "mold type surface sensor: MF-O-K" (manufactured by Toa Electric Inc.) was attached at the center of the sheet. After all 800 sheets on which images were fixed were loaded on a paper discharge tray, the sheets were left for eight hours until the paper temperature fell. The maximum temperature reached between the time when the sheets were discharged and the time when the sheets were cooled was defined as the measurement temperature of the sheets. The degree of adhesion between superimposed parts of an image after being left for eight hours was evaluated for the images on the first, 100th, 200th, 300th, 400th, 500th, 600th, and 700th sheets according to the following evaluation criteria. (Evaluation Criteria) OK: Roughness on a toner image surface is not visually recognized after superimposed sheets are peeled off from each other. NG: A toner image surface is rough after superimposed sheets are peeled off from each other. The above measurement temperature in an image that was OK according to the above evaluation criteria was defined as a tacking elimination temperature. Note that the above measurement temperature can be controlled by changing the volume of cooling air blown onto the discharged paper. When all the images on the first, 100th, 200th, 300th, 400th, 500th, 600th, and 700th sheets were NG, the volume of cooling air was increased, and a similar experiment was repeated until an OK level image was obtained. The tacking elimination temperature was judged according to the following evaluation criteria, and ⊙ and ◯ were determined as acceptable levels. No occurrence of tacking is desirable even if a sheet is discharged at a higher temperature. (Evaluation Criteria) ⊙: 60° C. or higher ◯: 58° C. or higher and lower than 60° C. Δ: 55° C. or higher and lower than 58° C. x: Lower than 55° C.
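The temperature thresholds in the grading criteria above can be captured as a small helper; the ASCII grade names stand in for the ⊙/◯/Δ/x symbols, and the function names are illustrative.

```python
def tacking_grade(temp_c):
    """Map a tacking elimination temperature [deg C] to the evaluation grade."""
    if temp_c >= 60:
        return "double-circle"   # 60 deg C or higher
    if temp_c >= 58:
        return "circle"          # 58 deg C or higher and lower than 60 deg C
    if temp_c >= 55:
        return "triangle"        # 55 deg C or higher and lower than 58 deg C
    return "x"                   # lower than 55 deg C

def is_acceptable(temp_c):
    """Per the criteria, only double-circle and circle are acceptable levels."""
    return tacking_grade(temp_c) in ("double-circle", "circle")
```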
(i-b) Secondary Color Evaluation
Tacking was evaluated in a similar manner to the above (i-a) except that the image was formed using a cyan toner and a magenta toner, and the amount of toner adhering to the paper was changed to 7.5 g/m2. Note that in the present Example, when an image is formed, a small amount of yellow toner is imparted for the purpose of preventing counterfeiting.

(ii) Wax Adhesion Evaluation
(ii-a) Three Color Evaluation
A fixing device of a multifunction device "Accurio Press C3080" (manufactured by Konica Minolta Inc.) was modified so as to be able to change the surface temperature of the fixing upper belt in a range of 140 to 220° C. and the surface temperature of the fixing lower roller in a range of 120 to 200° C. The two-component developers containing a yellow toner, a cyan toner, and a magenta toner, respectively, were sequentially loaded onto this modified device, and a solid image with a toner adhesion amount of 8.0 g/m2 was formed on A4 (basis weight 157 g/m2) gloss-coated paper such that the yellow toner was on the outermost layer in a normal temperature and humidity (temperature 20° C., humidity 50% RH) environment, and fixed. The fixing speed during fixing was 460 mm/sec, and the fixing temperature (surface temperature of the fixing upper belt) was the under-offset temperature+35° C. After 100 sheets were printed, the state of wax adhesion to a conveyance roller and of readhesion from the conveyance roller to an image was visually evaluated, and ⊙, ◯, and Δ were judged to be acceptable levels according to the following evaluation criteria.

(Evaluation Criteria)
⊙: No wax adhesion is confirmed for both adhesion to the conveyance roller and readhesion to an image.
◯: Wax adhesion to the conveyance roller is slightly confirmed, but readhesion to an image is not confirmed, which is at a level having no problem with quality.
Δ: Wax adhesion to the conveyance roller is confirmed, but readhesion to an image is hardly confirmed, which is at a level having no problem with quality.
x: A large amount of wax adhesion to the conveyance roller is confirmed, and readhesion to an image occurs, which is at a level that cannot be put into practical use.

(ii-b) Secondary Color Evaluation
Wax adhesion was evaluated in a similar manner to the above (ii-a) except that the image was formed using a cyan toner and a magenta toner, and the amount of toner adhering to the paper was changed to 6.0 g/m2. Note that in the present Example, when an image is formed, a small amount of yellow toner is imparted for the purpose of preventing counterfeiting.

(iii) Evaluation of Low-Temperature Fixability
(iii-a) Three Color Evaluation
Onto a device obtained by modifying a fixing device of a multifunction device "bizhub PRESS (registered trademark) C1070" (manufactured by Konica Minolta Inc.) so as to be able to change the fixing temperature, the toner adhesion amount, the system speed, and the surface temperatures of the fixing upper belt and the fixing lower roller, two-component developers containing a yellow toner, a cyan toner, and a magenta toner, respectively, were sequentially loaded. The adhesion amount was set to 11.3 g/m2 on A4 size high-quality paper "NPI high-quality (127.9 g/m2)" (manufactured by Nippon Paper Industries Co., Ltd.) in a normal temperature and humidity (temperature 20° C., humidity 50% RH) environment. Thereafter, a fixing experiment of fixing an image having a size of 100 mm×100 mm such that the yellow toner was on the outermost layer was repeatedly performed while the set fixing temperature was raised in 2° C. increments from 110° C. to 180° C. The surface temperature of the fixing upper belt at the lowest fixing temperature at which image stains due to fixing offset were not visually confirmed was defined as the lowest fixing temperature (U.O.
avoidance temperature), and low-temperature fixability was evaluated according to the following evaluation criteria.

(Evaluation Criteria)
⊙: Lowest fixing temperature is lower than 125° C.
◯: Lowest fixing temperature is 125° C. or higher and lower than 130° C.
Δ: Lowest fixing temperature is 130° C. or higher and lower than 135° C.
x: Lowest fixing temperature is 135° C. or higher

(iii-b) Secondary Color Evaluation
Low-temperature fixability was evaluated in a similar manner to the above (iii-a) except that the image was formed using a cyan toner and a magenta toner, and the amount of toner adhering to the paper was changed to 8.0 g/m2. Note that in the present Example, when an image is formed, a small amount of yellow toner is imparted for the purpose of preventing counterfeiting.

(iv) Fold Fixability
(iv-a) Three Color Evaluation
Onto a device obtained by modifying a fixing device of a multifunction device "bizhub PRESS (registered trademark) C1070" (manufactured by Konica Minolta Inc.) so as to be able to change the surface temperatures of the fixing upper belt and the fixing lower roller, two-component developers were sequentially loaded. The above device was modified such that the fixing temperature, the toner adhesion amount, and the system speed could be set freely. A fixing experiment of outputting a solid image with an adhesion amount of 11.3 g/m2 on A4 size high-quality paper "NPI high-quality (127.9 g/m2)" (manufactured by Oji Paper Co., Ltd.) such that the yellow toner was on the outermost layer in an environment of normal temperature and humidity (temperature 20° C., humidity 50% RH) was repeatedly performed while the set fixing temperature was raised in 5° C. increments from 100° C. to 200° C.
Subsequently, the printed matter obtained in the fixing experiment at each fixing temperature was folded with a folding machine under a weight load equivalent to 10 g/cm2 such that the printed matter was valley-folded in a direction in which parts of the solid image came into contact with each other, and compressed air of 0.35 MPa was blown onto the solid images. The crease was ranked according to the following evaluation criteria. Results thereof are illustrated in the Table below.

(Evaluation Criteria)
5: No crease
4: Peeling is partially observed along a crease
3: Fine linear peeling is observed along a crease
2: Thick linear peeling is observed along a crease
1: Peeling is significantly observed along a crease

Among images of rank 2 or higher, the surface temperature of the fixing upper belt in the fixing experiment having the lowest fixing temperature was defined as the fixing lower limit temperature. For this fixing lower limit temperature, fold fixability was evaluated according to the following evaluation criteria.

(Evaluation Criteria)
⊙: Fixing lower limit temperature is lower than 145° C.
◯: Fixing lower limit temperature is 145° C. or higher and lower than 150° C.
Δ: Fixing lower limit temperature is 150° C. or higher and lower than 155° C.
x: Fixing lower limit temperature is 155° C. or higher

(iv-b) Secondary Color Evaluation
Fold fixability was evaluated in a similar manner to the above (iv-a) except that the image was formed using a cyan toner and a magenta toner, and the amount of toner adhering to the paper was changed to 8.0 g/m2. Note that in the present Example, when an image is formed, a small amount of yellow toner is imparted for the purpose of preventing counterfeiting.

The above evaluation tests (i) to (iv) were performed using the toner sets of Experiments 1 to 42 illustrated in Table 6.
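The step of deriving the fixing lower limit temperature from the crease ranks can be sketched as follows (the function and parameter names are our own):

```python
def fixing_lower_limit_temp(rank_by_temp: dict) -> float:
    """Return the lowest set fixing temperature whose crease rank is 2 or higher.

    `rank_by_temp` maps each set fixing temperature (deg C, swept in 5 C steps)
    to the crease rank (1-5) observed at that temperature.
    """
    passing = [temp for temp, rank in rank_by_temp.items() if rank >= 2]
    if not passing:
        raise ValueError("no fixing temperature reached rank 2 or higher")
    return min(passing)
```

For a hypothetical sweep `{130: 1, 135: 1, 140: 2, 145: 3, 150: 4}`, the fixing lower limit temperature would be 140° C., which the threshold criteria above would then grade ⊙.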
Note that the numerical values in the items of tacking, low-temperature fixability, and fold fixability in Table 6 represent measured values of temperature (° C.) measured for each evaluation.

TABLE 6 (three color evaluation; a blank toner cell indicates the same toner as in the row above)

Experiment No. | Yellow toner | Cyan toner | Magenta toner | Tacking | Wax adhesion | Low-temperature fixability | Fold fixability
Example
1 | Ye1 | Cy3 | Ma3 | 60.7 (⊙) | ⊙ | 124.9 (⊙) | 143.1 (⊙)
2 | Ye2 | | | 60.6 (⊙) | ⊙ | 124.8 (⊙) | 143.3 (⊙)
3 | Ye3 | | | 60.7 (⊙) | ⊙ | 126.3 (◯) | 145.2 (◯)
4 | Ye4 | | | 59.5 (◯) | Δ | 124.2 (⊙) | 145.1 (◯)
5 | Ye5 | | | 61.1 (⊙) | ⊙ | 127.3 (◯) | 146.1 (◯)
6 | Ye6 | | | 60.2 (⊙) | ◯ | 126.5 (◯) | 144.4 (⊙)
7 | Ye7 | | | 59.1 (◯) | Δ | 124.1 (⊙) | 143.9 (⊙)
8 | Ye8 | | | 60.2 (⊙) | ◯ | 129.2 (◯) | 148.5 (◯)
9 | Ye9 | | | 60.7 (⊙) | ◯ | 129.8 (◯) | 151.9 (Δ)
10 | Ye10 | | | 60.3 (⊙) | Δ | 125.0 (◯) | 145.9 (◯)
11 | Ye11 | | | 57.7 (Δ) | Δ | 124.1 (⊙) | 146.3 (◯)
12 | Ye12 | | | 59.5 (◯) | ◯ | 127.8 (◯) | 147.8 (◯)
13 | Ye13 | | | 60.7 (⊙) | Δ | 129.8 (◯) | 151.4 (Δ)
14 | Ye14 | | | 59.3 (◯) | ⊙ | 124.5 (⊙) | 150.7 (Δ)
15 | Ye15 | | | 58.9 (◯) | ◯ | 126.7 (◯) | 146.1 (◯)
16 | Ye16 | | | 58.3 (◯) | Δ | 123.6 (⊙) | 145.1 (◯)
17 | Ye17 | | | 57.5 (Δ) | ◯ | 124.6 (⊙) | 143.5 (⊙)
18 | Ye18 | | | 58.2 (◯) | Δ | 129.9 (◯) | 153.9 (Δ)
19 | Ye19 | | | 59.2 (◯) | ◯ | 131.1 (Δ) | 154.6 (Δ)
20 | Ye20 | | | 57.0 (Δ) | Δ | 125.2 (◯) | 148.2 (◯)
21 | Ye1 | Cy1 | Ma1 | 61.5 (⊙) | ⊙ | 133.1 (Δ) | 153.1 (Δ)
22 | | Cy2 | Ma2 | 61.0 (⊙) | ⊙ | 128.1 (◯) | 144.6 (⊙)
23 | | Cy3 | Ma4 | 60.3 (⊙) | ⊙ | 124.8 (⊙) | 143.7 (⊙)
24 | | | Ma5 | 60.9 (⊙) | ⊙ | 126.3 (◯) | 145.9 (◯)
25 | | | Ma6 | 61.1 (⊙) | ⊙ | 128.9 (◯) | 149.1 (◯)
26 | Ye7 | Cy1 | Ma1 | 59.8 (◯) | Δ | 131.9 (Δ) | 150.7 (Δ)
27 | | Cy2 | Ma2 | 59.3 (◯) | Δ | 129.7 (◯) | 146.1 (◯)
28 | | Cy3 | Ma6 | 58.8 (◯) | Δ | 131.0 (Δ) | 152.2 (Δ)
29 | Ye8 | Cy1 | Ma1 | 60.9 (⊙) | ⊙ | 133.1 (Δ) | 154.7 (Δ)
30 | | Cy2 | Ma2 | 60.1 (⊙) | ◯ | 130.6 (Δ) | 148.9 (◯)
31 | | Cy3 | Ma6 | 60.4 (⊙) | ◯ | 132.1 (Δ) | 151.2 (Δ)
Comparative Example
32 | Ye21 | Cy3 | Ma3 | 55.3 (Δ) | x | 134.1 (Δ) | 149.6 (◯)
33 | Ye22 | | | 56.1 (Δ) | x | 139.8 (x) | 157.2 (x)
34 | Ye23 | | | 54.6 (x) | x | 129.1 (◯) | 147.8 (◯)
35 | Ye24 | | | 54.9 (x) | x | 129.6 (◯) | 148.8 (◯)
36 | Ye1 | Cy4 | Ma3 | 60.1 (⊙) | ⊙ | 126.1 (◯) | 158.2 (x)
37 | | Cy5 | | 60.3 (⊙) | ⊙ | 125.4 (◯) | 156.4 (x)
38 | | Cy6 | Ma7 | 56.3 (Δ) | x | 125.1 (◯) | 144.1 (⊙)
39 | | Cy7 | Ma8 | 57.8 (Δ) | Δ | 128.9 (◯) | 150.4 (Δ)
40 | | Cy8 | Ma9 | 60.8 (⊙) | ⊙ | 137.1 (x) | 162.2 (x)
41 | Ye22 | Cy8 | Ma9 | 61.2 (⊙) | Δ | 140.6 (x) | 166.1 (x)
42 | Ye1 | Cy9 | Ma10 | 59.5 (◯) | ⊙ | 131.2 (Δ) | 156.2 (x)

TABLE 6 (continued; secondary color evaluation (blue))

Experiment No. | Tacking | Wax adhesion | Low-temperature fixability | Fold fixability
Example
1 | 61.1 (⊙) | ⊙ | 121.9 (⊙) | 133.1 (⊙)
2 | 61.1 (⊙) | ⊙ | 122.1 (⊙) | 133.2 (⊙)
3 | 60.5 (⊙) | ◯ | 122.0 (⊙) | 133.1 (⊙)
4 | 61.2 (⊙) | ⊙ | 121.8 (⊙) | 133.3 (⊙)
5 | 59.8 (◯) | Δ | 122.1 (⊙) | 133.2 (⊙)
6 | 59.5 (◯) | Δ | 122.2 (⊙) | 133.2 (⊙)
7 | 58.8 (◯) | Δ | 121.6 (⊙) | 133.3 (⊙)
8 | 58.9 (◯) | Δ | 122.2 (⊙) | 133.1 (⊙)
9 | 57.9 (Δ) | Δ | 122.3 (⊙) | 133.2 (⊙)
10 | 58.7 (◯) | Δ | 122.2 (⊙) | 133.3 (⊙)
11 | 57.8 (Δ) | Δ | 122.1 (⊙) | 133.2 (⊙)
12 | 60.8 (⊙) | ⊙ | 122.3 (⊙) | 133.1 (⊙)
13 | 59.7 (◯) | Δ | 122.2 (⊙) | 133.2 (⊙)
14 | 61.3 (⊙) | ⊙ | 122.4 (⊙) | 133.2 (⊙)
15 | 58.4 (◯) | Δ | 122.3 (⊙) | 133.1 (⊙)
16 | 58.9 (◯) | Δ | 122.2 (⊙) | 133.3 (⊙)
17 | 56.9 (Δ) | Δ | 122.2 (⊙) | 133.2 (⊙)
18 | 58.9 (◯) | ◯ | 122.4 (⊙) | 133.2 (⊙)
19 | 60.1 (⊙) | ⊙ | 122.5 (⊙) | 133.1 (⊙)
20 | 56.5 (Δ) | Δ | 122.1 (⊙) | 133.1 (⊙)
21 | 63.5 (⊙) | ⊙ | 129.0 (◯) | 144.5 (Δ)
22 | 62.5 (⊙) | ⊙ | 125.1 (◯) | 134.8 (⊙)
23 | 60.8 (⊙) | ⊙ | 121.0 (⊙) | 131.9 (⊙)
24 | 61.3 (⊙) | ⊙ | 125.2 (◯) | 139.6 (◯)
25 | 61.4 (⊙) | ⊙ | 130.6 (Δ) | 141.7 (Δ)
26 | 63.2 (⊙) | ⊙ | 129.1 (◯) | 144.1 (Δ)
27 | 62.4 (⊙) | ⊙ | 125.2 (◯) | 134.6 (⊙)
28 | 61.0 (⊙) | ◯ | 130.2 (Δ) | 140.5 (Δ)
29 | 63.2 (⊙) | ⊙ | 129.3 (◯) | 144.7 (Δ)
30 | 61.5 (⊙) | ◯ | 125.2 (◯) | 134.6 (⊙)
31 | 59.5 (◯) | Δ | 131.1 (Δ) | 140.9 (Δ)
Comparative Example
32 | 58.7 (◯) | Δ | 122.3 (⊙) | 133.2 (⊙)
33 | 55.7 (Δ) | x | 122.1 (⊙) | 133.3 (⊙)
34 | 57.3 (Δ) | Δ | 122.1 (⊙) | 133.1 (⊙)
35 | 55.6 (Δ) | x | 121.9 (⊙) | 133.2 (⊙)
36 | 61.0 (⊙) | ⊙ | 122.3 (⊙) | 156.1 (x)
37 | 61.2 (⊙) | ⊙ | 121.9 (⊙) | 154.8 (Δ)
38 | 53.6 (x) | x | 121.8 (⊙) | 133.4 (⊙)
39 | 55.8 (Δ) | x | 128.5 (◯) | 142.9 (Δ)
40 | 61.1 (⊙) | ⊙ | 131.4 (Δ) | 157.2 (x)
41 | 61.1 (⊙) | Δ | 131.6 (Δ) | 157.5 (x)
42 | 57.6 (◯) | Δ | 128.6 (◯) | 152.1 (Δ)

[Results and Discussion]
In the three color evaluation, low-temperature fixability was equal to or higher, tacking and wax adhesion were more suppressed, and fold fixability was better in Experiments 1 to 31 than in Experiments 32 to 42. In addition, also in the secondary color evaluation, in which only a small amount of yellow toner was used, low-temperature fixability was equal to or higher, tacking and wax adhesion were more suppressed, and fold fixability was better in Experiments 1 to 31 than in Experiments 32 to 42.
In particular, the melting point of the hydrocarbon wax in the yellow toner, the content and weight average molecular weight of the crystalline resin in the binder resin, the content and carbon number of the crystal nucleating agent moiety in the crystalline resin, and the content of the vinyl-based resin were each in a more preferable range in Experiments 1 and 2 than in Experiments 3 to 20, and the crystalline resin was a hybrid crystalline polyester resin in Experiments 1 and 2. Therefore, tacking and wax adhesion were more suppressed, and fold fixability was better in Experiments 1 and 2 than in Experiments 3 to 20.

For Experiments 1 to 3, 5 to 11, 13, 15 to 18, 20 to 25, and 29 to 31, the evaluations of tacking and wax adhesion were better in the three color evaluation than in the secondary color evaluation. It is considered that this is due to the use of a larger amount of the yellow toner containing a crystal nucleating agent in the three color evaluation than in the secondary color evaluation. In addition, the cyan toner also contains a crystalline resin and further contains the colorant represented by general formula (1). Therefore, it is considered that low-temperature fixability and fold fixability equal to or higher than those in the three color evaluation were exhibited in the secondary color evaluation of Experiments 1 to 31.

In Experiment 4, a yellow toner containing a wax having a melting point of 78.0° C. was used, and therefore it is considered that the evaluations of tacking and wax adhesion were better in the secondary color evaluation, in which the amount of yellow toner used was smaller, than in the three color evaluation.
In Experiments 12 and 19, the amount of the crystal nucleating agent moiety in the yellow toner was larger than the more preferable range, and therefore it is considered that an interaction between the wax and the crystalline resin became too strong, and the wax and the crystalline resin were compatible with each other to delay crystallization of the crystalline resin. Therefore, it is considered that evaluations of tacking and wax adhesion were better in the secondary color evaluation in which the amount of yellow toner used was smaller than those in the three color evaluation. In Experiment 14, the content of the crystalline resin in the yellow toner was 15% by mass or more, and therefore it is considered that the vinyl-based resin was excessively plasticized. Therefore, it is considered that the evaluation of tacking was better in the secondary color evaluation in which the amount of yellow toner used was smaller than that in the three color evaluation. In Experiments 26 and 27, the weight average molecular weight (Mw) of the crystalline resin in the yellow toner was 1000 or less, and therefore it is considered that the crystalline resin was easily compatible with the vinyl-based resin and the wax. Therefore, it is considered that evaluations of tacking and wax adhesion were better in the secondary color evaluation in which the amount of yellow toner used was smaller than those in the three color evaluation. Meanwhile, in Experiment 32, the wax in the yellow toner was an ester wax, and therefore it is considered that tacking and wax adhesion deteriorated in the three color evaluation. In Experiments 33 and 41, the yellow toner on the outermost layer of the image did not contain the crystalline resin, and therefore it is considered that the wax adhesion deteriorated in addition to low-temperature fixability and fold fixability in the three color evaluation. 
In Experiment 34, the content of the vinyl-based resin in the yellow toner was less than 50% by mass, and therefore it is considered that domains of the crystalline resin could not be finely dispersed, and tacking and wax adhesion deteriorated in the three color evaluation. In Experiment 35, the yellow toner did not contain the crystal nucleating agent moiety, and therefore it is considered that tacking and wax adhesion deteriorated in the three color evaluation. In Experiments 36 and 37, the cyan toner did not contain the colorant represented by general formula (1), and therefore it is considered that fold fixability deteriorated in both the three color evaluation and the secondary color evaluation. In Experiments 38 and 39, the wax contained in each of the cyan toner and the magenta toner was an ester wax, and therefore it is considered that tacking and wax adhesion deteriorated in both the three color evaluation and the secondary color evaluation. In Experiment 40, each of the cyan toner and the magenta toner did not contain the crystalline resin, and therefore it is considered that low-temperature fixability and fold fixability deteriorated in both the three color evaluation and the secondary color evaluation. In Experiment 42, the content of the vinyl-based resin contained in each of the cyan toner and the magenta toner, which were contained in a lower layer than the yellow toner, was less than 50% by mass in the three color evaluation, and therefore it is considered that fold fixability deteriorated. The toner set according to an embodiment of the present invention has made it possible to suppress tacking of an image formed by a toner and to improve fold fixability. Therefore, it is expected that the present invention will make formation of a high-quality image with a toner easier and contribute to advancement and popularization of technology in this field. 
Although embodiments of the present invention have been described and illustrated in detail, the disclosed embodiments are made for purposes of illustration and example only and not limitation. The scope of the present invention should be interpreted by the terms of the appended claims.
11860571

DETAILED DESCRIPTION

Hereinafter, embodiments consistent with the disclosure will be described with reference to drawings, which are merely examples for illustrative purposes and are not intended to limit the scope of the disclosure. In the drawings, the shape and size may be exaggerated, distorted, or simplified for clarity. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts, and a detailed description thereof may be omitted. Further, in the present disclosure, the disclosed embodiments and the features of the disclosed embodiments may be combined under conditions without conflicts. The described embodiments are some but not all of the embodiments of the present disclosure. Based on the disclosed embodiments, persons of ordinary skill in the art may derive other embodiments consistent with the present disclosure, all of which are within the scope of the present disclosure.

The present disclosure provides an optical system using one or more polarization volume hologram (PVH) layers configured to reflect infrared (IR) light for, e.g., eye tracking purposes. A PVH layer includes a plurality of liquid crystal (LC) molecules which are spatially orientated to enable at least one optical function of the PVH layer, and is also referred to as, e.g., a "polarization sensitive grating," a "polarization sensitive optical element," a "liquid crystal grating," or a "chiral liquid crystal element." FIGS. 1A-1D schematically show an example PVH layer 100 consistent with the disclosure. FIG. 1A is a perspective view of the PVH layer 100. FIG. 1B is a cross-sectional view of the PVH layer 100 in the y-z plane. FIG. 1C is a plan view of the PVH layer 100 in the x-y plane. FIG. 1D is a partial plan view of the PVH layer 100 in the x-y plane along the y-axis from a center region of the PVH layer 100 to an edge region of the PVH layer 100.
The optical function of a PVH layer can be determined based on the manipulation of the optic axes of the LC molecules in the PVH layer. Hereinafter, an orientation of the optic axis of an LC molecule is also referred to as an orientation or alignment of the LC molecule. The manipulation of the optic axes of the LC molecules in the PVH layer is a 3-dimensional (3D) alignment of the LC molecules. A PVH layer consistent with the disclosure can deflect light via Bragg diffraction. The Bragg grating in the PVH layer can be created by adding a chiral dopant to induce a helical twist along the vertical direction, e.g., the z-axis direction shown in FIGS. 1A and 1B. As shown in FIG. 1B, in the z-axis direction of the PVH layer 100, the LC molecules twist and the rotating angle changes continuously and periodically along the z-axis with a period of Λz. The period Λz (or pitch length p=2Λz) can be adjusted by controlling the helical twist power (HTP) and the concentration of the chiral dopant. Similarly, an in-plane periodicity in the x-y plane is also introduced into the PVH layer 100 by, e.g., modifying the surface alignment of the PVH layer 100 to provide a rotation of the LC molecules in the x-y plane. As a result, the Bragg planes in the PVH layer 100 become slanted, as indicated by the slanted lines in FIG. 1B. The distance between neighboring slanted lines is the Bragg period ΛB of the Bragg grating formed by the LC molecules in the PVH layer 100. The Bragg period ΛB can depend on the z-axis period Λz of the LC molecules and a slanting angle of the Bragg planes with respect to a surface of the PVH layer 100. The slanted Bragg planes can allow the PVH layer 100 to redirect incident light to be converged or diverged in reflection or in transmission. Thus, through further manipulation of the orientation of the LC molecules in the x-y plane, the PVH layer 100 can be configured to function as a lens, such as a reflective lens, that can, e.g., converge or diverge incident light.
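The dependence of ΛB on the z-axis period and the slant angle can be illustrated with a cosine projection, ΛB = Λz·cos φ, a textbook geometric relation for slanted planes that we assume here for illustration; the source states only that ΛB depends on these two quantities:

```python
import math

def bragg_period(z_period_nm: float, slant_deg: float) -> float:
    """Bragg period of slanted planes, assuming Lambda_B = Lambda_z * cos(slant).

    This cosine projection is a modeling assumption; the source only says the
    Bragg period depends on the z-axis period and the slant angle.
    """
    return z_period_nm * math.cos(math.radians(slant_deg))
```

Under this assumption, unslanted planes (0°) give ΛB = Λz, and increasing the slant shortens the Bragg period, e.g. a 60° slant halves it.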
The PVH layer is thus also referred to as a "PVH lens." Consistent with the disclosure, as shown in FIGS. 1A and 1C, the PVH layer 100 creates a respective lens profile via the in-plane (x-y plane) orientation (azimuth angle θ) of the LC molecules, in which the phase difference Γ=2θ. In the PVH layer 100, the azimuth angles of the LC molecules change continuously from a center 102 to an edge 104 of the PVH layer 100, with a varied period Λ, i.e., a distance between two LC molecules whose azimuth angles differ from each other by 180°. The lens of the PVH layer 100 may include a certain symmetry in the arrangement of the LC molecules about an optical axis of the PVH layer 100, which, for example, may pass through the center 102 of the PVH layer 100. As shown in FIGS. 1A and 1C, the LC molecules in at least a portion of the PVH layer 100 are orientated or aligned rotationally-symmetrically (e.g., three-fold, four-fold, six-fold, or eight-fold) about the optical axis of the PVH layer 100. In some embodiments, in the center portion of the PVH layer 100, the LC molecules are aligned rotationally-symmetrically about the optical axis of the PVH layer 100. In some embodiments, the rotational symmetry of the LC molecules can be axisymmetry, i.e., the LC molecules in the at least one portion can be aligned axisymmetrically about the optical axis of the PVH layer 100. The change of the LC orientation from the center 102 to the edge 104 of the PVH layer is more clearly seen in the partial plan view of FIG. 1D. As shown in FIG. 1D, for the LC orientation, the rate of period variation from the center 102 to the edge 104 of the PVH layer 100 is a function of the distance from the center 102, and increases with the distance from the center 102. For example, the period at the center 102 (Λ0) is the longest, the period at the edge 104 (Λr) is the shortest, and the periods in between (e.g., Λ1) are intermediate, i.e., Λ0>Λ1> . . . >Λr.

FIG. 2A is a schematic view showing a portion of an example optical system 200 consistent with the disclosure.
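The radially shrinking period just described can be generated, for example, from a parabolic Pancharatnam-Berry lens phase θ(r) = πr²/(2λf); this particular profile, and the names below, are illustrative assumptions rather than details taken from the source:

```python
import math

# Illustrative Pancharatnam-Berry lens profile (an assumption, not from the
# source): theta(r) = pi * r^2 / (2 * wavelength * focal_length).

def azimuth(r_um: float, wavelength_um: float, focal_um: float) -> float:
    """In-plane LC azimuth angle (radians) at radius r for the assumed profile."""
    return math.pi * r_um**2 / (2.0 * wavelength_um * focal_um)

def local_period(r_um: float, wavelength_um: float, focal_um: float) -> float:
    """Distance over which the azimuth advances by pi (a 180-degree rotation).

    Solving theta(r + Lambda) - theta(r) = pi gives
    Lambda(r) = sqrt(r^2 + 2*wavelength*focal) - r, which decreases with r.
    """
    return math.sqrt(r_um**2 + 2.0 * wavelength_um * focal_um) - r_um
```

Evaluating `local_period` at increasing radii reproduces the ordering in the text: the period is longest at the center and shrinks monotonically toward the edge (Λ0 > Λ1 > . . . > Λr).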
The optical system 200 includes a substrate 210 and a PVH layer 220. The substrate 210 provides support to the PVH layer 220, and can be, for example, a piece of rigid material, such as glass, a piece of flexible material, such as plastic, or a functional device, such as a display screen. For illustrative purposes, in FIG. 2A, the substrate 210 and the PVH layer 220 are shown as spaced apart from each other. In actual implementation, they can contact each other or be spaced apart from each other by, for example, one or more spacing members, or by being held at different places of a frame or a housing of the optical system 200. In some embodiments, additional layer(s), such as protection layer(s) and/or buffer layer(s), can be arranged between the substrate 210 and the PVH layer 220. The PVH layer 220 can be any PVH layer consistent with the disclosure, such as the PVH layer 100 described above in connection with FIGS. 1A-1D. As shown in FIG. 1B, the LC molecules are aligned in a helix twist with the helix axis along the z-direction. The helix twist can be either left-handed or right-handed. A PVH layer may allow deflection for one circularly polarized light while the other polarization may transmit through. In some embodiments, a PVH layer can deflect circularly polarized light having the same handedness as the helix twist of the PVH layer and transmit circularly polarized light having the orthogonal handedness. In some embodiments, depending on how the LC molecules in the PVH layer are aligned, the PVH layer can either converge or diverge the incident light. For illustrative purposes and as an example, in the description below, the PVH layer 220 is described as having a helix twist of right handedness (indicated by a hollow block in the figure). In some other embodiments, the PVH layer 220 can have a helix twist of left handedness.
As shown in FIG. 2A, incident light 230 includes two components that are polarized in mutually perpendicular (orthogonal) directions, i.e., a first incident light ray 232 having a right-handed circular polarization (indicated by a hollow arrow in the figure) and a second incident light ray 234 having a left-handed circular polarization (indicated by a solid arrow in the figure). Because the first incident light ray 232 has the same handedness as the helix twist of the PVH layer 220, the first incident light ray 232 is reflected by the PVH layer 220 to form a reflected light ray 236. Further, the PVH layer 220 does not change the handedness of the polarization of the first incident light ray 232, and hence the reflected light ray 236 retains the handedness of the polarization, i.e., also has a right-handed circular polarization. On the other hand, because the second incident light ray 234 has a different handedness than the PVH layer 220, the second incident light ray 234 passes through the PVH layer 220 without being reflected and without a change in the handedness of the polarization. As described above, a PVH layer can reflect incident light by the Bragg grating formed by the LC molecules in the PVH layer. The angle between the incident light ray and the reflected light ray can depend on the wavelength of the light and the Bragg period of the Bragg grating in the PVH layer. Therefore, the angle α between the first incident light ray 232 and the reflected light ray 236 can depend on the Bragg period of the Bragg grating in the PVH layer 220 and the wavelength of the first incident light ray 232. The optical power of a PVH layer determines the degree to which the PVH layer can converge or diverge light and can be inversely proportional to the focal length or effective focal length of the PVH layer. The optical power of the PVH layer can be adjusted by changing the alignment of the LC molecules in the PVH layer to change the angle of reflection at different points of the PVH layer.
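The polarization selection rule of a single PVH layer described above, i.e., reflect the matching circular handedness with handedness preserved and transmit the orthogonal handedness, can be modeled minimally (the function and field names are ours):

```python
def pvh_interact(incident_handedness: str, helix_handedness: str) -> dict:
    """Model a single PVH layer's response to circularly polarized light.

    Handedness strings are "right" or "left". The layer reflects light whose
    circular polarization matches its helix twist, preserving the handedness,
    and transmits the orthogonal polarization unchanged.
    """
    if incident_handedness == helix_handedness:
        return {"path": "reflected", "handedness": incident_handedness}
    return {"path": "transmitted", "handedness": incident_handedness}
```

With a right-handed layer like the PVH layer 220 of the example, a right-circular ray is reflected and stays right-handed, while a left-circular ray passes straight through.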
Changing the optical power of a PVH layer may also change the field of view (FOV) of the PVH layer. Similar to the optical power, the optical axis of a PVH layer can also be adjusted by changing the alignment of the LC molecules in the PVH layer. The direction of the optical axis of the PVH layer may or may not be perpendicular to the surface of the PVH layer.

FIG. 2B is another schematic view of the optical system 200. In FIG. 2B, the optical system 200 is shown as a head-mounted display, such as smart glasses. FIG. 2B shows a portion of the optical system 200, where the un-shown portion of the optical system 200 can be symmetric to the illustrated portion of the optical system 200. As shown in FIG. 2B, the optical system 200 further includes an optical sensor 240 configured to generate an image using polarized light reflected by the PVH layer 220. In some embodiments, the optical sensor 240 can be sensitive to light having a wavelength within a spectrum that includes the IR spectrum. In some embodiments, the optical sensor 240 can be sensitive to IR light but not visible light. The optical sensor 240 can be a camera and can include, for example, a charge-coupled device (CCD) sensor, a complementary metal-oxide-semiconductor (CMOS) sensor, or an N-type metal-oxide-semiconductor (NMOS) sensor. The optical sensor 240 can be mounted at any suitable part of the optical system 200, so long as the optical sensor 240 can be arranged to face the PVH layer 220 to receive light reflected by the PVH layer 220. In some embodiments, the optical system 200 can include a frame or a housing, and the optical sensor 240 can be mounted on the frame or the housing. As shown in FIG. 2B, the optical system 200 further includes a mounting member 250 for mounting the optical system 200 to an object, such as a user's head. The optical sensor 240 can be mounted at the mounting member 250. In some embodiments, the optical system 200 can include smart glasses and the mounting member 250 can include one or more temple arms.
The optical sensor 240 can be mounted at one of the one or more temple arms and faces the PVH layer 220. The optical sensor 240 can generate images of a region bounded by the marginal rays indicated by the dashed lines in FIG. 2B. Besides the characteristics of the optical sensor 240 itself, some other factors can also affect the region being imaged by the optical sensor 240, such as the optical power of the PVH layer 220 and the direction of the optical axis of the PVH layer 220. Both the optical power and the optical axis direction of the PVH layer 220 can be configured by configuring the alignment of the LC molecules in the PVH layer 220. Therefore, consistent with the disclosure, the optical system 200 can be easily designed to image different regions of an object. For example, the object can be the user's head, and the PVH layer 220 can be designed in such a manner that the optical system 200 can image a portion, such as a pupil area, of the user's eye, the entire eye of the user, an area near, such as above, below, left of, or right of, the eye of the user, or an area including the eye and the area near the eye. Thus, eye tracking can be realized. Consistent with the disclosure, because the optical power and the optical axis direction of the PVH layer 220 depend on the alignment of the LC molecules in the PVH layer 220, the overall shape and dimensions of the PVH layer 220 can remain the same for different optical powers and/or optical axis directions. Further, because the optical power and the optical axis direction of the PVH layer 220 do not depend on the orientation of the surface of the PVH layer 220, the PVH layer 220 can be designed to reflect an incident light ray at a large angle even if the incident light ray has a zero or small incident angle onto the PVH layer 220. This provides more freedom in arranging the optical sensor 240, and a more compact design of the optical system 200 can be achieved.
In some embodiments, the optical system 200 can generate images by utilizing IR light emitted or reflected by the target being tracked, such as the user's eye. In some embodiments, as shown in FIG. 2B, the optical system 200 further includes a light source 260 configured to emit a light beam to be reflected by the target toward the PVH layer 220. The light beam emitted by the light source 260 can include a narrow spectrum or a relatively broad spectrum, and one or more wavelengths of the light beam are in the IR spectrum, i.e., the spectrum of the light source 260 can be within, overlap, or encompass the IR spectrum. In some embodiments, at least one wavelength in the spectrum of the light source 260 corresponds to the Bragg period of the Bragg grating formed by the LC molecules in the PVH layer 220. In some embodiments, the light beam emitted by the light source 260 has a wavelength in the IR spectrum and corresponding to the Bragg period of the Bragg grating in the PVH layer 220. The wavelength of the light beam can be, e.g., from about 800 nm to about 1600 nm, such as about 850 nm, about 940 nm, or about 980 nm. The Bragg period of the Bragg grating in the PVH layer 220 can be, e.g., from about 130 nm to about 270 nm, or centered at about 140 nm or about 156 nm. In some embodiments, the Bragg period can be longer, such as about 0.9 μm, about 1 μm, or about 1.1 μm.

FIG. 3A is a schematic view showing a portion of another example optical system 300 consistent with the disclosure. The optical system 300 includes a substrate 310 and a PVH composite film 320 formed over the substrate 310. The substrate 310 provides support to the PVH composite film 320, and can be, for example, a piece of rigid material, such as glass, a piece of flexible material, such as plastic, or a functional device, such as a display screen. As shown in FIG. 3A, the PVH composite film 320 includes a first PVH layer 322 formed over the substrate 310 and a second PVH layer 324 coupled to the first PVH layer 322.
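As a rough sizing sketch for the PVH layer 220 above, the design wavelength can be related to the required Bragg period via the standard reflective Bragg condition λ ≈ 2n̄ΛB, where n̄ is an average refractive index; both the condition and the index value below are our assumptions, since the source states only that the wavelength "corresponds to" the Bragg period:

```python
def bragg_period_for_wavelength(wavelength_nm: float, avg_index: float) -> float:
    """Required Bragg period, assuming the textbook reflective Bragg condition
    wavelength = 2 * avg_index * Lambda_B (a modeling assumption)."""
    return wavelength_nm / (2.0 * avg_index)
```

With an assumed n̄ = 1.7, a 940 nm design wavelength calls for ΛB ≈ 276 nm, on the order of the 130-270 nm range quoted above.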
For illustrative purposes, inFIG.3A, the substrate310, the first PVH layer322, and the second PVH layer324are shown as spaced apart from each other. In actual implementation, they can contact each other or be spaced apart from each other by, for example, one or more spacing members, or by being held at different places of a frame or a housing of the optical system300. In some embodiments, additional layer(s), such as protection layer(s) and/or buffer layer(s), may be arranged between each neighboring pair of the substrate310, the first PVH layer322, and the second PVH layer324. Each of the first PVH layer322and the second PVH layer324can be a PVH layer consistent with the disclosure, such as the PVH layer100described above in connection withFIGS.1A-1D. In some embodiments, the handedness of the helix twist of the first PVH layer322can be different from (orthogonal to) the handedness of the helix twist of the second PVH layer324. For example, one of the first PVH layer322and the second PVH layer324can have a left-handed helix twist and the other one of the first PVH layer322and the second PVH layer324can have a right-handed helix twist. For illustrative purposes and as examples, in the description below, the first PVH layer322is described as having a left handedness (indicated by solid block in the figure) and the second PVH layer324is described as having a right handedness (indicated by hollow block in the figure). In some other embodiments, the first PVH layer322can have a right handedness and the second PVH layer324can have a left handedness. As shown inFIG.3A, incident light330includes two components that are polarized in mutually perpendicular (orthogonal) directions: a first incident light ray332having a right-handed circular polarization (indicated by hollow arrow in the figure) and a second incident light ray334having a left-handed circular polarization (indicated by solid arrow in the figure).
Because the first incident light ray332has a same handedness as the helix twist of the second PVH layer324, the first incident light ray332is reflected by the second PVH layer324to form a first reflected light ray336. Further, the second PVH layer324does not change the handedness of the polarization of the first incident light ray332, and hence the first reflected light ray336retains the handedness of the polarization, i.e., also having a right-handed circular polarization. On the other hand, because the second incident light ray334has a different handedness than the second PVH layer324, the second incident light ray334passes through the second PVH layer324without being reflected and without changing the handedness of the polarization. When the second incident light ray334hits the first PVH layer322, it is reflected by the first PVH layer322that has a same handedness, forming a second reflected light ray338having a left-handed circular polarization. The second reflected light ray338passes through the second PVH layer324without being reflected and without changing the handedness of the polarization. In some embodiments, the first incident light ray332and the second incident light ray334can have an approximately same wavelength. In these embodiments, the deflection angle α1 between the first incident light ray332and the first reflected light ray336can depend on the Bragg period of the Bragg grating in the second PVH layer324; and the deflection angle α2 between the second incident light ray334and the second reflected light ray338can depend on the Bragg period of the Bragg grating in the first PVH layer322. In some embodiments, the first PVH layer322and the second PVH layer324can have different Bragg periods so that the angles α1 and α2 can be different from each other. 
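The dependence of the reflection geometry on the Bragg period can be illustrated with the textbook reflective Bragg condition λ = 2·n̄·Λ·cos θ. The sketch below is not from the disclosure: it assumes an idealized, unslanted reflective grating and an assumed average refractive index n̄ ≈ 1.6, whereas an actual PVH layer has slanted Bragg planes, so real deflection angles α1 and α2 will differ from these numbers.

```python
# Hedged sketch: for an idealized (unslanted) reflective Bragg grating,
# lambda = 2 * n_avg * period * cos(theta). At a fixed wavelength, a
# different Bragg period therefore implies a different Bragg angle,
# one way the deflection angles alpha1 and alpha2 can be made to differ.
# n_avg = 1.6 is an assumed average index, not a value from the text.
import math

def bragg_angle_deg(wavelength_nm: float, period_nm: float,
                    n_avg: float = 1.6) -> float:
    """Internal angle theta (degrees) satisfying the Bragg condition."""
    c = wavelength_nm / (2.0 * n_avg * period_nm)
    if c > 1.0:
        raise ValueError("no Bragg match for this wavelength and period")
    return math.degrees(math.acos(c))

# Same 850 nm wavelength, two different (assumed) Bragg periods:
print(bragg_angle_deg(850.0, 270.0))  # smaller internal angle
print(bragg_angle_deg(850.0, 300.0))  # larger internal angle
```

Slant angle and oblique incidence shift these values in a real PVH layer; the point is only that the Bragg period is one free parameter controlling the reflection geometry.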
In some embodiments, the LC molecules of the first PVH layer322and the LC molecules of the second PVH layer324can be arranged such that the first PVH layer322and the second PVH layer324have an approximately same optical power. In some other embodiments, the LC molecules of the first PVH layer322and the LC molecules of the second PVH layer324can be arranged in such a manner that the first PVH layer322and the second PVH layer324have different optical powers. Changing the optical power of a PVH layer may also change an FOV of the PVH layer. Therefore, the first PVH layer322and the second PVH layer324can be configured to have different FOVs. In some embodiments, one of the FOV of the first PVH layer322and the FOV of the second PVH layer324can encompass another one of the FOV of the first PVH layer322and the FOV of the second PVH layer324. In some embodiments, the LC molecules of the first PVH layer322and the LC molecules of the second PVH layer324can be arranged such that the optical axis of the first PVH layer322and the optical axis of the second PVH layer324point toward an approximately same direction. In some other embodiments, the LC molecules of the first PVH layer322and the LC molecules of the second PVH layer324can be arranged such that the optical axis of the first PVH layer322and the optical axis of the second PVH layer324point toward different directions. With different arrangements of the LC molecules in the first PVH layer322and the arrangement of the LC molecules in the second PVH layer324, different combinations of optical powers, FOVs, and optical axis directions can be achieved. For example, the first PVH layer322and the second PVH layer324can have an approximately same optical power, and their optical axes can point toward different directions. As another example, the first PVH layer322and the second PVH layer324can have different optical powers, and their optical axes can point toward different directions. 
As a further example, the optical axes of the first PVH layer322and the second PVH layer324can point toward an approximately same direction, but the first PVH layer322and the second PVH layer324can have different optical powers so that the FOV of one of the first PVH layer322and the second PVH layer324can encompass the FOV of the other one of the first PVH layer322and the second PVH layer324. As a further example, the optical axes of the first PVH layer322and the second PVH layer324can point toward different directions, and the FOV of the first PVH layer322and the FOV of the second PVH layer324may or may not encompass each other, or may or may not overlap with each other. Various other combinations are possible but not listed here. FIG.3Bis another schematic view of the optical system300. InFIG.3B, the optical system300is shown as a head-mounted display, such as smart glasses.FIG.3Bshows a portion of the optical system300, where the un-shown portion of the optical system300can be symmetric to the illustrated portion of the optical system300. As shown inFIG.3B, the optical system300further includes an optical sensor340configured to generate a first image using polarized light reflected by the first PVH layer322and to generate a second image using polarized light reflected by the second PVH layer324. In some embodiments, the optical sensor340can be sensitive to light having a wavelength within a spectrum that includes the IR spectrum. In some embodiments, the optical sensor340can be sensitive to IR light but not visible light. The optical sensor340can be a camera and can include, for example, a charge-coupled device (CCD) sensor, a complementary metal-oxide-semiconductor (CMOS) sensor, or an N-type metal-oxide-semiconductor (NMOS) sensor.
The optical sensor340can be mounted at any suitable part of the optical system300, so long as the optical sensor340can be arranged to face the PVH composite film320to receive light reflected by the first PVH layer322and the light reflected by the second PVH layer324. In some embodiments, the optical system300can include a frame or a housing, and the optical sensor340can be mounted on the frame or the housing. As shown inFIG.3B, the optical system300further includes a mounting member350for mounting the optical system300to an object, such as a user's head. The optical sensor340can be mounted at the mounting member350. In some embodiments, the optical system300can include smart glasses and the mounting member350can include one or more temple arms. The optical sensor340can be mounted at one of the one or more temple arms and faces the PVH composite film320. As described above, optical powers and optical axis directions of the first PVH layer322and the second PVH layer324can be configured by manipulating the arrangements of the LC molecules in the first PVH layer322and the second PVH layer324. With different combinations of the arrangements of the LC molecules in the first PVH layer322and the second PVH layer324, different combinations of imaging regions can be realized. FIGS.3C and3Dshow two examples of imaging regions resulting from different combinations of arrangements of the LC molecules in (and hence different optical powers and/or optical axis directions of) the first PVH layer322and the second PVH layer324. InFIGS.3C and3D, the short-dashed lines indicate marginal rays bounding the imaging region of the first PVH layer322and the long-dashed lines indicate marginal rays bounding the imaging region of the second PVH layer324. 
In the example shown inFIG.3C, the optical power of the first PVH layer322can be smaller than the optical power of the second PVH layer324, and the optical axes of the first PVH layer322and the second PVH layer324can point to an approximately same direction or slightly different directions. The FOV of the second PVH layer324encompasses the FOV of the first PVH layer322. Thus, as shown inFIG.3C, the second PVH layer324can image a larger region than the first PVH layer322. For example, the first PVH layer322can image a pupil area of the user's eye and the second PVH layer324can image the entire eye of the user. In the example shown inFIG.3D, the optical powers of the first PVH layer322and the second PVH layer324can be approximately the same as each other, and the optical axes of the first PVH layer322and the second PVH layer324can point to different directions. The FOVs of the first PVH layer322and the second PVH layer324can be approximately the same as each other. Thus, as shown inFIG.3D, the first PVH layer322and the second PVH layer324can image an approximately same region from different perspectives. That is, the image generated by the optical sensor340using the polarized light reflected by the first PVH layer322and the image generated by the optical sensor340using the polarized light reflected by the second PVH layer324can be approximately a same region of the target. For example, the first PVH layer322can image the pupil area from the left perspective (the lower perspective in the figure as presented in the drawing sheet, indicated by the short-dashed lines) and the second PVH layer324can image the pupil area from the right perspective (the upper perspective in the figure as presented in the drawing sheet, indicated by the long-dashed lines). When imaging is performed from only one perspective, accuracy of eye tracking may decrease when the user looks away from an image of the optical sensor340(formed by the first PVH layer322and/or the second PVH layer324). 
On the other hand, consistent with the disclosure, using two PVH layers to allow imaging the user's eye from different perspectives can increase the accuracy of eye tracking when the user's eye moves. For example, as shown inFIG.3D, when the user looks to the left, the first PVH layer322can provide a higher tracking accuracy, and when the user looks to the right, the second PVH layer324can provide a higher tracking accuracy. In the example shown inFIG.3D, the FOVs of the first PVH layer322and the second PVH layer324are approximately the same as each other. In some other embodiments, with the approximately same optical powers, the optical axis directions of the first PVH layer322and the second PVH layer324can be configured in such a manner that the FOVs of the first PVH layer322and the second PVH layer324are different from each other but do not encompass each other. In some embodiments, the optical system300can generate images by utilizing IR light emitted or reflected by the target being tracked, such as the user's eye. In some embodiments, as shown in, e.g.,FIG.3B, the optical system300further includes a light source362configured to emit a light beam to be reflected by the target toward the PVH composite film320. The light beam emitted by the light source362can include a narrow spectrum or a relatively broad spectrum, and one or more wavelengths of the light beam are in the IR spectrum, i.e., the spectrum of the light source362can be within, overlap, or encompass the IR spectrum. In some embodiments, the light source362can have a relatively broad spectrum, and at least one wavelength in the spectrum of the light source362corresponds to the Bragg period of the Bragg grating formed by the LC molecules in the first PVH layer322and/or the Bragg period of the Bragg grating formed by the LC molecules in the second PVH layer324. 
In some embodiments, the light beam emitted by the light source362can have a relatively narrow spectrum having a peak wavelength in the IR spectrum, and the peak wavelength can correspond to the Bragg period of the Bragg grating in the first PVH layer322and/or the Bragg period of the Bragg grating in the second PVH layer324. The wavelength of the light beam can be, e.g., from about 800 nm to about 1600 nm, such as about 850 nm, about 940 nm, or about 980 nm. The Bragg period of the Bragg grating in the first PVH layer322can be, e.g., from about 130 nm to about 270 nm, or centered at about 140 nm or about 156 nm. The Bragg period of the Bragg grating in the second PVH layer324can be the same as or different from the Bragg period of the Bragg grating in the first PVH layer322, and can be, e.g., from about 130 nm to about 270 nm, or centered at about 140 nm or about 156 nm. In some embodiments, the Bragg period in one or both of the first PVH layer322and the second PVH layer324can be longer, such as about 0.9 μm, about 1 μm, or about 1.1 μm. In some embodiments, as shown in, e.g.,FIG.3B, the light source362is a first light source362and the light beam emitted by the light source362is a first light beam, and the optical system300further includes a second light source364configured to emit a second light beam to be reflected by the target toward the PVH composite film320. The second light beam emitted by the second light source364can include a narrow spectrum or a relatively broad spectrum, and one or more wavelengths of the light beam are in the IR spectrum, i.e., the spectrum of the light source364can be within, overlap, or encompass the IR spectrum. In some embodiments, the spectrum of the second light beam can be different from the spectrum of the first light beam. 
In some embodiments, the first light beam can have a first wavelength in the IR spectrum, the second light beam can have a second wavelength in the IR spectrum, and the first wavelength and the second wavelength can be different from each other. In some embodiments, the first wavelength can correspond to the Bragg period of the Bragg grating formed by the LC molecules in the first PVH layer322, and the second wavelength can correspond to the Bragg period of the Bragg grating formed by the LC molecules in the second PVH layer324. For example, the first wavelength can be about 850 nm and the Bragg period of the Bragg grating in the first PVH layer322can be about 130 nm, and the second wavelength can be about 940 nm and the Bragg period of the Bragg grating in the second PVH layer324can be about 157 nm. In the embodiments described above in connection withFIGS.3A-3D, the optical sensor340generates a first image using polarized light reflected by the first PVH layer322and a second image using polarized light reflected by the second PVH layer324. The two images can be images of two different areas (where one area may encompass, overlap with, or be separate from the other), or images of the same area from different perspectives. The polarized light reflected by the two PVH layers may be projected to an approximately same area of the optical sensor340. Therefore, if the polarized light reflected by the two PVH layers is received by the optical sensor340at the same time, the two images may superimpose on each other and the resulting image may also be referred to as a superimposed image. The superimposed image can be processed to obtain the two individual images. In some embodiments, the superimposed image can be processed based on characteristics of the target to separate the two images. For example, when the target is the user's eye, cues of the user's eye can be used in processing the superimposed image.
The cues of the user's eye can include binocular cues, such as stereopsis, eye convergence, disparity, and yielding depth from binocular vision through exploitation of parallax, and/or monocular cues, such as size, grain, and motion parallax of the optokinetic response. Another approach is to temporally separate the first image and the second image, i.e., allowing the light reflected by the first PVH layer322and the light reflected by the second PVH layer324to enter the optical sensor340at different times. In some embodiments, as shown in, e.g.,FIG.3B, the optical system300further includes an optical switch370arranged between the PVH composite film320and the optical sensor340. The optical switch370can be attached to the optical sensor340or attached to another part of the optical system300, such as the mounting member350. In some embodiments, the optical switch370can be configured to be a part of the optical sensor340. The optical switch370can be configured to switch from a first state to a second state and vice versa. In the first state, the optical switch370can transmit the polarized light reflected by the first PVH layer322and block the polarized light reflected by the second PVH layer324. In the second state, the optical switch370can transmit the polarized light reflected by the second PVH layer324and block the polarized light reflected by the first PVH layer322. In some embodiments, the optical switch370can include a quarter-wave plate and a switchable linear polarizer. The quarter-wave plate can be configured to convert circularly polarized light reflected by the first PVH layer322into first linearly polarized light, and convert second circularly polarized light reflected by the second PVH layer324into second linearly polarized light. 
Because the first circularly polarized light and the second circularly polarized light have orthogonal polarization directions, the polarization direction of the first linearly polarized light and the polarization direction of the second linearly polarized light can also be orthogonal to each other. Correspondingly, the switchable linear polarizer can be configured to switch between the two orthogonal polarization directions. As such, when the polarization direction of the switchable linear polarizer is along the polarization direction of the first linearly polarized light, the optical switch370can transmit the light reflected by the first PVH layer322and block the light reflected by the second PVH layer324. On the other hand, when the polarization direction of the switchable linear polarizer is along the polarization direction of the second linearly polarized light, the optical switch370can transmit the light reflected by the second PVH layer324and block the light reflected by the first PVH layer322. In some embodiments, the linear polarizer can be rotated to switch between the two orthogonal polarization directions. In some embodiments, the linear polarizer can include two pieces of polarizing materials having orthogonal linear polarization directions, and the polarization direction of the linear polarizer can be switched by mechanically moving one of the two pieces of polarizing materials into the optical path between the quarter-wave plate and the optical sensor340. In some embodiments, the linear polarizer can include a switchable material that can change polarization direction under an external actuation. For example, the linear polarizer can include an LC film and the LC molecules in the LC film can rotate to different directions when different external voltages are applied, e.g., to form a half-wave plate. 
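A minimal Jones-calculus sketch (using one common sign convention; the disclosure specifies no convention, and all numerical values here are illustrative) shows why a quarter-wave plate followed by a switchable ±45° linear polarizer can select the light from one PVH layer at a time:

```python
# Hedged sketch of the quarter-wave-plate + switchable-polarizer idea.
# Jones vectors are tuples of complex numbers; matrices are 2x2 tuples.
# The sign convention for circular polarization is a choice, not taken
# from the patent text.
import math

def mat_vec(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def intensity(v):
    return abs(v[0]) ** 2 + abs(v[1]) ** 2

s = 1 / math.sqrt(2)
rcp = (s, -1j * s)          # right-handed circular (chosen convention)
lcp = (s, 1j * s)           # left-handed circular

qwp = ((1, 0), (0, 1j))     # quarter-wave plate, fast axis horizontal
pol_p45 = ((0.5, 0.5), (0.5, 0.5))      # linear polarizer at +45 deg
pol_m45 = ((0.5, -0.5), (-0.5, 0.5))    # linear polarizer at -45 deg

# The QWP maps the two circular polarizations to orthogonal linear
# polarizations, so switching the polarizer axis selects one layer.
rcp_lin = mat_vec(qwp, rcp)   # becomes linear at +45 deg
lcp_lin = mat_vec(qwp, lcp)   # becomes linear at -45 deg

print(round(intensity(mat_vec(pol_p45, rcp_lin)), 6))  # ~1.0 transmitted
print(round(intensity(mat_vec(pol_p45, lcp_lin)), 6))  # ~0.0 blocked
```

Switching the polarizer to the −45° orientation reverses which component is transmitted, which is the first-state/second-state behavior described above.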
A quarter-wave plate can convert circularly polarized light into linearly polarized light when the following condition is satisfied: d×Δn=(2m+1)λ/4, where d and Δn denote the thickness and the birefringence of the quarter-wave plate, λ denotes the wavelength of the light in vacuum, and m is a non-negative integer. Therefore, when the optical switch370includes a quarter-wave plate and a switchable linear polarizer, the light reflected by the first PVH layer322and the light reflected by the second PVH layer324may need to have an approximately same wavelength. In these embodiments, the optical system300may either have one light source, such as the light source362, or have multiple light sources, such as the light source362and the light source364, that emit light beams having an approximately same wavelength. As described above, in some embodiments, the optical system300has two light sources—the light source362and the light source364, and the two light sources can emit light beams having different wavelengths (the first wavelength and the second wavelength) that can be reflected by the first PVH layer322and the second PVH layer324, respectively. In these embodiments, the optical switch can include a switchable absorber that can switch between two states. In one of the two states, the absorber can absorb light having the first wavelength but not the light having the second wavelength, and in the other one of the two states, the absorber can absorb light having the second wavelength but not the light having the first wavelength. The switchable absorber can switch between the two states under an external control.
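The quarter-wave condition d×Δn=(2m+1)λ/4 stated above can be checked numerically. The birefringence value in this sketch is an assumed, typical liquid-crystal value, not one given in the disclosure:

```python
# Worked example of the stated condition d * delta_n = (2m + 1) * lambda / 4.
# delta_n = 0.17 is an assumed birefringence for illustration only.

def qwp_thickness(wavelength_nm: float, delta_n: float, m: int = 0) -> float:
    """Thickness d (nm) satisfying d * delta_n = (2m + 1) * lambda / 4."""
    return (2 * m + 1) * wavelength_nm / (4 * delta_n)

# For 850 nm light and the assumed delta_n of 0.17, the zero-order
# (m = 0) quarter-wave plate is about 1250 nm, i.e. about 1.25 um, thick.
print(qwp_thickness(850.0, 0.17))       # ~1250 nm
print(qwp_thickness(850.0, 0.17, m=1))  # ~3750 nm (first order)
```

Higher orders (m = 1, 2, …) give thicker plates that satisfy the same retardation condition but are more wavelength-sensitive, which is one reason the wavelengths reflected by the two PVH layers may need to be approximately the same.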
In the embodiments described above, the light reflected by the first PVH layer322and the light reflected by the second PVH layer324may be projected to an approximately same area of the optical sensor340and hence the image formed by the light from the first PVH layer322and the image formed by the light from the second PVH layer324may be superimposed on each other. In some other embodiments, the first PVH layer322and the second PVH layer324can be configured such that the light reflected by the first PVH layer322and the light reflected by the second PVH layer324can be projected to two different areas of the optical sensor340, so as to avoid the images being superimposed on each other. In these embodiments, electric signals from the two different areas of the optical sensor340can be processed separately to obtain the images from the two PVH layers. In some embodiments, the optical sensor340may be longer in one dimension as compared to the other dimension. In some embodiments, the optical system300may include two optical sensors, referred to as a first optical sensor and a second optical sensor, arranged side by side and each being associated with one circular polarizer covering an aperture of the corresponding optical sensor. A first circular polarizer associated with the first optical sensor can have a same handedness of polarization as the light reflected by the first PVH layer322. As such, light reflected by the second PVH layer324may be blocked by the first circular polarizer, while the light reflected by the first PVH layer322can transmit through the first circular polarizer and form image in the first optical sensor. Similarly, a second circular polarizer associated with the second optical sensor can have a same handedness of polarization as the light reflected by the second PVH layer324. 
As such, the light reflected by the first PVH layer322may be blocked by the second circular polarizer, while the light reflected by the second PVH layer324can transmit through the second circular polarizer and form an image in the second optical sensor. In some embodiments, the optical system300may further include a geometric phase lens arranged between the PVH composite film320and the optical sensor340. The geometric phase lens can be configured to further divert light from one or both of the first PVH layer322and the second PVH layer324, and hence effectively alter the focal length or effective focal length of the first PVH layer322and/or the focal length or effective focal length of the second PVH layer324. As a result, a relative focal length of the first PVH layer322relative to the second PVH layer324can be altered. For example, the first PVH layer322may have a relatively short focal length and the second PVH layer324may have a relatively long focal length. Therefore, an effective depth of field of the optical system300as a whole may be increased. The geometric phase lens can be arranged at any suitable location along the optical path from the PVH composite film320to the optical sensor340. For example, the geometric phase lens can be arranged in front of the PVH composite film320, between the first PVH layer322and the second PVH layer324, in front of the optical sensor340, or integrated within the optical sensor340. The operation of the optical system consistent with the disclosure, such as the optical system200or the optical system300described above, can be controlled locally by a controller of the optical system.FIG.4shows a block diagram of an example controller400consistent with the disclosure. As shown inFIG.4, the controller400includes one or more processors410and one or more memories420coupled to the one or more processors410.
The one or more memories420can store instructions that, when executed by the one or more processors410, cause the one or more processors410to perform a method consistent with the disclosure, such as one of the example functions described above. For example, the instructions can cause the one or more processors410to process and record the images generated by the optical sensor240or340, or to control the light source260or the light sources362and364to turn on or off. In the optical system300, the instructions can cause the one or more processors410to separate the superimposed image to obtain the two individual images according to, e.g., the example method described above. The instructions can also cause the one or more processors410to control the optical switch to switch between the first state and the second state. Each of the one or more processors410can include any suitable hardware processor, such as a microprocessor, a micro-controller, a central processing unit (CPU), a graphic processing unit (GPU), a network processor (NP), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. Each of the one or more memories420can include a non-transitory computer-readable storage medium, such as a random access memory (RAM), a read only memory, a flash memory, a hard disk storage, or an optical medium. The foregoing description of the embodiments of the disclosure has been presented for the purpose of illustration. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure. Some portions of this description describe the embodiments of the disclosure in terms of algorithms and symbolic representations of operations on information.
These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof. Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Embodiments of the disclosure may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. Embodiments of the disclosure may also relate to a product that is produced by a computing process described herein. 
Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein. Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
11860572

DETAILED DESCRIPTION

An augmented or virtual reality system, such as a head-mounted display (HMD), may permit a user to interact with a variety of displayed holographic objects. In some examples, one or more holographic objects may occupy a volume of space. For example and with reference to the example use environment100shown inFIG.1, a user104wears a head-mounted display (HMD) device in the form of an augmented reality display system102. The augmented reality display system102displays virtual imagery to the user104via a see-through display system such that at least a portion of a real-world background is viewable concurrently with the displayed virtual imagery. While described in the context of an augmented reality display system and the use of a see-through display, it will be understood that examples described herein also may be enacted via a virtual reality display system or a video augmented reality display system in which a video image of a physical environment is obtained by a camera and then augmented with virtual image data when displayed to a user of the system. In this example, the HMD102displays a three-dimensional holographic volume in the form of a virtual house106displayed within the field of view108of the augmented reality display system102. Additional holographic objects may be located inside the volume of the virtual house106. These objects are occluded from view by the HMD102such that the user104sees only exterior elements of the house (roof, walls, etc.). In some systems, if the user desires to view holographic objects located inside the house, they first must find an "edit mode" in their display system, select a separate control feature, and then manipulate the control feature to change their view. Such a control feature interposes a mediating interface between the user's actual input and the user's ability to change the view of occluded objects inside the house.
For example, the user may be required to operate an editing affordance via digital manipulation, speech command, gaze direction, head direction, button press, or other manipulation, to change their view of the house. This approach is slow, demands high precision, and requires indirect manipulation by the user. Accordingly, examples of interaction modes are disclosed that relate to viewing inside a holographic volume in a potentially more natural, intuitive, and efficient manner. Briefly and as described in more detail below, in some examples a user of a display system may reveal holographic objects located within a holographic volume by simply moving one or both hands of the user. In some examples, location data of at least a portion of a hand is received from a sensor. Based on the location data, a change in location of the hand relative to the holographic volume is determined. Based at least on the change in location of the hand relative to the holographic volume, one or more holographic objects associated with the holographic volume, which were previously occluded from view, are displayed via the display system. As used herein, in some examples location and location data may comprise 3 degree-of-freedom location/data (such as position or orientation information relative to 3 orthogonal axes). In some examples, location and location data may comprise 6 degree-of-freedom location/data, including position information along 3 perpendicular axes and changes in orientation through rotation about the three perpendicular axes (yaw, pitch and roll). In some examples and as described in more detail below, using articulated hand location data obtained from a sensor, a slicing plane is defined along an axis that is aligned with the back of the user's hand, opposite the palm.
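As an illustrative sketch only (the function names and the sign convention for the revealing direction are assumptions, not the claimed implementation), the one-hand slicing plane described above may be modeled as a point on the back of the hand plus a unit normal, with a signed-distance test deciding whether an object is displayed:

```python
import numpy as np

def make_slicing_plane(back_point, palm_normal):
    """Define the slicing plane by a point on the back of the hand and a
    unit-length normal derived from the hand's orientation."""
    n = np.asarray(palm_normal, dtype=float)
    return np.asarray(back_point, dtype=float), n / np.linalg.norm(n)

def is_displayed(obj_center, plane_point, plane_normal):
    """Display an object only when it lies on the revealing side of the plane
    (here, the positive half-space of the normal, by assumed convention)."""
    offset = np.asarray(obj_center, dtype=float) - plane_point
    return bool(np.dot(offset, plane_normal) > 0.0)
```

As the tracked hand moves, re-deriving the plane from fresh location data each frame relocates the slice correspondingly.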
On one side of the slicing plane, holographic objects within the holographic volume are displayed, while on the other side of the slicing plane other holographic objects within the volume are not displayed to the user. As the user moves her hand the slicing plane is correspondingly relocated, and holographic objects within the volume are correspondingly displayed or occluded. In this manner, the slicing plane may provide a “flashlight” experience in which the user may easily and selectively reveal previously occluded holographic objects within the volume. In some examples, both hands of the user may each define a slicing plane. In some examples, the plane is defined along an axis that is aligned with the palm of the user's hand. When the user's palms at least partially face each other, the slicing planes may define a sub-volume within the holographic volume in which holographic objects are displayed, and outside of which holographic objects are occluded. This can create an experience of the user “holding” and dynamically resizing a volume of space between the user's hands in which holographic objects within the volume are revealed. In some examples, both hands of the user may define a sub-volume (spherical, oblong, polyhedral, or other shape) between the hands within the holographic volume in which holographic objects are displayed, and outside of which holographic objects are occluded. This can create an experience of the user “holding” and dynamically resizing a “beach ball”, “football” or other portion of space between the user's hands in which holographic objects within the volume are revealed. As a more specific example and with reference toFIG.2, in an augmented-reality scenario, user104may view the holographic house106and other holographic objects located within the house (occluded from view inFIG.2) in a stationary frame of reference for the real-world. 
The term “stationary frame of reference” indicates that the house is fixed in position relative to the real-world as a user moves through the use environment100. The house and the internally-located objects are displayed in the real-world using a coordinate location (e.g. Cartesian coordinates) within a coordinate system of the stationary frame of reference. As described in more detail in the examples below, the user104orients one or both hands120,124to be within the field of view108of the augmented reality display system102. In some examples, moving one or both hands120,124to be within the field of view108triggers an interaction mode that enables the user to reveal holographic objects located within the holographic volume by simply moving one or both hands120,124. In some examples, the user may trigger an interaction mode as described herein by penetrating the holographic volume of house106with one or both hands120,124. In other examples, the interaction mode may be triggered in any suitable manner, such as via verbal command, button press, etc. As mentioned above and as described in more detail below, in some examples the augmented reality display system102uses one or more sensors to capture depth image data of the real-world use environment100and detects, via the depth image data, an appendage (hand120,124) of the user104. Such image data may represent articulated hand image data that represents multiple joints, lengths, and/or surfaces of the hand. In this manner, the system may track the location of one or more joints, lengths, surfaces, and digits of the hand and/or planes defined by the hand. In some examples, the augmented reality display system102may fit a skeletal model to each image in a series of depth images, and apply one or more gesture filters to detect whether the user has performed a recognized gesture. 
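One possible way to model the interaction-mode trigger described above (a tracked hand location penetrating the holographic volume) is a simple axis-aligned bounds test; the function name and bounds representation are assumptions:

```python
import numpy as np

def hand_penetrates_volume(hand_point, volume_min, volume_max):
    """True when a tracked hand location lies within the axis-aligned bounds
    of the holographic volume, which may trigger the interaction mode."""
    p = np.asarray(hand_point, dtype=float)
    lo = np.asarray(volume_min, dtype=float)
    hi = np.asarray(volume_max, dtype=float)
    return bool(np.all(p >= lo) and np.all(p <= hi))
```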
In other examples, the augmented reality display system102may receive depth image data and/or other image data from one or more cameras external to the display system. With reference now toFIG.3, in this example the left hand120of user104is within the field of view of HMD102. A holographic volume in the form of another house300is displayed via the HMD. The HMD102receives location data of the hand120that may include a backside point location130on the upper portion134of the user's hand opposite to the palm side (see alsoFIG.1) and one or more other locations of the hand. Using such location data, a slicing plane304may be defined that is substantially parallel to the surface of the upper portion134of the hand. As described in more detail below, the user may conveniently and naturally move hand120to correspondingly move the slicing plane304through the house300to selectively reveal and occlude from view other holographic objects located within the volume of house300. In the example ofFIG.3, an affordance of the slicing plane304is displayed via the HMD102to enable the user to more clearly perceive the current location of the plane. In this example, the affordance comprises a translucent pane that defines the boundaries of the slicing plane. In other examples, other suitable affordances (such as a simple rectangle, glowing outline, etc.) may be displayed. In other examples, an affordance of the slicing plane304may not be displayed. In some examples, the slicing plane304may be “snapped” to align with the closest coordinate axis of the holographic volume. In the example ofFIG.3, the upper portion134of hand120is most closely aligned with the Y-Z plane of the three mutually orthogonal coordinate planes. Accordingly, the slicing plane304is snapped to align with the Y-Z plane. In this example, the X-Y-Z axes and corresponding three orthogonal planes are determined with respect to the surfaces of the holographic house300.
In other examples, the coordinate axes and corresponding orthogonal planes may be determined and set in any suitable manner. In some examples and as described below, the slicing plane304may be locked to the closest axis to which it is snapped. In this manner, the user may freely move her hand within the volume, including rotating her hand about such axis, while the slicing plane remains constrained to move along a single axis. In the example ofFIG.3, when the slicing plane304is snapped to the Y-Z plane, the user may move the slicing plane laterally along the X-axis to conveniently reveal and occlude other holographic objects within the house300. In this manner, the system maintains alignment of the slicing plane with the closest coordinate plane during movement of the hand. In other examples and as described below, a slicing plane may be free to move about all three axes from 0-360 degrees, and thereby follow the orientation of the upper portion134of the user's hand. In the example ofFIG.3, the slicing plane304selectively reveals holographic objects that are located behind the upper portion134of the hand (e.g., rearward in the negative X-axis direction). In this example, the slicing plane304may operate like a “flashlight” to reveal previously hidden or occluded holographic objects located in a predetermined revealing direction relative to the plane—in this example, in the negative X-axis direction relative to the plane. In other examples, other revealing directions may be utilized, such as in the positive X-axis direction relative to the plane. With reference now toFIGS.4-7, another example of utilizing one hand to manipulate a slicing plane within a holographic volume is provided. These figures show the user's view through the see-through display of an augmented reality device, such as HMD102. As shown inFIG.4, a holographic volume in the form of a house model400is displayed via HMD102to a user.
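The snapping behavior described above can be sketched as follows: the plane normal derived from the hand is replaced by whichever of the three coordinate axes it is most closely aligned with. This is a minimal illustration with assumed names, not the patented implementation:

```python
import numpy as np

def snap_to_closest_axis(normal):
    """Snap a hand-derived plane normal to the closest coordinate axis,
    e.g. a mostly-X-facing normal snaps the slicing plane to the Y-Z plane."""
    n = np.asarray(normal, dtype=float)
    axis = int(np.argmax(np.abs(n)))            # dominant component wins
    snapped = np.zeros(3)
    snapped[axis] = 1.0 if n[axis] >= 0.0 else -1.0   # preserve facing direction
    return snapped
```

Once snapped (and optionally locked), only the hand's displacement along that one axis needs to be applied to the plane, so rotating the hand does not disturb the slice.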
The house model400comprises a plurality of holographic objects, including structural features (floors, ceilings, walls, etc.) and furnishing features not visible in this view (tables and chairs, a couch, bookshelf, sink, etc.). In the view shown inFIG.4, the user's left hand404and the corresponding slicing plane406have not penetrated the holographic volume of house model400. Accordingly, certain holographic objects (a table and chairs, couch, bookshelf, sink) internal to the house are occluded from view by other objects (walls, ceilings) and are not displayed. In this example, the user sees his hand404through the see-through display of HMD102. Additionally, in this example a hand affordance is displayed to the user in the form of multiple blocks410that indicate a location of a particular joint or other feature of his hand. Other examples of affordances that may be displayed to indicate hand404include glowing outlines, colored overlays, etc. In other examples, a hand affordance may not be displayed. As shown inFIG.5, as the user moves his hand404in the negative X-axis direction toward the house model400, when the hand and slicing plane406penetrate the volume of the house model400, previously occluded holographic objects within the volume, such as sofa418, bookcase420, and table and chairs424, are revealed and displayed, while other previously displayed objects (front wall430, first side wall434, second side wall438, and ceiling442(seeFIG.4)) are not displayed. In this example and as described above, the upper portion of hand404is most closely aligned with the Y-Z plane, and thus the slicing plane406is snapped to align with the Y-Z plane. In this example, the slicing plane is also locked to the Y-Z plane. Also in this example, the revealing direction relative to the slicing plane, indicated at444, is the negative X-axis direction.
Accordingly and as shown inFIGS.5and6, previously occluded holographic objects (not visible inFIG.4) that are located in the negative X-axis direction from the upper portion of hand404are revealed and displayed. In this manner, the user can conveniently scan through the house model400with just one hand to selectively reveal and display internal holographic objects. Also in this example, as the hand404moves rearward in the negative X-axis direction, certain holographic objects located in the positive X-axis direction from the palm are once again occluded from view by other objects and are not displayed. For example, inFIG.6the sofa418shown inFIG.5is now occluded by the front wall430. FIG.7shows an example of the user's right hand408oriented such that the upper portion of the hand is aligned with the X-Y plane. Accordingly, previously occluded holographic objects that are located in the positive Z-axis direction from the upper portion of hand408are now revealed and displayed. With reference now toFIGS.8-11, in some examples two hands may be utilized to manipulate two slicing planes within a holographic volume. In some examples, each slicing plane corresponding to each hand may be operated as described above forFIGS.4-7. In some examples, the revealing direction may be reversed as compared to the examples illustrated inFIGS.4-7, such that previously occluded holographic objects that are located in front of the palm of each hand are revealed and displayed. As illustrated inFIGS.8and9, when the user inserts his two hands404,408into the house model400, slicing planes450and454parallel to the upper portions of the left hand404and right hand408, respectively, are generated and operate to reveal previously occluded holographic objects that are located between the slicing planes (between the user's hands). 
For example and with reference toFIG.9, by positioning his hands404,408as shown, the user causes the system to cease displaying the interior wall456and bookcase420shown inFIG.8, which correspondingly reveals and causes the system to display the holographic lamp456and dog460that had been located behind and occluded by the wall and bookcase. Additionally and in this example, the slicing planes450,454are maintained substantially parallel with the upper surfaces of the corresponding hands during movement of the hands. In some examples, one or more of the slicing planes may be aligned or substantially aligned parallel with the surface of the user's palm. With reference toFIGS.10-11, in some examples a user may orient his two hands to be substantially orthogonal or otherwise angled with respect to one another. In this manner, a wedge-shaped volume may be created in which previously occluded objects located within the wedge-shaped volume are revealed and displayed. In the example ofFIGS.10-11, the user forms a right angle wedge with his two hands404and408, and moves the wedge within the house model400to selectively reveal holographic objects by moving the wedge to contain/bound those objects, such that they are between the palms of the two hands. In a similar manner, the user may move the wedge to selectively occlude different internal holographic objects when they are outside or not contained/bounded by the wedge. As with the example described above, the two hands404and408manipulate two slicing planes (not shown) within the holographic volume of the house model400. As noted above, each slicing plane corresponding to each hand may be located and oriented such that previously occluded holographic objects that are located in front of the palm of each hand are revealed and displayed.
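A minimal sketch of the two-hand wedge test described above: an object is revealed only when it lies in front of both palms, i.e., in the positive half-space of each hand's slicing plane. The names and the half-space convention are assumptions:

```python
import numpy as np

def in_wedge(obj_center, palm_point_l, palm_normal_l, palm_point_r, palm_normal_r):
    """An object is revealed when it lies in front of BOTH palms, i.e. inside
    the wedge bounded by the two hand-aligned slicing planes."""
    c = np.asarray(obj_center, dtype=float)

    def front_of(point, normal):
        # Positive signed distance along the palm normal means "in front of the palm".
        return np.dot(c - np.asarray(point, dtype=float),
                      np.asarray(normal, dtype=float)) > 0.0

    return bool(front_of(palm_point_l, palm_normal_l)
                and front_of(palm_point_r, palm_normal_r))
```

When the palms are roughly parallel and facing each other, the same test degenerates to the "between the hands" sub-volume; angling the hands produces the right-angle wedge of FIGS. 10-11.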
In the example ofFIGS.10and11, the user positions his hands404and408to contain the bookcase420(seeFIG.10) within the wedge formed by the hands, whereby the bookcase is no longer displayed and an electrical outlet466is revealed and now displayed on the interior wall456. In this example and as described in more detail below, the house model400may comprise multiple layers of holographic objects, with each layer of objects being selectively displayed via manipulation of a slicing plane. The bookcase420may be a member of a first layer of holographic objects and the interior wall456and outlet466may be a member of a second, different layer. Accordingly, in this example and as shown inFIG.11, based on manipulation of the slicing planes corresponding to hands404and408, the system ceases to display the bookcase420(first layer of holographic objects) and displays the second layer of holographic objects that includes the interior wall456and outlet466. With reference now toFIGS.12-19, in some examples two hands may be utilized to manipulate two slicing planes that define a slicing volume within a holographic volume. In some examples, the slicing volume may comprise a geometric shape such as a polyhedron that may be enlarged and shrunken via movement of one or both hands of the user. As shown in these Figures and described below, in some examples the user's hand may remain outside the holographic volume and may manipulate the virtual slicing volume via one or more affordances. As shown in the example ofFIGS.12-15, a user500wearing HMD102may use both hands to manipulate the size and shape of a cuboid volume504to selectively reveal previously occluded holographic objects. In this example, the user500triggers generation of the cuboid volume by “grasping” two outer holographic affordances510,514displayed by the HMD102and moving them in opposing directions. 
Using depth image data from the HMD102, the system may recognize each hand512,516executing the “grasping” gesture interacting with the corresponding affordance510,514. In response to detecting the grasping gestures interacting with the affordances, the augmented reality display system102displays an initial slicing plane518through the holographic body520. In other examples, the initial slicing plane and the cuboid volume described below may not be displayed via HMD102. As shown inFIGS.13-15, as the user500moves his hands512,516apart, the cuboid volume504is generated and displayed and grows in volume. In some examples, the cuboid volume504is constrained to change size along its length in the Z-axis direction roughly parallel with the extent of the holographic body520, such that its height and width are fixed. In other examples, both or all of its dimensions may be changed via user manipulation of the affordances510,514. In this example, the outer skin556of the body520is gradually “removed” (not displayed) as the rectangular end planes of the cuboid volume504are advanced. As shown in these figures, holographic objects internal to the body that were previously occluded and not displayed are revealed and displayed as the cuboid volume504expands to include/contain the objects. For example, the heart524, veins528and arteries532are displayed inFIG.15but occluded and not displayed inFIG.13. In some examples and as noted above, a holographic volume may comprise multiple layers of holographic objects, where each layer of objects may be revealed via manipulation of a slicing plane or slicing volume as described herein. In some examples and as described below, as each sub-layer of objects is revealed and displayed, the preceding layer of objects is correspondingly removed from view. With reference to the example shown inFIGS.16-19, inFIG.16the user500performs a release gesture interacting with the affordances510and514.
Based on the release gesture, the state of display of the holographic body520in which the heart524, veins528and arteries532are displayed is frozen, and in this example the cuboid volume affordance is no longer displayed. In this example, the veins528and arteries532are associated with a first layer of holographic objects within the holographic body520. As shown inFIGS.17-19, the user500next performs the grasping gesture interacting with the two inner holographic affordances540,544and moves them in opposing directions. In this example, the inner affordances540,544are associated with the second layer of holographic objects that includes the heart524. As the user500moves his hands apart, another cuboid volume570is generated and grows in volume. As shown in these figures, this second layer cuboid volume570operates in a subtractive manner, wherein the previously displayed veins528and arteries532that are associated with the first layer of holographic objects are now removed from display when the second layer cuboid volume is expanded to include/contain these objects. In this manner, the user500may now more clearly see the heart524. Accordingly and in different examples, such as the examples ofFIGS.4-7and8-11, a slicing plane or slicing volume may be additive, meaning that the plane may operate to reveal and display previously occluded holographic objects. Additionally or alternatively, a slicing plane or slicing volume may be subtractive, meaning that the slicing plane/volume may operate to cease displaying or remove from view previously displayed holographic objects associated with different layers of objects, such as in the example ofFIGS.17-19in which multiple layers of holographic objects are utilized to selectively subtract certain objects from view. In at least one embodiment, a slicing plane or volume may be additive with respect to some holographic objects and subtractive with respect to other holographic objects.
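The additive and subtractive behaviors described above can be sketched as a single visibility pass over one layer of objects; the object fields and the containment predicate are assumed data shapes, not the claimed implementation:

```python
def apply_slicing_volume(objects, contains, mode):
    """Sketch of additive vs. subtractive slicing volumes over one layer of
    holographic objects. 'additive' reveals contained objects that were hidden;
    'subtractive' hides contained objects that were shown, exposing a deeper
    layer. Each object is a dict with assumed 'name', 'center', 'visible' keys;
    `contains` is a predicate center -> bool for the slicing volume."""
    shown = []
    for obj in objects:
        inside = contains(obj["center"])
        if mode == "additive":
            visible = obj["visible"] or inside
        else:  # "subtractive"
            visible = obj["visible"] and not inside
        if visible:
            shown.append(obj["name"])
    return shown
```

Running the pass once per layer, with each layer bound to its own slicing volume or hand, reproduces the staged reveal of FIGS. 16-19 (first reveal the vessels, then subtract them to expose the heart).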
In some examples, different layers of holographic objects may be assigned to different hands of a user, such that movement of a hand operates to manipulate a subset of objects that are associated with the particular layer of objects assigned to that hand. In some examples, a developer or user may customize different layers of objects to be associated with particular hands or with particular affordances. With reference now to the example ofFIGS.20and21, in some examples a user may use both hands to manipulate a spherical slicing volume within a holographic volume. As described below, the user may utilize his left hand404and right hand408to manipulate a spherical slicing volume within the model house400. It will also be appreciated that a spherical slicing volume as described herein also may be utilized with a virtual reality experience in which the HMD102comprises a non-see-through display. As shown inFIG.21, when the user's hands404,408extend into the displayed model house400, a spherical slicing volume indicated at610is generated having a volume that corresponds to a distance between the user's hands. In some examples, the center of the spherical slicing volume610may be the midpoint between the user's hands. For example, a location on the palm of each hand may be tracked by the HMD102, and the midpoint between these locations may be selected as the center of the sphere. In some examples and as shown inFIG.21, an affordance indicating the boundary of the spherical slicing volume610may be displayed. For example, a glowing, translucent globe corresponding to the current shape of the spherical slicing volume may be displayed. In other examples, any suitable affordance may be displayed to indicate the spherical slicing volume. As shown inFIG.21, within the spherical slicing volume610holographic objects that were previously occluded from view, such as the sofa418and ottoman480, are revealed and displayed to the user.
The volume of the spherical slicing volume610is based on the distance between tracked locations on the user's hands. When the user expands the distance between his hands, the diameter630of the spherical slicing volume610is correspondingly increased, and additional holographic objects and/or portions of such objects that were previously occluded from display are now displayed.FIG.22shows an example of the user expanding the distance between his hands404,408to reveal additional holographic objects in the room, such as the coat rack650and chair654. Correspondingly, when the user reduces the distance between his hands, the diameter630of the spherical slicing volume610is correspondingly reduced, and previously displayed holographic objects and/or portions of such objects may be occluded and/or removed from being displayed. Accordingly, in some examples, a spherical slicing volume may be generated by receiving first location data of at least a portion of a first hand and second location data of at least a portion of a second hand. Based on the first location data and the second location data, a change in distance between the first and the second hand is determined. Based at least on the change in distance between the first hand and the second hand, at least a portion of an additional holographic object associated with the holographic volume is then displayed. In other examples, a spherical slicing volume may be generated and manipulated via a single hand of the user. For example and with reference now toFIG.23, a user may make a pinch gesture touching her index finger802to her thumb804, then separate and expand the distance between the finger and thumb to generate a spherical slicing volume808. As noted above, in other examples any suitable shape of slicing volume may be generated and manipulated between two hands or two digits of one hand as described herein. For example,FIG.24shows one example of generating and manipulating a cuboid volume812with one hand. 
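The spherical slicing volume described above, centered at the midpoint between the tracked palm locations with a diameter equal to the distance between the hands, can be sketched as follows (function names are illustrative):

```python
import numpy as np

def spherical_slicing_volume(palm_left, palm_right):
    """Sphere centered at the midpoint between the tracked palm locations;
    its diameter equals the distance between the hands."""
    l = np.asarray(palm_left, dtype=float)
    r = np.asarray(palm_right, dtype=float)
    center = (l + r) / 2.0
    radius = float(np.linalg.norm(r - l)) / 2.0   # diameter = hand separation
    return center, radius

def revealed(obj_center, center, radius):
    """Objects inside the sphere are displayed; others remain occluded."""
    return bool(np.linalg.norm(np.asarray(obj_center, dtype=float) - center) <= radius)
```

Recomputing the sphere each frame from fresh palm locations gives the resizing behavior directly: spreading the hands enlarges the radius and reveals more objects, and bringing them together shrinks it again.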
In some examples, additional data may be displayed with a slicing plane or slicing volume. For example and with a slicing plane, dimension data may be displayed that shows the distance from the plane to a designated starting point in the holographic volume or elsewhere in the environment. With reference again toFIG.5, in one example the distance D from slicing plane406to the holographic interior wall may be displayed via HMD102. In examples of a spherical slicing volume, the current volume, radius and/or diameter of the sphere may be displayed. For example and with reference toFIG.21, the diameter630of the spherical slicing volume610may be displayed between the user's hands. In some examples, multiple users of different HMDs may share hand tracking data and/or slicing planes/volumes based on their hand tracking data. For example, where two users of two different HMDs are viewing the same holographic volume, each device/user may have different roles/functionalities in how they manipulate the volume in an additive or subtractive manner. In one example, a first HMD and first user may manipulate a slicing plane(s) within the holographic house model400as described above with reference toFIGS.4-7and8-11, and the results of such manipulations may be shared with and displayed by the second HMD to the second user. Similarly, the second HMD and second user may manipulate a spherical slicing volume within the holographic house model400as described above with reference toFIGS.20-21, and the results of such manipulations may be shared with and displayed by the first HMD to the first user. It will be appreciated that many other variations and combinations of sharing slicing planes/volumes among multiple devices/users are possible and within the scope of the present disclosure. FIG.25is a block diagram illustrating an example use environment902comprising a display system900.
In some examples, the display system900may comprise a head-mounted display device, such as the augmented reality display device102ofFIGS.1through24. In other examples, the display system900may comprise a virtual reality display system or a video augmented reality system.FIG.25illustrates example components and modules that may be used to display and manipulate slicing planes and volumes in the manners disclosed above, and omits other components for clarity. In some examples, all logic may be executed locally on the display system900. In other examples, some logic may be executed remotely, such as by one or more remotely located computing devices904via a computer network906, or by another local device (e.g. a network edge device). In different examples of display systems according to the present disclosure, one or more components and/or modules of display system900may be omitted, and one or more additional components and/or modules may be added. The display system900may comprise one or more image sensors908configured to capture image data of the real-world surroundings. The one or more image sensors include a depth image sensor(s)910configured to capture depth image data, and optionally may include a visible light image sensor(s)912configured to capture visible light image data. Examples of suitable depth sensors for use as depth image sensor910include a time of flight camera, a depth camera, and a stereo camera arrangement. Examples of suitable visible light image sensors for use as visible light sensors912include an RGB camera and a grayscale camera. The display system900further comprises computing hardware, such as memory and logic devices, examples of which are described below in the context ofFIG.27. Various software, firmware, and/or hardware modules may be implemented in such computing hardware.
For example, the display system900may comprise a scene mapping module914configured to receive image data (depth and optionally visible light) from the one or more image sensors908and generate a three-dimensional surface reconstruction or other depth map of the use environment902based at least on the image data received. The display system900may store the depth map generated by the scene mapping module914as physical scene data918. The physical scene data918includes surface data920. In some examples, the surface data920may comprise a surface reconstruction (e.g. a mesh representation of the surface), and further may comprise processed depth data in which portions of mesh data are replaced with planes corresponding to identified surfaces. In addition to physical scene data918, the display system900may store holographic object data924comprising information regarding holographic objects associated with applications that are executable by the display system900. The depicted holographic object data924comprises data for each of one or more holographic objects, indicated as objects1through N. Data stored for each object926may comprise instructions for displaying the object, and may specify a size, a shape, a color, and/or other characteristics for displaying the object. The display system900may further comprise a gesture detection module934configured to receive image data (depth and/or visible light) from the one or more image sensors908and process the image data via an image processing component936to detect possible user gestures. The image processing component936may comprise a skeletal classifier938configured to detect and classify an object as a skeleton or part of a skeleton. For example, the skeletal classifier938may fit a skeletal model to depth image data received in which a skeleton is represented by a collection of nodes that represent locations of the human body and that are connected in a form that approximates the form of the human body. 
In a more specific example, the skeletal classifier938may be configured to detect and classify a hand or other appendage(s) of a user when the appendage(s) is within a field of view of the image sensor(s)908. In some examples, articulated hand data may be generated to represent detailed positions and orientations of a user's hand(s). The image processing component936may comprise one or more gesture filters940configured to detect gestures performed by a user. Example gesture filters940include one or more filters for recognizing a user grasping gesture(s) (e.g. a grab, a pinch, etc.) and one or more filters for a user release gesture(s) (e.g., a reverse grab, reverse pinch, etc.). The display system900may further comprise a holographic volume interaction module942configured to detect user manipulations of slicing planes and volumes described herein, as well as user interactions with displayed holographic volumes and objects that are intended to reveal and hide other holographic objects located within a holographic volume as described herein. The holographic volume interaction module942may receive gesture data from the gesture detection module934, physical scene information from the physical scene data918, and also receive holographic object data924, e.g. regarding the locations of displayed holographic objects compared to the holographic volume and/or real-world surfaces and objects (e.g. user fingers, tables, floor, walls, etc.). Physical scene data918may include articulated hand location data from one or more hands, which may be used to determine the location, size, and other parameters of a slicing plane or slicing volume as described herein. Using this data and information, the holographic volume interaction module942then outputs, to one or more displays950, the holographic objects and/or portions of holographic objects described herein, including holographic objects revealed via movement and/or relocation of a slicing plane or volume. 
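A minimal sketch of a pinch-style gesture filter such as those described above, driven by distances between articulated hand joints; the joint names and thresholds here are hypothetical, not those of any particular tracking API:

```python
import math

def detect_pinch(joints, threshold=0.02):
    """A pinch is recognized when the index fingertip and thumb tip
    (hypothetical joint names) come within about 2 cm of each other."""
    return math.dist(joints["index_tip"], joints["thumb_tip"]) < threshold

def detect_release(joints, threshold=0.05):
    """A reverse pinch: the fingertips separate beyond a larger threshold,
    giving hysteresis so the gesture does not flicker at the boundary."""
    return math.dist(joints["index_tip"], joints["thumb_tip"]) > threshold
```

A grab filter could be built the same way from the curl of multiple finger joints toward the palm; in either case, the filter output would feed the holographic volume interaction module described above.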
The holographic volume interaction module 942 also may utilize this data and information to selectively occlude or not display certain holographic objects and/or portions of objects as a function of movement and/or relocation of a slicing plane or volume as described herein. The one or more displays 950 may be see-through with respect to a real-world background, or may be opaque. In addition to a display(s) 950, the display system 900 may comprise one or more other output devices and/or input devices. For example, the display system 900 may include one or more speakers 952 configured to output audio, one or more microphones 954, and various other input and output devices not shown in FIG. 25. With reference now to FIGS. 26A and 26B, a flow diagram depicting an example method 700 for displaying holographic objects using a first hand and a second hand is provided. The following description of method 700 is provided with reference to the components described herein and shown in FIGS. 1-25 and 27, but it will be appreciated that method 700 also may be performed in other contexts using other suitable components. At 704, the method 700 may include displaying via a display system a holographic object associated with a holographic volume. At 708, the method 700 may include receiving, from a sensor, first location data of at least a portion of a first hand and second location data of at least a portion of a second hand. At 712, the method 700 may include determining, based on the first location data and the second location data, a change in distance between the first hand and the second hand. At 716, the method 700 may include, based at least on the change in distance between the first hand and the second hand, displaying via the display system at least a portion of an additional holographic object associated with the holographic volume. At 720, the method 700 may include generating a slicing volume between the first hand and the second hand.
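The distance-based behavior of method 700 can be sketched as follows, modeling the slicing volume as a sphere spanning the gap between the two hands (one of the shapes the disclosure mentions); the object names, coordinates, and helper names are illustrative assumptions:

```python
import math

def slicing_sphere(hand_a, hand_b):
    """Generate a spherical slicing volume between two hand positions:
    centered at the midpoint, radius equal to half the hand-to-hand distance."""
    center = tuple((a + b) / 2 for a, b in zip(hand_a, hand_b))
    radius = math.dist(hand_a, hand_b) / 2
    return center, radius

def revealed_objects(objects, center, radius):
    """Objects whose positions fall inside the slicing volume are displayed."""
    return [name for name, pos in objects.items() if math.dist(pos, center) <= radius]

objects = {"gear": (0.0, 0.0, 0.5), "piston": (0.4, 0.0, 0.5)}

# Hands close together: a small volume reveals only the nearest object.
c, r = slicing_sphere((-0.05, 0, 0.5), (0.05, 0, 0.5))
print(revealed_objects(objects, c, r))  # → ['gear']

# Hands move apart: the volume grows and reveals an additional object;
# moving them back together would shrink it and hide that object again.
c, r = slicing_sphere((-0.5, 0, 0.5), (0.5, 0, 0.5))
print(revealed_objects(objects, c, r))  # → ['gear', 'piston']
```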
At 724, the method 700 may include, wherein the slicing volume is defined by a first slicing plane parallel to a surface of the first hand and a second slicing plane parallel to a surface of the second hand. At 728, the method 700 may include, wherein the first slicing plane is maintained parallel to the surface of the first hand and the second slicing plane is maintained parallel to the surface of the second hand during movement of the hands. At 732, the method 700 may include, wherein the slicing volume comprises a spherical volume or a polyhedral volume. At 736, the method 700 may include modifying a volume of the slicing volume based on the change in distance between the first hand and the second hand. At 740, the method 700 may include, based on the modified volume of the slicing volume, displaying via the display system at least a portion of the additional holographic object. With reference now to FIG. 26B, at 744, the method 700 may include, after displaying via the display system the portion of the additional holographic object, reducing the volume of the slicing volume based on a reduction in distance between the first hand and the second hand. At 748, the method 700 may include, based on the reduced volume of the slicing volume, ceasing to display via the display system the portion of the additional holographic object. In some embodiments, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product. FIG. 27 schematically shows an example computing system 1100 that can enact one or more of the methods and processes described above. Computing system 1100 is shown in simplified form.
Computing system 1100 may take the form of one or more personal computers, server computers, tablet computers, home-entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smart phone), head-mounted display devices (e.g. augmented reality display systems 102 and 900), and/or other computing devices. Computing system 1100 includes a logic machine 1102 and a storage machine 1104. Computing system 1100 may optionally include a display subsystem 1106, input subsystem 1108, communication subsystem 1110, and/or other components not shown in FIG. 27. Logic machine 1102 includes one or more physical devices configured to execute instructions. For example, the logic machine may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result. The logic machine may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic machine may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the logic machine may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic machine optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic machine may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage machine 1104 includes one or more physical devices configured to hold instructions executable by the logic machine to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage machine 1104 may be transformed—e.g., to hold different data. Storage machine 1104 may include removable and/or built-in devices. Storage machine 1104 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage machine 1104 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. It will be appreciated that storage machine 1104 includes one or more physical devices. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration. Aspects of logic machine 1102 and storage machine 1104 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example. The terms “module,” “program,” and “engine” may be used to describe an aspect of computing system 1100 implemented to perform a particular function. In some cases, a module, program, or engine may be instantiated via logic machine 1102 executing instructions held by storage machine 1104.
It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms “module,” “program,” and “engine” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc. It will be appreciated that a “service”, as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server-computing devices. When included, display subsystem 1106 may be used to present a visual representation of data held by storage machine 1104. This visual representation may take the form of a graphical user interface (GUI). As the herein-described methods and processes change the data held by the storage machine, and thus transform the state of the storage machine, the state of display subsystem 1106 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 1106 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic machine 1102 and/or storage machine 1104 in a shared enclosure, or such display devices may be peripheral display devices. When included, input subsystem 1108 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board.
Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity. When included, communication subsystem 1110 may be configured to communicatively couple computing system 1100 with one or more other computing devices. Communication subsystem 1110 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow computing system 1100 to send and/or receive messages to and/or from other devices via a network such as the Internet. Another example provides a computing device, comprising: a logic subsystem comprising a processor; and memory storing instructions executable by the logic subsystem to: display via a display system a holographic object associated with a holographic volume, the holographic object occluding an occluded holographic object that is not displayed; receive, from a sensor, location data of at least a portion of a hand; use the location data of the hand to locate a slicing plane or a slicing volume within the holographic volume; and based on the location of the slicing plane or the slicing volume, display via the display system at least a portion of the occluded holographic object.
The computing device may additionally or alternatively include, wherein the instructions are executable to: use the location data of the hand to locate the slicing plane; and define a revealing direction relative to the slicing plane, wherein the occluded holographic object is displayed based on being located in the revealing direction from the slicing plane. The computing device may additionally or alternatively include, wherein the instructions are executable to locate the slicing plane substantially parallel with an upper surface of the hand. The computing device may additionally or alternatively include, wherein the instructions are executable to maintain the slicing plane substantially parallel with the upper surface of the hand during movement of the hand. The computing device may additionally or alternatively include, wherein the instructions are executable to align the slicing plane with a closest coordinate plane of three mutually orthogonal coordinate planes. The computing device may additionally or alternatively include, wherein the instructions are executable to maintain alignment of the slicing plane with the closest coordinate plane during movement of the hand. The computing device may additionally or alternatively include, wherein the location data comprises articulated hand data. The computing device may additionally or alternatively include, wherein the location data comprises articulated hand data of two digits of the hand, and the instructions are executable to: use the articulated hand data of the two digits of the hand to locate the slicing volume within the holographic volume; and based on the location of the slicing volume, display via the display system at least a portion of the occluded holographic object. 
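To make the geometry described above concrete, the sketch below shows one way a slicing plane could be aligned with the closest of three mutually orthogonal coordinate planes, and how an occluded object could be tested for lying in the revealing direction. All helper names are hypothetical, not from the disclosure:

```python
import numpy as np

def snap_to_closest_coordinate_plane(normal):
    """Align a slicing plane with the closest coordinate plane: pick the
    coordinate axis whose unit vector has the largest |dot product| with
    the plane's normal, preserving the original facing direction."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    axis = int(np.argmax(np.abs(n)))  # dominant component of the normal
    snapped = np.zeros(3)
    snapped[axis] = 1.0 if n[axis] >= 0 else -1.0
    return snapped

def is_revealed(point, plane_point, plane_normal):
    """An occluded object is displayed when it lies in the revealing
    direction from the slicing plane (positive signed distance)."""
    offset = np.asarray(point, float) - np.asarray(plane_point, float)
    return float(np.dot(offset, plane_normal)) > 0.0

# A hand whose upper surface tilts mostly upward snaps to the horizontal plane.
print(snap_to_closest_coordinate_plane([0.1, 0.9, 0.2]))  # → [0. 1. 0.]
print(is_revealed((0, 1, 0), (0, 0, 0), (0, 1, 0)))       # → True
```

Re-running the snap on every frame of hand-tracking data would maintain the alignment during movement of the hand, as the claims describe.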
The computing device may additionally or alternatively include, wherein the hand is a left hand, the slicing plane is a left hand slicing plane, and the instructions are executable to: receive, from the sensor, location data of at least a portion of a right hand; use the location data of the right hand to locate a right hand slicing plane within the holographic volume; and based on the location of the left hand slicing plane and the right hand slicing plane, display via the display system at least a portion of the occluded holographic object. The computing device may additionally or alternatively include, wherein the holographic volume comprises a plurality of layers that each comprise one or more holographic objects, and the instructions are executable to: display a first layer of a plurality of holographic objects via manipulation of the slicing plane or the slicing volume; and cease displaying one or more of the plurality of holographic objects previously displayed in the first layer based on manipulation of another slicing plane or another slicing volume to display a second layer of the plurality of layers of one or more holographic objects. The computing device may additionally or alternatively include, wherein the instructions are executable to display via the display system an affordance indicating the slicing plane or the slicing volume.
Another example provides a method enacted on a computing device, the method comprising: displaying via a display system a holographic object associated with a holographic volume; receiving, from a sensor, first location data of at least a portion of a first hand and second location data of at least a portion of a second hand; determining, based on the first location data and the second location data, a change in distance between the first hand and the second hand; and based at least on the change in distance between the first hand and the second hand, displaying via the display system at least a portion of an additional holographic object associated with the holographic volume. The method may additionally or alternatively include generating a slicing volume between the first hand and the second hand; modifying a volume of the slicing volume based on the change in distance between the first hand and the second hand; and based on the modified volume of the slicing volume, displaying via the display system at least a portion of the additional holographic object. The method may additionally or alternatively include, wherein the slicing volume is defined by a first slicing plane parallel to a surface of the first hand and a second slicing plane parallel to a surface of the second hand. The method may additionally or alternatively include, wherein the first slicing plane is maintained parallel to the surface of the first hand and the second slicing plane is maintained parallel to the surface of the second hand during movement of the hands. The method may additionally or alternatively include, wherein the slicing volume comprises a spherical volume or a polyhedral volume.
The method may additionally or alternatively include, after displaying the portion of the additional holographic object, reducing the volume of the slicing volume based on a reduction in distance between the first hand and the second hand; and based on the reduced volume of the slicing volume, ceasing to display via the display system the portion of the additional holographic object. Another example provides a head-mounted display device, comprising: a see-through display system; a logic subsystem comprising one or more processors; and memory storing instructions executable by the logic subsystem to: display via the see-through display system a holographic object associated with a holographic volume, the holographic object occluding an occluded holographic object that is not displayed; receive, from a sensor, location data of at least a portion of a hand; use the location data of the hand to locate a slicing plane or a slicing volume within the holographic volume; and based on the location of the slicing plane or the slicing volume, display via the see-through display system at least a portion of the occluded holographic object. The head-mounted display device may additionally or alternatively include, wherein the instructions are executable to: use the location data of the hand to locate the slicing plane; and define a revealing direction relative to the slicing plane, wherein the occluded holographic object is displayed based on being located in the revealing direction from the slicing plane.
The head-mounted display device may additionally or alternatively include, wherein the hand is a left hand, the slicing plane is a left hand slicing plane, and the instructions are executable to: receive, from the sensor, location data of at least a portion of a right hand; use the location data of the right hand to locate a right hand slicing plane within the holographic volume; and based on the location of the left hand slicing plane and the right hand slicing plane, display via the display system at least a portion of the occluded holographic object. It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed. The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof. | 53,178 |
11860573 | DETAILED DESCRIPTION Embodiments consistent with the present disclosure will be described with reference to the accompanying drawings, which are merely examples for illustrative purposes and are not intended to limit the scope of the present disclosure. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or similar parts, and a detailed description thereof may be omitted. Further, in the present disclosure, the disclosed embodiments and the features of the disclosed embodiments may be combined. The described embodiments are some but not all of the embodiments of the present disclosure. Based on the disclosed embodiments, persons of ordinary skill in the art may derive other embodiments consistent with the present disclosure. For example, modifications, adaptations, substitutions, additions, or other variations may be made based on the disclosed embodiments. Such variations of the disclosed embodiments are still within the scope of the present disclosure. Accordingly, the present disclosure is not limited to the disclosed embodiments. Instead, the scope of the present disclosure is defined by the appended claims. As used herein, the terms “couple,” “coupled,” “coupling,” or the like may encompass an optical coupling, a mechanical coupling, an electrical coupling, an electromagnetic coupling, or any combination thereof. An “optical coupling” between two optical elements refers to a configuration in which the two optical elements are arranged in an optical series, and a light (or beam) output from one optical element may be directly or indirectly received by the other optical element. An optical series refers to optical positioning of a plurality of optical elements in a light path, such that a light output from one optical element may be transmitted, reflected, diffracted, converted, modified, or otherwise processed or manipulated by one or more of other optical elements. 
In some embodiments, the sequence in which the plurality of optical elements are arranged may or may not affect an overall output of the plurality of optical elements. A coupling may be a direct coupling or an indirect coupling (e.g., coupling through an intermediate element). The phrase “at least one of A or B” may encompass all combinations of A and B, such as A only, B only, or A and B. Likewise, the phrase “at least one of A, B, or C” may encompass all combinations of A, B, and C, such as A only, B only, C only, A and B, A and C, B and C, or A and B and C. The phrase “A and/or B” may be interpreted in a manner similar to that of the phrase “at least one of A or B.” For example, the phrase “A and/or B” may encompass all combinations of A and B, such as A only, B only, or A and B. Likewise, the phrase “A, B, and/or C” has a meaning similar to that of the phrase “at least one of A, B, or C.” For example, the phrase “A, B, and/or C” may encompass all combinations of A, B, and C, such as A only, B only, C only, A and B, A and C, B and C, or A and B and C. When a first element is described as “attached,” “provided,” “formed,” “affixed,” “mounted,” “secured,” “connected,” “bonded,” “recorded,” or “disposed,” to, on, at, or at least partially in a second element, the first element may be “attached,” “provided,” “formed,” “affixed,” “mounted,” “secured,” “connected,” “bonded,” “recorded,” or “disposed,” to, on, at, or at least partially in the second element using any suitable mechanical or non-mechanical manner, such as depositing, coating, etching, bonding, gluing, screwing, press-fitting, snap-fitting, clamping, etc. In addition, the first element may be in direct contact with the second element, or there may be an intermediate element between the first element and the second element. The first element may be disposed at any suitable side of the second element, such as left, right, front, back, top, or bottom. 
When the first element is shown or described as being disposed or arranged “on” the second element, the term “on” is merely used to indicate an example relative orientation between the first element and the second element. The description may be based on a reference coordinate system shown in a figure, or may be based on a current view or example configuration shown in a figure. For example, when a view shown in a figure is described, the first element may be described as being disposed “on” the second element. It is understood that the term “on” may not necessarily imply that the first element is over the second element in the vertical, gravitational direction. For example, when the assembly of the first element and the second element is turned 180 degrees, the first element may be “under” the second element (or the second element may be “on” the first element). Thus, it is understood that when a figure shows that the first element is “on” the second element, the configuration is merely an illustrative example. The first element may be disposed or arranged at any suitable orientation relative to the second element (e.g., over or above the second element, below or under the second element, to the left of the second element, to the right of the second element, behind the second element, in front of the second element, etc.). When the first element is described as being disposed “on” the second element, the first element may be directly or indirectly disposed on the second element. The first element being directly disposed on the second element indicates that no additional element is disposed between the first element and the second element. The first element being indirectly disposed on the second element indicates that one or more additional elements are disposed between the first element and the second element.
The term “processor” used herein may encompass any suitable processor, such as a central processing unit (“CPU”), a graphics processing unit (“GPU”), an application-specific integrated circuit (“ASIC”), a programmable logic device (“PLD”), or any combination thereof. Other processors not listed above may also be used. A processor may be implemented as software, hardware, firmware, or any combination thereof. The term “controller” may encompass any suitable electrical circuit, software, or processor configured to generate a control signal for controlling a device, a circuit, an optical element, etc. A “controller” may be implemented as software, hardware, firmware, or any combination thereof. For example, a controller may include a processor, or may be included as a part of a processor. The term “non-transitory computer-readable medium” may encompass any suitable medium for storing, transferring, communicating, broadcasting, or transmitting data, signal, or information. For example, the non-transitory computer-readable medium may include a memory, a hard disk, a magnetic disk, an optical disk, a tape, etc. The memory may include a read-only memory (“ROM”), a random-access memory (“RAM”), a flash memory, etc. The terms “film” and “layer” may include rigid or flexible, self-supporting or free-standing films, coatings, or layers, which may be disposed on a supporting substrate or between substrates. The term “layer” used herein may be in any suitable form, such as a coating, film, or plate. In some situations, the term “layer” may be interchangeable with the term “coating,” “film,” and/or “plate.” The phrases “in-plane direction,” “in-plane orientation,” “in-plane rotation,” “in-plane alignment pattern,” and “in-plane pitch” refer to a direction, an orientation, a rotation, an alignment pattern, and a pitch in a plane of a film or a layer (e.g., a surface plane of the film or layer, or a plane parallel to the surface plane of the film or layer), respectively.
The term “out-of-plane direction” indicates a direction that is non-parallel to the plane of the film or layer (e.g., perpendicular to the surface plane of the film or layer, e.g., perpendicular to a plane parallel to the surface plane). For example, when an “in-plane” direction refers to a direction within a surface plane, an “out-of-plane” direction may refer to a thickness direction perpendicular to the surface plane, or a direction that is not parallel with the surface plane. The term “orthogonal” as used in “orthogonal polarizations” or the term “orthogonally” as used in “orthogonally polarized” means that an inner product of two vectors representing the two polarizations is substantially zero. For example, two lights (or beams) with orthogonal polarizations or two orthogonally polarized lights may be two linearly polarized lights with polarizations in two orthogonal directions (e.g., an x-axis direction and a y-axis direction in a Cartesian coordinate system) or two circularly polarized lights with opposite handednesses (e.g., a left-handed circularly polarized light and a right-handed circularly polarized light). In the present disclosure, an angle of a light or beam (e.g., a diffraction angle of a diffracted beam or an incidence angle of an incident beam) with respect to a normal of a surface can be defined as a positive angle or a negative angle, depending on the positional relationship between a propagation direction of the beam and the normal of the surface. For example, when the propagation direction of the beam is clockwise from the normal, the angle of the propagation direction may be defined as a positive angle, and when the propagation direction of the beam is counter-clockwise from the normal, the angle of the propagation direction may be defined as a negative angle. 
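The orthogonality definition above can be checked numerically with Jones vectors, where the inner product is the Hermitian (conjugated) dot product. The normalization and handedness conventions below are common textbook choices, not specified by the disclosure:

```python
import numpy as np

# Jones vectors for the polarization states discussed (assumed conventions).
x_linear = np.array([1, 0], dtype=complex)
y_linear = np.array([0, 1], dtype=complex)
lcp = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # left-handed circular
rcp = np.array([1, -1j], dtype=complex) / np.sqrt(2)  # right-handed circular

def are_orthogonal(a, b, tol=1e-12):
    """Two polarizations are orthogonal when their Hermitian inner product
    is (substantially) zero; np.vdot conjugates its first argument."""
    return abs(np.vdot(a, b)) < tol

print(are_orthogonal(x_linear, y_linear))  # → True  (orthogonal linear pair)
print(are_orthogonal(lcp, rcp))            # → True  (opposite handednesses)
print(are_orthogonal(x_linear, lcp))       # → False
```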
Polarization holography (or polarization interference) is a process widely used to fabricate polarization holograms, such as polarization hologram elements based on liquid crystals, and those based on birefringent photo-refractive holographic materials other than liquid crystals. Polarization holograms are polarization selective optical elements (“PSOEs”) fabricated via polarization holography. Polarization holograms may be fabricated to have a short in-plane pitch, e.g., within a sub-micron range and comparable to visible wavelengths. Polarization holography entails a polarization interference between two beams with different polarizations (or the same polarization) in order to generate a spatially varying polarization field or a polarization interference pattern in space. When the polarization holography is used to fabricate multiple polarization holograms with varying in-plane pitches and varying orientations (e.g., orientations of grating fringes) on a single substrate (e.g., on one or both sides of a wafer), the processes of recording the multiple polarization holograms one by one on the substrate, and precisely aligning the multiple polarization holograms during the fabrication are time consuming and challenging in conventional technologies. In view of the limitations of conventional methods for fabricating PSOEs, the present disclosure provides a more efficient and cost-effective system and method for fabricating PSOEs, such as polarization hologram elements. The system may include a mask configured to split (e.g., forward diffract) an input beam into two polarized beams, which may be used in the polarization holography for generating a polarization interference pattern. A mask is an optical element used in the polarization holography, and configured with predetermined optical structures, such as predetermined microstructures, predetermined sub-wavelength structures, or predetermined optic axis orientation pattern, etc. 
In some embodiments, the mask may forwardly diffract an input beam into two diffracted polarized beams, at least one of which may carry (or may be encoded with) the optical properties or optical information of the mask. For example, the two polarized beams may be referred to as a signal beam and a reference beam. The signal beam may carry predetermined optical properties of the mask, such as those determined by the predetermined optical structures (e.g., predetermined microstructures, predetermined sub-wavelength structures, or predetermined optic axis orientation pattern, etc.). The reference beam may not carry (or may carry an insignificant amount of) the optical properties of the mask. In some embodiments, the mask may be a polarization selective optical element, such as a PVH mask, a PBP mask, or a polarization selective SRG mask. In some embodiments, the mask may be a non-polarization selective optical element, such as a non-polarization selective SRG. In some embodiments of the present disclosure, a polarization conversion optical element (or polarization conversion element) may be disposed between the mask and a recording medium layer. The polarization conversion element may be configured to convert a polarization of at least one of the two polarized beams to a desirable polarization. For example, the polarization conversion element may be configured to receive the two polarized beams from the mask and output two polarized beams with opposite handednesses, which may interfere with one another in space to generate a polarization interference pattern. The polarization interference pattern may be recorded in the recording medium layer after the recording medium layer is sufficiently exposed to the polarization interference pattern. The recording medium layer may include a surface recording medium or a volume (or bulk) recording medium. The surface recording medium and the volume recording medium may be polarization sensitive recording media.
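As a rough numerical illustration of such a polarization interference pattern, two counter-rotating circularly polarized plane waves crossing at ±θ produce a linear polarization whose orientation rotates along the surface, with an in-plane pitch of λ/(2 sin θ). This is a standard textbook model; the wavelength and angle below are assumed example values, not parameters from the disclosure:

```python
import numpy as np

wavelength = 0.532            # microns (assumed green recording beam)
theta = np.radians(15)        # assumed half-angle between the two beams
k_x = 2 * np.pi * np.sin(theta) / wavelength

# The recorded optic-axis orientation varies linearly with position x,
# wrapping every 180 degrees; its spatial period is the in-plane pitch.
x = np.linspace(0, 2, 5)
orientation_deg = np.degrees((k_x * x) % np.pi)

period = np.pi / k_x          # in-plane pitch Λ = λ / (2 sin θ)
print(round(period, 3))       # → 1.028 (microns)
```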
The exposed recording medium layer including the surface recording medium may be further used as a photo-alignment material (“PAM”) layer for a birefringent medium layer subsequently disposed on the recording medium layer. The birefringent medium layer (and the recording medium layer) may form a fabricated PSOE. The exposed recording medium layer including the volume recording medium may itself form a fabricated PSOE. The optical property of the fabricated PSOE may be determined, in part, by the polarization interference pattern, which includes the optical information of the mask. The mask used in the systems and methods of the present disclosure may be any suitable mask, such as an SRG (which may be polarization selective or polarization non-selective) mask, a PBP mask, a PVH mask, etc. The mask may be configured to forwardly diffract an input beam as two polarized beams, a signal beam and a reference beam. In some embodiments, an SRG mask may forwardly diffract an input beam as two linearly polarized beams. In some embodiments, a PVH mask may forwardly diffract an input beam as two circularly polarized beams. In some embodiments, the SRG may function or operate as an optically isotropic grating. For example, the SRG may be a polarization non-selective grating. In some embodiments, the SRG may function or operate as an optically anisotropic grating. For example, the SRG may be a polarization selective grating. In some embodiments, the SRG may be fabricated based on an inorganic material, such as metals or oxides. In some embodiments, the input beam may be a polarized beam having a wavelength λ. In some embodiments, the input beam may be decomposed into two linearly polarized components with a substantially same beam or light intensity and a suitable phase delay between the two linearly polarized components. For example, the input beam may be a linearly polarized beam, a circularly polarized beam, or an elliptically polarized beam, etc.
In some embodiments, the input beam may be a collimated beam. In some embodiments, the input beam may be incident onto the SRG at an incidence angle θI. The SRG may be configured to substantially forwardly diffract the input beam as a 0th order diffracted beam and a −1st order diffracted beam. In some embodiments, the 0th order diffracted beam and the −1st order diffracted beam may have a wavelength that is substantially the same as the wavelength of the input beam. In some embodiments, the 0th order diffracted beam and the −1st order diffracted beam may be linearly polarized beams having orthogonal linear polarizations. For example, the 0th order diffracted beam may be an s-polarized beam, and the −1st order diffracted beam may be a p-polarized beam. In some embodiments, the 0th order diffracted beam and the −1st order diffracted beam may be linearly polarized beams having a substantially same linear polarization. In some embodiments, the SRG may be configured to operate at a Littrow configuration for the input beam. Diffraction angles of the 0th order diffracted beam and the −1st order diffracted beam may have a substantially same absolute value and opposite signs. The diffraction angle of the 0th order diffracted beam may be substantially equal to the incidence angle of the input beam. An angle formed between the 0th order diffracted beam and the −1st order diffracted beam may have an absolute value that is about twice the absolute value of the incidence angle of the input beam. In some embodiments, the SRG operating at the Littrow configuration may also substantially backwardly diffract the input beam into a +1st order diffracted beam. A diffraction angle of the +1st order diffracted beam may be substantially equal to the incidence angle of the input beam. That is, the +1st order diffracted beam may propagate in a direction opposite to the propagating direction of the input beam.
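The Littrow geometry described above can be checked with a short numerical sketch. The snippet below is illustrative only and not part of the disclosure; it assumes the standard first-order Littrow relation for a grating in air, Λ = λ/(2·sin θ), and uses example values for the wavelength and incidence angle.

```python
import math

def littrow_period(wavelength_nm: float, incidence_deg: float) -> float:
    """Grating period (nm) placing a grating in air at the Littrow
    condition for first-order diffraction: period = wavelength / (2*sin(theta))."""
    return wavelength_nm / (2.0 * math.sin(math.radians(incidence_deg)))

def beam_separation_deg(incidence_deg: float) -> float:
    """Angle between the 0th order beam (diffraction angle +theta) and the
    -1st order beam (-theta): about twice the incidence angle."""
    return 2.0 * incidence_deg

# Illustrative values: a 400 nm input beam incident at 30 degrees.
period = littrow_period(400.0, 30.0)
angle = beam_separation_deg(30.0)
print(f"Littrow period: {period:.1f} nm, beam separation: {angle:.1f} deg")
```

With these assumed values the relation gives a 400 nm period and a 60° angle between the two forward-diffracted beams, consistent with the twice-the-incidence-angle statement above.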
In some embodiments, the 0th order diffracted beam and the −1st order diffracted beam may have a substantially equal light intensity. In some embodiments, the 0th order diffracted beam and the −1st order diffracted beam may have different light intensities. The system may also include a waveplate optically coupled to the SRG and configured to receive the two linearly polarized beams (e.g., the 0th order diffracted beam and the −1st order diffracted beam) from the SRG. In some embodiments, the waveplate may be directly optically coupled to the SRG without another optical element disposed therebetween. In some embodiments, the waveplate may be directly optically coupled to the SRG without a gap therebetween. In some embodiments, the waveplate may be indirectly optically coupled to the SRG with another optical element disposed therebetween, which may or may not alter at least one of the propagation direction or the polarization of the 0th order diffracted beam and the −1st order diffracted beam. The waveplate may be configured to convert the two linearly polarized beams (e.g., the 0th order diffracted beam and the −1st order diffracted beam) into two circularly polarized beams having orthogonal circular polarizations. In some embodiments, the waveplate may function as a quarter-wave plate (“QWP”) for the 0th order diffracted beam and the −1st order diffracted beam having the same wavelength as the input beam, and convert the 0th order diffracted beam and the −1st order diffracted beam into two circularly polarized beams with opposite handednesses, e.g., a right-handed circularly polarized (“RHCP”) beam and a left-handed circularly polarized (“LHCP”) beam. In some embodiments, an angle formed between the two circularly polarized beams with opposite handednesses may be substantially equal to the angle formed between the 0th order diffracted beam and the −1st order diffracted beam.
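The linear-to-circular conversion performed by the waveplate can be sketched with Jones calculus. This is a minimal illustration, not part of the disclosure, assuming a quarter-wave plate whose fast axis is oriented at 45° to the two incoming linear polarizations; it verifies that orthogonal linearly polarized inputs emerge as circular states of opposite handedness.

```python
import numpy as np

def qwp(theta_deg: float) -> np.ndarray:
    """Jones matrix of a quarter-wave plate with fast axis at theta_deg,
    built as R(theta) @ diag(1, i) @ R(-theta) (global phase omitted)."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    retarder = np.diag([1.0, 1.0j])
    return R @ retarder @ R.T

def handedness(jones: np.ndarray) -> float:
    """Sign of the circular (Stokes V-like) component: +1 or -1 for the
    two opposite circular polarization states."""
    ex, ey = jones
    return float(np.sign(np.imag(np.conj(ex) * ey)))

plate = qwp(45.0)
s_out = plate @ np.array([1.0, 0.0])  # s-polarized (x) input
p_out = plate @ np.array([0.0, 1.0])  # p-polarized (y) input

# Both outputs are circular (equal |Ex| and |Ey|) with opposite handedness.
print(handedness(s_out), handedness(p_out))
```

Which physical output corresponds to RHCP versus LHCP depends on the sign convention chosen; the sketch only checks that the two handednesses are opposite, as in the description above.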
In some embodiments, the two circularly polarized beams with opposite handednesses may have a substantially equal light intensity. In some embodiments, the two circularly polarized beams with opposite handednesses may have different light intensities. The two circularly polarized beams with opposite handednesses output from the waveplate may interfere with each other to generate a polarization interference pattern, to which a polarization sensitive recording medium layer may be exposed to record the polarization interference pattern therein. The two circularly polarized beams with opposite handednesses may also be referred to as two recording beams. The two recording beams (and the input beam) may have a wavelength within an absorption band of the polarization sensitive recording medium layer, e.g., ultraviolet (“UV”), violet, blue, or green beams. In some embodiments, the two recording beams (and the input beam) may be laser beams, e.g., UV, violet, blue, or green laser beams. The superposition of the two recording beams may result in a superposed wave that has a substantially uniform intensity and a varying linear polarization. For example, the linear polarization direction of the superposed wave may spatially vary within a spatial region in which the two circularly polarized beams interfere with one another. In other words, the superposed wave may have a linear polarization with an orientation (or a polarization direction) that is spatially varying within the spatial region in which the two circularly polarized beams interfere with one another. The superposition of the two recording beams may result in a polarization interference pattern. The polarization interference pattern may also be referred to as a pattern of the spatially varying orientation (or polarization direction) of the linear polarization of the superposed wave or a pattern of the varying linear polarization of the superposed wave. 
In some embodiments, the orientation of the linear polarization may periodically vary within the spatial region. A pattern of the periodic, spatial variation of the orientation of the linear polarization that is recorded in the recording medium layer may define a grating pattern in the recording medium. A period of the grating pattern (or an in-plane pitch of the pattern of the spatially varying orientation of the linear polarization) may be determined by the incidence angle and the wavelength of the input beam incident onto the SRG. In some embodiments, the polarization sensitive recording medium layer may include a photo-alignment material configured to have a photo-induced optical anisotropy when exposed to the polarization interference pattern. Thus, the polarization interference pattern (or the pattern of the spatially varying orientation of the linear polarization of the superposed wave) may be recorded at (e.g., in or on) the polarization sensitive recording medium layer to define an orientation pattern of an optic axis of the polarization sensitive recording medium layer. The defined orientation pattern of the optic axis of the polarization sensitive recording medium layer may correspond to the grating pattern. In other words, the SRG may function as a mask for recording a grating pattern into the polarization sensitive recording medium layer. SRGs with different parameters may function as respective masks for recording multiple different grating patterns into the polarization sensitive recording medium layer. 
For example, a first SRG may be used to generate a first polarization interference pattern (and hence a first grating pattern) that may be recorded in a first region (or portion) of the polarization sensitive recording medium layer (or a first polarization sensitive recording medium layer), and a second SRG may replace the first SRG to generate a second polarization interference pattern (and hence a second grating pattern) that may be recorded into a second region (or portion) of the polarization sensitive recording medium layer (or a second polarization sensitive recording medium layer). The first portion and the second portion may be located at the same side or different sides of the polarization sensitive recording medium layer. In some embodiments, the system may further include a light source configured to emit a first beam having a wavelength. In some embodiments, the first beam emitted by the light source may be a diverging beam with a substantially small beam size. In some embodiments, the system may further include a beam conditioning device configured to collimate and expand the first beam as a second beam that is a collimated and expanded beam with a predetermined beam size. In some embodiments, the beam size of the second beam output from the beam conditioning device may be comparable with (e.g., larger than or substantially equal to) an aperture size of the polarization sensitive recording medium layer. An aperture of the polarization sensitive recording medium layer may refer to an opening area of the polarization sensitive recording medium layer that is exposed to the polarization interference pattern (or that may receive the illumination of the polarization interference pattern) during an exposure. An aperture size of the polarization sensitive recording medium layer may refer to a size of the aperture of the polarization sensitive recording medium layer. 
An aperture shape of the polarization sensitive recording medium layer may refer to a shape of the aperture of the polarization sensitive recording medium layer. In some embodiments, the size of the entire polarization sensitive recording medium layer may be larger than the aperture size of the polarization sensitive recording medium layer. Multiple grating patterns may be recorded in different regions of the polarization sensitive recording medium layer through multiple exposures, e.g., using different SRGs or the same SRG. In some embodiments, the system may further include a light or beam deflecting element configured to deflect the second beam received from the beam conditioning device to alter the propagating direction of the second beam. The second beam may propagate toward the SRG as the input beam. The light deflecting element may be any suitable element configured to alter the propagating direction of the second beam, such as a reflector, a grating, a beam splitting element, etc. For example, a mirror (a type of reflector) may be used to alter the propagating direction of the second beam. In the following descriptions and in the figures, for discussion and illustrative purposes, a reflector is used as an example of the light deflecting element. In some embodiments, the system may further include a first movable stage coupled to the reflector. The first movable stage may be configured to adjust a position and/or an orientation (e.g., a tilting angle) of the reflector. When the orientation of the reflector is adjusted, the incidence angle of the input beam reflected by the reflector onto the SRG may be adjusted, for example, to a predetermined incidence angle. In some embodiments, the system may further include a second movable stage on which the polarization sensitive recording medium layer is disposed.
The second movable stage may be translatable and/or rotatable to adjust at least one of a position and an orientation of the polarization sensitive recording medium layer disposed thereon relative to the input beam incident onto the SRG, which is disposed over the polarization sensitive recording medium layer. In some embodiments, the system may further include a controller communicatively coupled with the first and second movable stages, and configured to control the operations of the first and second movable stages. Multiple grating patterns may be recorded into different regions (or portions) of the polarization sensitive recording medium layer through multiple exposures. In some embodiments, the multiple grating patterns may be substantially identical, e.g., the multiple grating patterns may have the same parameters, such as the same grating period, the same grating orientation, the same aperture size, and the same aperture shape, etc. In some embodiments, at least two of the grating patterns may have at least one different parameter, such as different grating periods, different grating orientations, different aperture sizes, and/or different aperture shapes, etc. In some embodiments, the grating period of the grating pattern recorded into the polarization sensitive recording medium layer may be at least partially determined by the incidence angle and the wavelength of the input beam incident onto the SRG, and may be variable through varying the incidence angle and/or the wavelength of the input beam incident onto the SRG. The incidence angle and the wavelength of the input beam, and the parameters (e.g., surface profile, duty cycle, etch depth, refractive index, and/or grating period, etc.) of the SRG may satisfy a predetermined relationship to achieve the Littrow configuration for the SRG.
When the incidence angle and/or the wavelength of the input beam varies, the parameters (e.g., surface profile, duty cycle, etch depth, refractive index, and/or grating period, etc.) of the SRG may vary accordingly, such that the SRG may still operate at the Littrow configuration for the input beam having a different incidence angle and/or a different wavelength. In some embodiments, different SRGs with different parameters may be used as masks for recording grating patterns with different grating periods into the polarization sensitive recording medium layer. When the incidence angle and wavelength of the input beam incident onto the SRG are fixed values, the grating orientation of the grating pattern (or orientations of grating fringes) recorded into the polarization sensitive recording medium layer may be varied through varying the orientation of the polarization sensitive recording medium layer, e.g., through rotating the polarization sensitive recording medium layer in a predetermined direction (e.g., clockwise or counter-clockwise). In some embodiments, the size of the grating pattern recorded into the polarization sensitive recording medium layer may be varied through varying the beam size of the input beam and/or the aperture size of the polarization sensitive recording medium layer. In some embodiments, the shape of the grating pattern recorded into the polarization sensitive recording medium layer may be varied through varying the beam shape of the input beam and/or the aperture shape of the polarization sensitive recording medium layer. In some embodiments, a birefringent medium may be dispensed, e.g., coated or deposited, on the polarization sensitive recording medium layer that has been exposed to the polarization interference pattern to form a birefringent medium layer. 
The birefringent medium may include one or more birefringent materials having an intrinsic birefringence, such as non-polymerizable LCs or polymerizable LCs (e.g., reactive mesogens (“RMs”)). The polarization sensitive recording medium layer may be configured to at least partially align optically anisotropic molecules (e.g., LC molecules, or RM molecules, etc.) in the birefringent medium to form the grating pattern. Thus, the grating pattern recorded in the polarization sensitive recording medium layer may be transferred to the birefringent medium. In some embodiments, the aligned birefringent medium may be polymerized to solidify and form the birefringent medium layer. A polarization selective grating may be obtained. In some embodiments, when multiple grating patterns are recorded in different regions of the polarization sensitive recording medium layer, the polarization sensitive recording medium layer may be configured to at least partially align optically anisotropic molecules (e.g., LC molecules, or RM molecules, etc.) disposed in corresponding regions of the birefringent medium layer to produce respective grating patterns. Multiple polarization selective gratings may be obtained after the aligned birefringent medium layer is polymerized. In the disclosed embodiments, the SRG may function as a mask for recording a corresponding grating pattern into the polarization sensitive recording medium layer. The SRGs with different parameters may function as different masks for recording different grating patterns into the polarization sensitive recording medium layer. Compared to a conventional polarization selective grating that operates at the Littrow configuration to diffract an incident beam as two diffracted beams with different polarizations, the SRG of the present disclosure fabricated from, e.g., an inorganic material, may have a higher damage threshold than the conventional polarization selective grating. 
In addition, the SRG of the present disclosure may have a higher diffraction efficiency at a short grating period (e.g., 300 nm to 500 nm) than the conventional polarization selective grating. Thus, the SRG of the present disclosure may provide an improved reliability and an increased power efficiency for the fabrication of the PSOEs. Fabricating PSOEs (e.g., gratings) through the SRG(s) may expedite the fabrication iteration with a more reliable inorganic mask, a finer spatial resolution, an enhanced alignment precision, and a higher throughput. The disclosed fabrication system and method may provide a cost-effective and contactless solution for the fabrication of polarization selective gratings (e.g., PVH gratings, or PBP gratings, etc.) with any desirable 1D or 2D diffraction efficiency profile (e.g., any non-uniform diffraction efficiency profile), which may be implemented in numerous applications in a variety of technical fields. In some applications, a polarization selective grating (e.g., a PVH grating, or a PBP grating, etc.) with a non-uniform diffraction efficiency may improve the optical performance of an optical assembly or system in which the polarization selective grating is implemented. FIG. 1A illustrates a schematic three-dimensional (“3D”) view of a polarization selective optical element (“PSOE”) 100 with an incident light 102 incident onto the PSOE 100 along a −z-axis, according to an embodiment of the present disclosure. Although the PSOE 100 is shown as a rectangular plate shape for illustrative purposes, the PSOE 100 may have any suitable shape, such as a circular shape. In some embodiments, one or both surfaces along the light propagating path of the incident light 102 may have curved shapes. The PSOE 100 may include suitable sub-wavelength structures, liquid crystals, photo-refractive holographic materials, or any combination thereof. In some embodiments, the PSOE 100 may be fabricated based on an isotropic or anisotropic material.
In some embodiments, the PSOE 100 may be fabricated based on a birefringent medium, e.g., liquid crystal (“LC”) materials, which may have an intrinsic orientational order of optically anisotropic molecules that can be locally controlled. In some embodiments, the PSOE 100 may be fabricated based on a photosensitive polymer, such as an amorphous polymer, an LC polymer, etc., which may generate an induced (e.g., photo-induced) optical anisotropy and/or an induced (e.g., photo-induced) optic axis orientation when subjected to a polarized light irradiation. In some embodiments, the PSOE 100 may include a birefringent medium layer. The birefringent medium layer 115 may have a first surface 115-1 and a second surface 115-2 opposite to the first surface 115-1. The first surface 115-1 and the second surface 115-2 may be surfaces along the light propagating path of the incident light 102. The birefringent medium layer 115 may include optically anisotropic molecules configured with a three-dimensional (“3D”) orientational pattern to provide a polarization selective optical response. In some embodiments, the birefringent medium layer 115 of the PSOE 100 may include an LC material, and an optic axis of the LC material may be configured with a spatially varying orientation in at least one in-plane direction. For example, the optic axis of the LC material may periodically or non-periodically vary in at least one in-plane linear direction, in at least one in-plane radial direction, in at least one in-plane circumferential (e.g., azimuthal) direction, or a combination thereof. The LC molecules may be configured with an in-plane orientation pattern, in which the directors of the LC molecules may periodically or non-periodically vary in the at least one in-plane direction. In some embodiments, the optic axis of the LC material may also be configured with a spatially varying orientation in an out-of-plane direction.
The directors of the LC molecules may also be configured with spatially varying orientations in an out-of-plane direction. For example, the optic axis of the LC material (or directors of the LC molecules) may twist in a helical fashion in the out-of-plane direction. In some embodiments, the PSOE 100 may be a polarization selective grating. FIGS. 1B-1D schematically illustrate a portion of a periodic in-plane orientation pattern of optically anisotropic molecules 112 of the PSOE 100, according to various embodiments of the present disclosure. For discussion purposes, rod-like LC molecules 112 are used as examples of the optically anisotropic molecules 112 of the birefringent medium layer 115. The rod-like LC molecule 112 may have a longitudinal direction (or a length direction) and a lateral direction (or a width direction). The longitudinal direction of the LC molecule 112 may be referred to as a director of the LC molecule 112 or an LC director. An orientation of the LC director may determine a local optic axis orientation or an orientation of the optic axis at a local point of the birefringent medium layer 115. The term “optic axis” may refer to a direction in a crystal. A light propagating in the optic axis direction may not experience birefringence (or double refraction). An optic axis may be a direction rather than a single line: light rays that are parallel to that direction may experience no birefringence. The local optic axis may refer to an optic axis within a predetermined region of a crystal. FIGS. 1B-1D schematically illustrate an x-y sectional view of a portion of the periodic in-plane orientation pattern of the LC directors (indicated by arrows 188 in FIG. 1B) of the LC molecules 112 located in close proximity to or at a surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115 shown in FIG. 1A.
For illustrative purposes, the LC directors of the LC molecules 112 shown in FIGS. 1B-1D are presumed to be in the surface of the birefringent medium layer 115 or in a plane parallel with the surface with substantially small tilt angles with respect to the surface. The LC directors located in close proximity to or at the surface (at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115 may rotate periodically in at least one in-plane direction (e.g., an x-axis direction). As shown in FIG. 1B, the LC molecules 112 located in close proximity to or at a surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115 may be configured with LC directors continuously rotating in a predetermined direction (e.g., an x-axis direction) along the surface (or in a plane parallel with the surface). The continuous rotation of the LC directors may form a periodic rotation pattern with a uniform (e.g., same) in-plane pitch Pin. The predetermined direction may be any suitable direction along the surface (or in a plane parallel with the surface) of the birefringent medium layer 115. For illustrative purposes, FIG. 1B shows that the predetermined direction is the x-axis direction. The predetermined direction may be referred to as an in-plane direction, and the pitch Pin along the in-plane direction may be referred to as an in-plane pitch or a horizontal pitch. The pattern with the uniform (or same) in-plane pitch Pin may be referred to as a periodic LC director in-plane orientation pattern. The in-plane pitch Pin is defined as a distance along the in-plane direction (e.g., the x-axis direction) over which the LC directors rotate by a predetermined value (e.g., 180°).
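The uniform-pitch rotation described above can be written as a simple orientation function. The sketch below is illustrative only (the pitch value is an assumption): the director azimuth varies linearly with the in-plane position, completing a 180° rotation over each in-plane pitch Pin, and the two rotation senses yield mirror symmetric patterns.

```python
import numpy as np

P_in = 0.5  # assumed in-plane pitch, um

def director_azimuth(x_um, clockwise=True):
    """Azimuth (rad, mod pi) of the LC director at in-plane position x:
    a continuous rotation of 180 degrees over each in-plane pitch P_in."""
    sign = -1.0 if clockwise else 1.0
    return np.mod(sign * np.pi * np.asarray(x_um) / P_in, np.pi)

x = np.linspace(0.0, 1.0, 5)  # sample positions spanning two pitches
cw = director_azimuth(x, clockwise=True)
ccw = director_azimuth(x, clockwise=False)

# The two rotation senses give mirror symmetric orientation patterns:
# at every position the azimuths sum to a multiple of pi.
print(np.allclose(np.sin(cw + ccw), 0.0, atol=1e-12))
```

Since director orientations are unsigned axes, azimuths are taken modulo 180°, which is why the director returns to the same orientation after exactly one pitch.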
In other words, in a region substantially close to (including at) the surface of the birefringent medium layer 115, local optic axis orientations of the birefringent medium layer 115 may vary periodically in the in-plane direction (e.g., the x-axis direction) with a pattern having the uniform (or same) in-plane pitch Pin. In addition, at the surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115, the directors of the LC molecules 112 may rotate in a predetermined rotation direction, e.g., a clockwise direction or a counter-clockwise direction. Accordingly, the rotation of the directors of the LC molecules 112 at the surface of the birefringent medium layer 115 may exhibit a handedness, e.g., right handedness or left handedness. In the embodiment shown in FIG. 1B, at the surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115, the directors of the LC molecules 112 may rotate in a clockwise direction. Accordingly, the rotation of the directors of the LC molecules 112 at the surface of the birefringent medium layer 115 may exhibit a left handedness. FIG. 1C schematically illustrates a portion of the periodic in-plane orientation pattern of the directors (indicated by arrows 188 in FIG. 1C) of the LC molecules 112 located in close proximity to or at a surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115. In the embodiment shown in FIG. 1C, at the surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115, the directors of the LC molecules 112 may rotate in a counter-clockwise direction. Accordingly, the rotation of the directors of the LC molecules 112 at the surface of the birefringent medium layer 115 may exhibit a right handedness.
The directors of the LC molecules 112 located in close proximity to or at a surface of the birefringent medium layer 115 shown in FIG. 1B and the directors of the LC molecules 112 located in close proximity to or at a surface of the birefringent medium layer 115 shown in FIG. 1C may have mirror symmetric orientation patterns. FIG. 1D schematically illustrates a portion of the periodic in-plane orientation pattern of the directors (indicated by arrows 188 in FIG. 1D) of the LC molecules 112 located in close proximity to or at a surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115. It is noted that in FIG. 1D, only some directors are indicated by arrows 188. Arrows are not shown for all directors for the simplicity of illustration. In the embodiment shown in FIG. 1D, at the surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115, domains in which the directors of the LC molecules 112 may rotate in a clockwise direction (referred to as domains DL) and domains in which the directors of the LC molecules 112 may rotate in a counter-clockwise direction (referred to as domains DR) may be alternatingly arranged in both x-axis and y-axis directions. The domains DL and the domains DR are schematically enclosed by dotted squares. In some embodiments, the domains DL and the domains DR may have substantially the same size. In some embodiments, the width of each domain may be substantially equal to the value of the in-plane pitch Pin. Although not shown, in some embodiments, the domains DL and the domains DR may be alternatingly arranged in at least one direction along the surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115. In some embodiments, the width of each domain may be an integer multiple of the value of the in-plane pitch Pin. In some embodiments, the domains DL and the domains DR may have different sizes.
FIGS. 1E-1G schematically illustrate a y-z sectional view of a portion of out-of-plane orientations of the LC directors (indicated by arrows 188 in FIG. 1B) of the LC molecules 112 in the PSOE 100 shown in FIG. 1A, according to various embodiments of the present disclosure. As shown in FIG. 1E, inside (or within, in) a volume of the birefringent medium layer 115, the LC molecules 112 may be arranged in a plurality of helical structures 117 with a plurality of helical axes 118 and a helical pitch Ph along the helical axes. The azimuthal angles of the LC molecules 112 arranged along a single helical structure 117 may continuously vary around a helical axis 118 in a predetermined rotation direction, e.g., clockwise direction or counter-clockwise direction. In other words, the LC directors of the LC molecules 112 arranged along a single helical structure 117 may continuously rotate around the helical axis 118 in a predetermined rotation direction to continuously change the azimuthal angle. Accordingly, the helical structure 117 may exhibit a handedness, e.g., right handedness or left handedness. The helical pitch Ph may be defined as a distance along the helical axis 118 over which the LC directors rotate around the helical axis 118 by 360°, or the azimuthal angles of the LC molecules vary by 360°. In the embodiment shown in FIG. 1E, the helical axes 118 may be substantially perpendicular to the first surface 115-1 and/or the second surface 115-2 of the birefringent medium layer 115. In other words, the helical axes 118 of the helical structures 117 may be in a thickness direction (e.g., a z-axis direction) of the birefringent medium layer 115. That is, the LC molecules 112 may have substantially small tilt angles (including zero degree tilt angles), and the LC directors of the LC molecules 112 may be substantially orthogonal to the helical axis 118.
The birefringent medium layer 115 may have a vertical pitch Pv, which may be defined as a distance along the thickness direction of the birefringent medium layer 115 over which the LC directors of the LC molecules 112 rotate around the helical axis 118 by 180° (or the azimuthal angles of the LC directors vary by 180°). As shown in FIG. 1E, the LC molecules 112 from the plurality of helical structures 117 having a first same orientation (e.g., same tilt angle and azimuthal angle) may form a first series of slanted and parallel refractive index planes 114 periodically distributed within the volume of the birefringent medium layer 115. Although not labeled, the LC molecules 112 with a second same orientation (e.g., same tilt angle and azimuthal angle) different from the first same orientation may form a second series of slanted and parallel refractive index planes periodically distributed within the volume of the birefringent medium layer 115. Different series of slanted and parallel refractive index planes may be formed by the LC molecules 112 having different orientations. In the same series of parallel and periodically distributed, slanted refractive index planes 114, the LC molecules 112 may have the same orientation and the refractive index may be the same. Different series of slanted refractive index planes may correspond to different refractive indices. When the number of the slanted refractive index planes (or the thickness of the birefringent medium layer) increases to a sufficient value, Bragg diffraction may be established according to the principles of volume gratings. Thus, the slanted and periodically distributed refractive index planes 114 may also be referred to as Bragg planes 114. Within the birefringent medium layer 115, there may exist different series of Bragg planes. A distance (or a period) between adjacent Bragg planes 114 of the same series may be referred to as a Bragg period PB.
The different series of Bragg planes formed within the volume of the birefringent medium layer 115 may produce a varying refractive index profile that is periodically distributed in the volume of the birefringent medium layer 115. The birefringent medium layer 115 may diffract an input light satisfying a Bragg condition through Bragg diffraction. In the embodiment shown in FIG. 1F, the helical axes 118 of the helical structures 117 may be tilted with respect to the first surface 115-1 and/or the second surface 115-2 of the birefringent medium layer 115 (or with respect to the thickness direction of the birefringent medium layer 115). For example, the helical axes 118 of the helical structures 117 may form an acute angle or an obtuse angle with respect to the first surface 115-1 and/or the second surface 115-2 of the birefringent medium layer 115. In some embodiments, the LC directors of the LC molecules 112 may be substantially orthogonal to the helical axes 118 (i.e., the tilt angle may be substantially zero degrees). In some embodiments, the LC directors of the LC molecules 112 may be tilted with respect to the helical axes 118 at an acute angle. The birefringent medium layer 115 may have a vertical periodicity (or pitch) Pv. For discussion purposes, FIGS. 1E and 1F show that the Bragg planes 114 within the volume of the birefringent medium layer 115 are slanted Bragg planes, which form an acute angle with respect to at least one of the first surface 115-1 or the second surface 115-2 of the birefringent medium layer 115. In some embodiments, the Bragg planes may be configured to be vertical Bragg planes, which may be substantially perpendicular to a surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115. In some embodiments, the Bragg planes may be configured to be substantially horizontal Bragg planes, which are substantially parallel to the surface (e.g., at least one of the first surface 115-1 or the second surface 115-2) of the birefringent medium layer 115.
In the embodiment shown in FIG. 1G, in a volume of the birefringent medium layer 115, along the thickness direction (e.g., the z-axis direction) of the birefringent medium layer 115, the directors (or the azimuthal angles) of the LC molecules 112 may remain in the same orientation (or value) from the first surface 115-1 to the second surface 115-2 of the birefringent medium layer 115. In some embodiments, the thickness of the birefringent medium layer 115 may be configured as d=λ/(2*Δn), where λ is a design wavelength, Δn is the birefringence of the LC material of the birefringent medium layer 115, and Δn=ne−no, where ne and no are the extraordinary and ordinary refractive indices of the LC material, respectively. Referring to FIGS. 1E-1G, in some embodiments, the PSOE 100 including the birefringent medium layer 115 in which the LC directors have the out-of-plane orientations shown in FIG. 1E or FIG. 1F may function as a PVH element (e.g., a PVH grating). A slant angle α of the PVH element including the birefringent medium layer 115 may be defined as α=90°−β, where β=arctan(Pv/Pin). In some embodiments, when the slant angle is within a range of 0°<α<45°, the PSOE 100 may function as a transmissive PVH element (e.g., a transmissive PVH grating). In some embodiments, when the slant angle is within a range of 45°<α<90°, the PSOE 100 may function as a reflective PVH element (e.g., a reflective PVH grating). The diffraction efficiency of a PVH element may be affected by various parameters, such as the thickness, the birefringence, and/or the slant angle α of the PVH element, etc. The birefringence and the slant angle α of the PVH element may be related to the material properties of a birefringent medium forming the PVH element. For example, the birefringence of the PVH element may be related to the birefringence of the birefringent medium, and the slant angle α of the PVH element may be related to a chirality of the birefringent medium.
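The classification of the PSOE 100 as a transmissive or reflective PVH element from the slant angle α = 90° − arctan(Pv/Pin) can be illustrated with a short numerical sketch; the function names and the pitch values are hypothetical examples, not values from the disclosure:

```python
import math

def pvh_slant_angle_deg(vertical_pitch, in_plane_pitch):
    """Slant angle alpha = 90 deg - beta, with beta = arctan(Pv / Pin).
    Both pitches must be in the same length unit."""
    beta = math.degrees(math.atan(vertical_pitch / in_plane_pitch))
    return 90.0 - beta

def pvh_type(alpha_deg):
    """0 < alpha < 45 deg: transmissive PVH; 45 < alpha < 90 deg: reflective."""
    if 0.0 < alpha_deg < 45.0:
        return "transmissive"
    if 45.0 < alpha_deg < 90.0:
        return "reflective"
    return "boundary"

# Hypothetical pitches: a large Pv/Pin ratio gives a small slant angle,
# i.e., a transmissive PVH; a small ratio gives a reflective PVH.
assert pvh_type(pvh_slant_angle_deg(2.0, 0.5)) == "transmissive"
assert pvh_type(pvh_slant_angle_deg(0.2, 1.0)) == "reflective"
```

Because the slant angle depends on the ratio Pv/Pin, the sketch also reflects the relationship described below: for a fixed chirality (hence fixed Pv behavior), changing the in-plane pitch changes the slant angle.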
When two birefringent media having a substantially same chirality are used to form two PVH elements respectively, provided that the in-plane pitches of the two PVH elements are substantially the same, the slant angles of the two PVH elements may be substantially the same. When two birefringent media having a substantially same chirality are used to form two PVH elements respectively, provided that the in-plane pitches of the two PVH elements are different, the slant angles of the two PVH elements may be different. When two birefringent media having different chiralities are used to form two PVH elements respectively, provided that the in-plane pitches of the two PVH elements are substantially the same, the slant angles of the two PVH elements may be different. In some embodiments, the birefringent medium layer115in which the LC directors have out-of-plane orientations shown inFIG.1Gmay function as a PBP grating. Referring toFIGS.1B-1G, the in-plane pitch Pinof the PSOE100(e.g., a PVH grating or a PBP grating) may determine, in part, the optical properties of the PSOE100(e.g., a PVH grating or a PBP grating). For example, the in-plane pitch Pinmay determine the diffraction angles of diffracted beams. In some embodiments, the diffraction angle of a diffracted beam with a wavelength within a predetermined wavelength range may increase as the in-plane pitch Pindecreases. FIG.2Aschematically illustrates an x-z sectional view of a system200configured to generate a polarization interference pattern and record the polarization interference pattern in a recording medium layer210, according to an embodiment of the present disclosure. As shown inFIG.2A, the recording medium layer210may be disposed on a substrate205. The system200may include a light source201configured to emit a beam S222of a wavelength within an absorption band of the recording medium layer210. For example, the beam S222may be a UV, violet, blue, or green beam. 
In some embodiments, the beam S222may be a diverging beam. In some embodiments, the light source201may be a laser light source, e.g., a laser diode, configured to emit a laser beam S222(e.g., a blue laser beam with a center wavelength of about 460 nm). The system200may include a beam conditioning device (or spatial filtering device)203. The beam conditioning device203may be configured to condition (e.g., polarize, expand, collimate, filter, remove noise from, etc.) the beam S222received from the light source201to be a collimated beam S226with a predetermined beam size and a predetermined polarization. In some embodiments, the beam conditioning device203may include a first lens203a, a pinhole aperture203c, and a second lens203barranged in an optical series. In some embodiments, one or more of the first lens203a, the pinhole aperture203c, and the second lens203bmay be mounted on a movable mechanism for adjusting the relative distances therebetween. In some embodiments, the pinhole aperture203cmay be coupled with an adjustment mechanism configured to adjust the size of the aperture. The first lens203amay be configured to focus the diverging beam S222to an on-axis focal point where the pinhole aperture203cis located. When the diverging beam S222is an input Gaussian beam S222, the first lens203amay be configured to transform the input Gaussian beam S222into a central Gaussian spot (on the optical axis) and side fringes representing unwanted “noise.” The opening of the pinhole aperture203cmay be configured to be centered on the central Gaussian spot, and the size of the opening of the pinhole aperture203cmay be configured to pass the central Gaussian spot and block the “noise” fringes. Thus, the noise in the input Gaussian beam S222may be filtered by the pinhole aperture203c, and a “clean” output Gaussian beam S224may be output by the pinhole aperture203cand received by the second lens203b. 
The second lens 203b may be configured to collimate and expand the beam S224 into the collimated beam S226 with a predetermined beam size. In some embodiments, the beam conditioning device 203 may also be referred to as a spatial filtering device. In some embodiments, the beam conditioning device 203 may further include one or more optical elements (e.g., a polarizer, and/or a waveplate, etc.) configured to change the polarization of the beam S222 or to polarize the beam S222, and output the beam S226 with a predetermined polarization. The one or more optical elements may be disposed at suitable positions in the beam conditioning device 203, e.g., before the first lens 203a, after the second lens 203b, or between the first lens 203a and the second lens 203b. In some embodiments, the beam S226 may be an at least partially polarized beam. In some embodiments, the beam S226 may be decomposed into two linearly polarized components with a substantially equal light intensity and a suitable phase delay between the two linearly polarized components. For example, the beam S226 may be a linearly polarized beam, a circularly polarized beam, or an elliptically polarized beam, etc. The system 200 may include a light deflecting element, such as a reflector (e.g., a mirror) 207 configured to reflect the beam S226 as a beam S228 toward a mask 211. In this embodiment, an SRG 211 is used as an example of the mask 211. The SRG 211 may be disposed over a polarization conversion element 213. In this embodiment, a waveplate 213 is used as an example of the polarization conversion element. The waveplate 213 may be disposed between the SRG 211 and the recording medium layer 210. Beams output from the SRG 211 may be further processed by the waveplate 213 before the beams interfere with one another to generate a polarization interference pattern for recording in the recording medium layer 210. The orientation of the reflector 207 may be adjustable to adjust the incidence angle θ of the beam S228 incident onto the SRG 211.
In some embodiments, the reflector 207 may be mounted on a first movable stage 209. The first movable stage 209 may be configured to be translatable and/or rotatable. For example, in some embodiments, the first movable stage 209 may be translatable in one or more linear directions, thereby translating or moving the reflector (e.g., mirror) 207 in the one or more linear directions. In some embodiments, the first movable stage 209 may be rotatable around one or more local axes of the first movable stage 209, such as an axis of rotation passing through the center of the first movable stage 209, thereby rotating the reflector (e.g., mirror) 207 around the axis of rotation of the first movable stage 209. In some embodiments, a controller 217 may be communicatively coupled with the first movable stage 209, and may control the operations and/or movements of the first movable stage 209. The controller 217 may include a processor or processing unit 221. The processor may be any suitable processor, such as a central processing unit ("CPU"), a graphics processing unit ("GPU"), etc. The controller 217 may include a storage device 223. The storage device 223 may be a non-transitory computer-readable medium, such as a memory, a hard disk, etc. The storage device 223 may be configured to store data or information, including computer-executable program instructions or codes, which may be executed by the processor 221 to perform various controls or functions according to the methods or processes disclosed herein. FIG. 2B schematically illustrates the SRG 211 and the waveplate 213 included in the system 200 shown in FIG. 2A, according to an embodiment of the present disclosure. In the embodiment shown in FIG. 2B, the SRG 211 is shown as spaced apart from the waveplate 213 by a gap. In some embodiments, the SRG 211 and the waveplate 213 may be stacked without a gap.
Referring to the enlarged view of a portion of the SRG 211 in FIG. 2B, the SRG 211 may include a plurality of microstructures 211a (e.g., rectangular pillars with sizes at the micron level or nano level) defining or forming a plurality of grooves 211b. The microstructures 211a are schematically illustrated as solid grey rectangular structures, and the grooves 211b are shown as the spaces between the solid grey portions in FIG. 2B. The SRG 211 may have the following parameters shown in the enlarged view of the portion of the SRG 211 in FIG. 2B. A grating period P of the SRG 211 may be defined as a distance between adjacent microstructures 211a (also referred to as grating lines). In some embodiments, the grating period P may be uniform or constant for all microstructures 211a. In some embodiments, at least one grating period P between two microstructures 211a may be different from another grating period P between another two microstructures 211a. That is, in some embodiments, the grating period P may vary along the SRG 211. In the following descriptions, for discussion purposes and illustrative purposes, the grating period P is presumed to be constant or uniform. An inverse of the grating period P may be referred to as a grating resolution, which may be represented by the number of grating lines per mm (lines/mm). A depth d of the SRG 211 may be defined as a depth of the grating grooves 211b or a height of the microstructures 211a. In some embodiments, the depth d of the SRG 211 may also be referred to as an etch depth of the grooves 211b when the grooves 211b are formed via etching. A linewidth L of the SRG 211 may be defined as a width of a single microstructure 211a of the SRG 211. A duty cycle of the SRG 211 may be defined as a ratio between the linewidth L and the grating period P. An aspect ratio of the SRG 211 may be defined as a ratio between the depth d and the width of a grating groove 211b (the width of the grating groove 211b may be the difference between the grating period P and the linewidth L).
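The SRG figures of merit defined above (duty cycle L/P, aspect ratio d/(P − L), and grating resolution 1/P in lines/mm) can be computed as in the following sketch; the 400 nm / 200 nm / 300 nm parameters are hypothetical examples, not values from the disclosure:

```python
def srg_parameters(period_nm, linewidth_nm, depth_nm):
    """Derive SRG figures of merit from the grating period P, the
    linewidth L, and the depth (etch depth) d, all in nanometers."""
    groove_width = period_nm - linewidth_nm  # width of one groove, P - L
    return {
        "duty_cycle": linewidth_nm / period_nm,        # L / P
        "aspect_ratio": depth_nm / groove_width,       # d / (P - L)
        "resolution_lines_per_mm": 1e6 / period_nm,    # 1 mm = 1e6 nm
    }

# Hypothetical SRG: P = 400 nm, L = 200 nm, d = 300 nm.
p = srg_parameters(400.0, 200.0, 300.0)
assert p["duty_cycle"] == 0.5
assert p["aspect_ratio"] == 1.5
assert p["resolution_lines_per_mm"] == 2500.0
```

A high aspect ratio here corresponds to a deep, narrow groove, matching the qualitative description below.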
A high aspect ratio indicates a deep grating groove. A grating profile of the SRG211refers to the cross-sectional shape of the grating grooves211bor the microstructure211a, which may be rectangular, sinusoidal, triangular, trapezoidal, or more complex shapes. In some embodiments, the SRG211may be fabricated based on an organic material, such as an amorphous polymer. In some embodiments, the SRG211may be fabricated based on an inorganic material, such as metals or metal oxides (e.g., Al2O3) that may be used for manufacturing metasurfaces. In some embodiments, the material of the SRG211may be optically isotropic, and the SRG211may function as an optically isotropic grating. In some embodiments, the material of the SRG211may be optically anisotropic, and the SRG211may function as an optically anisotropic grating. For illustrative purposes,FIG.2Bshows the SRG211as a binary non-slanted grating with a periodic rectangular profile. That is, the grating profile of the SRG211shown inFIG.2Bmay be rectangular. In some embodiments, the grating profile of the SRG211may be symmetric and non-rectangular, for example, sinusoidal, triangular, or trapezoidal, etc. In some embodiments, the SRG211may be a binary slanted grating in which the microstructures211aare slanted. In some embodiments, through configuring the parameters of the SRG211, such as the grating profile, the duty cycle, the depth or etch depth, and/or the refractive index, etc., the SRG211may be configured to diffract an input beam that is at least partially polarized, similar to a conventional polarization selective grating operating at the Littrow configuration. 
In some embodiments, when the SRG211operates at the Littrow configuration for an incident beam S228with an incidence angle θIand a wavelength λ, the +1st order diffracted beam may be reflected in the reverse direction of the incident beam S228, and the 0thorder diffracted beam S233and the −1storder diffracted beam S232may be transmitted through as linearly polarized beams. In some embodiments, the 0thorder diffracted beam S233and the −1storder diffracted beam S232may be linearly polarized beams with orthogonal polarizations. In some embodiments, the 0thorder diffracted beam S233and the −1storder diffracted beam S232may be linearly polarized beams with a substantially same polarization. The 0thorder diffracted beam S233may be referred to as a reference beam, which may not carry, or may carry an insignificant amount of optical properties of the SRG211(i.e., the mask). The −1storder diffracted beam S232may be referred to as a signal beam, which may carry the optical information of the SRG211. In some embodiments, when the SRG211operates at the Littrow configuration for the incident beam S228with the incidence angle θIand the wavelength λ, a diffraction angle θ−1Dof the −1storder diffracted beam S232may have a substantially same value as that of the incidence angle θIof the incident beam S228and a sign opposite to that of the incidence angle θI, i.e., θ−1D=−θI. The diffraction angles of the 0thorder diffracted beam S233and the −1storder diffracted beam S232may have a substantially equal value and opposite signs. The diffraction angle θ0Dof the 0thorder diffracted beam S233may be substantially equal to the incidence angle θIof the incident beam S228, i.e., θ0D=θI. The grating equation for the Bragg or Littrow configuration may be expressed as λ=2P·sin(θ0D). An angle formed between the 0thorder diffracted beam S233and the −1storder diffracted beam S232may have a value that is twice the value of the incidence angle θIof the incident beam S228. 
When the incidence angle θI of the incident beam S228 is presumed to be θ, the diffraction angles of the 0th order diffracted beam S233 and the −1st order diffracted beam S232 may be +θ and −θ, respectively. The angle formed between the 0th order diffracted beam S233 and the −1st order diffracted beam S232 may be 2θ. In some embodiments, the SRG 211 may forwardly diffract the incident beam S228 as the 0th order diffracted beam S233 and the −1st order diffracted beam S232 at a substantially same diffraction efficiency (or a substantially equal light intensity). In some embodiments, the SRG 211 may forwardly diffract the incident beam S228 as the 0th order diffracted beam S233 and the −1st order diffracted beam S232 at different diffraction efficiencies (or different light intensities). In some embodiments, when the wavelength λ of the incident beam S228 and the period P of the SRG 211 satisfy the relationship ⅔≤λ/P≤2, only the 0th order diffracted beam S233 and the −1st order diffracted beam S232 may be transmitted, and the SRG 211 may exhibit no diffraction orders other than the 0th order diffracted beam S233 and the −1st order diffracted beam S232, or the other diffraction orders may be negligible. Compared to a conventional polarization selective grating that operates at the Littrow configuration to diffract an input beam as two diffracted beams with different polarizations, the SRG 211 may have a higher damage threshold, and a higher diffraction efficiency at a short grating period (e.g., 300 nm to 500 nm). The waveplate 213 may be configured to receive the 0th order diffracted beam S233 and the −1st order diffracted beam S232 from the SRG 211. The waveplate 213 may be configured to convert the 0th order diffracted beam S233 and the −1st order diffracted beam S232 into two circularly polarized beams S235 and S234 with opposite handednesses.
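The Littrow relation λ = 2P·sin(θ) and the two-order condition ⅔ ≤ λ/P ≤ 2 can be checked numerically as follows; the pairing of a 460 nm beam with a 400 nm grating period is an illustrative assumption, not a value fixed by the disclosure:

```python
import math

def littrow_angle_deg(wavelength_nm, period_nm):
    """Incidence angle (degrees) satisfying the Littrow condition
    lambda = 2 * P * sin(theta)."""
    return math.degrees(math.asin(wavelength_nm / (2.0 * period_nm)))

def two_beam_only(wavelength_nm, period_nm):
    """True when 2/3 <= lambda/P <= 2, in which case only the 0th and
    -1st transmitted orders survive (other orders are evanescent or
    negligible)."""
    r = wavelength_nm / period_nm
    return 2.0 / 3.0 <= r <= 2.0

# Hypothetical blue recording beam (460 nm) on a 400 nm period SRG.
theta = littrow_angle_deg(460.0, 400.0)  # about 35.1 degrees
assert two_beam_only(460.0, 400.0)
# The transmitted 0th and -1st orders leave at +theta and -theta,
# so the angle between the two diffracted beams is 2 * theta.
```

The computed ±θ pair is what the waveplate 213 subsequently converts into the two circularly polarized beams crossing at 2θ.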
For example, the waveplate 213 may be configured to convert the 0th order diffracted beam S233 into the circularly polarized beam S235, which may be a right-handed circularly polarized ("RHCP") beam or a left-handed circularly polarized ("LHCP") beam. The waveplate 213 may be configured to convert the −1st order diffracted beam S232 into the circularly polarized beam S234, which may be an LHCP beam or an RHCP beam. In some embodiments, the circularly polarized beams S235 and S234 may have a substantially equal amount of energy (or a substantially same light intensity). In some embodiments, the circularly polarized beams S235 and S234 may have different amounts of energy (or different light intensities). An angle formed between the circularly polarized beams S235 and S234 may be substantially equal to the angle formed between the 0th order diffracted beam S233 and the −1st order diffracted beam S232. That is, the angle formed between the circularly polarized beams S235 and S234 may have a value of 2θ (twice the incidence angle of the incident beam S228). In some embodiments, the waveplate 213 may function as a quarter-wave plate ("QWP") for the 0th order diffracted beam S233 and the −1st order diffracted beam S232 with the wavelength λ. The waveplate 213 may include a polarization axis, which may be oriented relative to the polarization directions of the 0th order diffracted beam S233 and the −1st order diffracted beam S232 to convert the 0th order diffracted beam S233 and the −1st order diffracted beam S232 into the circularly polarized beams S235 and S234 with opposite handednesses. In some embodiments, for an achromatic design, the waveplate 213 may include a multi-layer birefringent material (e.g., a polymer or liquid crystals) configured to produce a quarter-wave birefringence across a wide spectral range (or wavelength range).
In some embodiments, for a monochrome design, an angle between the polarization axis (e.g., fast axis) of the waveplate 213 and the polarization direction of one of the 0th order diffracted beam S233 and the −1st order diffracted beam S232 may be about 45°, and an angle between the polarization axis (e.g., fast axis) of the waveplate 213 and the polarization direction of the other one of the 0th order diffracted beam S233 and the −1st order diffracted beam S232 may be about −45°. In some embodiments, the relative orientation between the polarization axis (e.g., fast axis) of the waveplate 213 and the polarization direction of one of the 0th order diffracted beam S233 and the −1st order diffracted beam S232 may be adjustable. For example, the relative orientation may be adjusted through rotating a rotation stage to which the waveplate 213 is mounted. For example, in some embodiments, the angle formed between the polarization axis (e.g., fast axis) of the waveplate 213 and the polarization direction of the 0th order diffracted beam S233 may be about 45°, and the angle formed between the polarization axis (e.g., fast axis) of the waveplate 213 and the polarization direction of the −1st order diffracted beam S232 may be about −45°. Accordingly, the waveplate 213 may be configured to convert the 0th order diffracted beam S233 into the circularly polarized beam S235 (which may be an RHCP beam), and convert the −1st order diffracted beam S232 into the circularly polarized beam S234 (which may be an LHCP beam). In some embodiments, the angle formed between the polarization axis (e.g., fast axis) of the waveplate 213 and the polarization direction of the 0th order diffracted beam S233 may be about −45°, and the angle formed between the polarization axis (e.g., fast axis) of the waveplate 213 and the polarization direction of the −1st order diffracted beam S232 may be about 45°.
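The quarter-wave conversion of two linearly polarized beams at ±45° to the fast axis into circular beams of opposite handedness can be verified with Jones calculus. The sketch below uses the convention diag(1, −i) for a QWP with its fast axis along x (one common sign convention, not necessarily the one intended in the disclosure), and all names are illustrative:

```python
import cmath
import math

def qwp_fast_axis_x(jones):
    """Jones matrix of a quarter-wave plate with fast axis along x,
    diag(1, -i) up to a global phase: the slow (y) component is
    retarded by a quarter wave relative to the fast (x) component."""
    ex, ey = jones
    return (ex, -1j * ey)

def is_circular(jones, tol=1e-12):
    """A Jones vector is circularly polarized when |Ex| == |Ey| and the
    phase delay between the components is +-90 degrees."""
    ex, ey = jones
    if abs(abs(ex) - abs(ey)) > tol:
        return False
    delta = cmath.phase(ey) - cmath.phase(ex)
    return min(abs(abs(delta) - math.pi / 2),
               abs(abs(delta) - 3 * math.pi / 2)) < tol

# Linear polarizations at +45 and -45 degrees to the fast axis.
s = 1.0 / math.sqrt(2.0)
out_plus = qwp_fast_axis_x((s, s))    # one circular handedness
out_minus = qwp_fast_axis_x((s, -s))  # the opposite handedness
assert is_circular(out_plus) and is_circular(out_minus)
# Opposite handedness: the y-component phases differ by pi.
assert abs(abs(cmath.phase(out_plus[1]) - cmath.phase(out_minus[1])) - math.pi) < 1e-12
```

This mirrors the configuration above: the two diffracted orders arrive at about +45° and −45° to the fast axis and leave as circular beams of opposite handedness.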
Accordingly, the waveplate 213 may be configured to convert the 0th order diffracted beam S233 into the circularly polarized beam S235 (which may be an LHCP beam), and convert the −1st order diffracted beam S232 into the circularly polarized beam S234 (which may be an RHCP beam). The two circularly polarized beams S235 and S234 with opposite handednesses may interfere with each other to generate a polarization interference pattern, to which the recording medium layer 210 may be exposed. The superposition of the two circularly polarized beams S235 and S234 may result in a superposed wave that has a substantially uniform intensity and a linear polarization with a spatially periodically varying orientation (or a spatially periodically varying linear polarization orientation angle). That is, the superposition of the two circularly polarized beams S235 and S234 may result in a polarization interference pattern, which is a pattern of the spatially periodically varying orientation of the linear polarization of the superposed wave. The pattern of the spatially periodically varying orientation of the linear polarization may define a grating pattern for a polarization selective grating, such as that shown in FIG. 1B or FIG. 1C. An in-plane pitch (or grating period) PR-in of the grating pattern may be determined, in part, by the angle (e.g., 2θ) formed between the two circularly polarized beams S235 and S234 and the wavelength λ of the two circularly polarized beams S235 and S234 (which is also the wavelength λ of the incident beam S228). The recording medium layer 210 may be disposed at the substrate 205. The substrate 205 may provide support and protection to various layers, films, and/or structures formed thereon. The recording medium layer 210 may include a polarization sensitive recording medium.
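The superposition of RHCP and LHCP plane waves crossing at ±θ yields a uniform-intensity field whose linear polarization orientation rotates spatially; for this idealized geometry the orientation rotates by 180° over λ/(2·sin θ), which would then set the recorded in-plane pitch PR-in. This is a simplified sketch under that assumption, with illustrative wavelength and half-angle values:

```python
import math

def recorded_in_plane_pitch(wavelength_nm, half_angle_deg):
    """Period of the rotating linear-polarization pattern produced by
    two circularly polarized beams of opposite handedness crossing at
    2*theta: the orientation rotates by 180 deg over lambda/(2 sin theta)."""
    theta = math.radians(half_angle_deg)
    return wavelength_nm / (2.0 * math.sin(theta))

def superposed_field(x_nm, wavelength_nm, half_angle_deg):
    """Jones vector of RHCP + LHCP plane waves crossing at +-theta.
    (1, -i)e^{i d}/sqrt(2) + (1, i)e^{-i d}/sqrt(2) = sqrt(2)(cos d, sin d):
    a linear polarization at orientation angle d = k * x * sin(theta)."""
    k = 2.0 * math.pi / wavelength_nm
    delta = k * x_nm * math.sin(math.radians(half_angle_deg))
    return (math.sqrt(2.0) * math.cos(delta), math.sqrt(2.0) * math.sin(delta))

# Hypothetical 460 nm beams crossing at 2 * 35.1 degrees.
lam, th = 460.0, 35.1
pitch = recorded_in_plane_pitch(lam, th)
for x in (0.0, 0.25 * pitch, 0.5 * pitch):
    ex, ey = superposed_field(x, lam, th)
    # Uniform intensity: |Ex|^2 + |Ey|^2 is constant while the
    # linear polarization orientation varies with x.
    assert abs(ex * ex + ey * ey - 2.0) < 1e-9
```

The loop confirms the property stated above: the intensity is spatially uniform; only the orientation of the linear polarization varies periodically.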
For example, the recording medium layer 210 may include an optically recordable and polarization sensitive material (e.g., a photo-alignment material) configured to have a photo-induced optical anisotropy when exposed to a polarized light irradiation. Molecules (or fragments) and/or photo-products of the optically recordable and polarization sensitive material may be configured to generate an orientational ordering under a polarized light irradiation. In the disclosed embodiments, when exposed to the polarization interference pattern formed based on the interference of the two circularly polarized beams S235 and S234 with opposite handednesses, the recording medium layer 210 may be optically patterned with an orientation pattern of an optic axis of the recording medium layer 210. The orientation pattern of the optic axis of the recording medium layer 210 may define a grating pattern. In some embodiments, the substrate 205 on which the recording medium layer 210 is disposed may be mounted on a second movable stage 219. The second movable stage 219 may be translatable and/or rotatable, thereby translating the substrate 205 (on which the recording medium layer 210 is disposed) in one or more directions (e.g., in the x-axis direction, y-axis direction, and/or z-axis direction), and/or rotating the substrate 205 around one or more rotation axes (e.g., the yaw, roll, and/or pitch axes defined locally with respect to the second movable stage 219). In some embodiments, the controller 217 may be communicatively coupled with the second movable stage 219, and may control the operations and/or movements of the second movable stage 219.
Referring toFIGS.2A and2B, in some embodiments, the relative position (e.g., distance) between the first lens203aand the light source201, the relative position (e.g., distance) between the first lens203aand the pinhole aperture203c, the relative position (e.g., distance) between the pinhole aperture203cand the second lens203b, and/or the relative position (e.g., distance) between the first lens203aand the second lens203bmay be adjustable. For example, the first lens203a, the pinhole aperture203c, and/or the second lens203bmay be mounted at respective movable mechanisms. The movable mechanisms may be configured to translate the respectively mounted elements in a predetermined direction (e.g., an x-axis direction inFIG.2A). The beam size of the collimated beam S226may be adjustable through adjusting at least one of the relative positions (e.g., distances) among the light source201, the first lens203a, the pinhole aperture203c, and the second lens203b. In some embodiments, the controller217may be communicatively coupled with the respective movable mechanisms, and may control the operations and/or movements of the respective movable mechanisms. In some embodiments, the beam size of the collimated beam S226may be configured to be slightly larger than or substantially equal to an aperture size of the recording medium layer210. In some embodiments, the aperture size of the recording medium layer210may be substantially the same as a size of a region of the recording medium layer210to be exposed during an exposure (e.g., a single exposure). For example, the size of the region of the recording medium layer210to be exposed during an exposure may be substantially the same as a size of a grating pattern to be recorded in the recording medium layer210during the exposure. 
In some embodiments, the aperture size and aperture shape of the recording medium layer 210 may be adjustable through an adjustable iris diaphragm 225 disposed between the recording medium layer 210 and the waveplate 213. The adjustable iris diaphragm 225 may be coupled to a suitable driving element, and may be adjusted manually or automatically through the control of the controller 217 to change the aperture size and/or aperture shape. FIGS. 2C and 2D schematically illustrate a 3D perspective view of the system 200 configured to generate a polarization interference pattern and record the polarization interference pattern in the recording medium layer 210. As shown in FIGS. 2C and 2D, the system 200 may include a base 291 and a bridge 292 mounted to the base 291 through two supporting columns 293 and 294. The light source 201, the beam conditioning device 203, the reflector (e.g., mirror) 207, and the first movable stage 209 on which the reflector (e.g., mirror) 207 is mounted, may be mounted on the bridge 292. The second movable stage 219, on which the SRG 211, the waveplate 213, the recording medium layer 210, and the substrate 205 are mounted, may be mounted on the base 291. It is understood that some elements shown in FIG. 2A, such as the detailed structure of the beam conditioning device 203 and the controller 217, are not shown in FIGS. 2C and 2D. FIGS. 2C and 2D show that the system 200 may include one or more additional reflectors (e.g., mirrors), such as three reflectors 227a, 227b, and 227c. The one or more additional reflectors may be mounted on the bridge 292 and/or the supporting column 294. The one or more additional reflectors may be disposed between the beam conditioning device 203 and the reflector (e.g., mirror) 207 along a light propagation path.
The combination of the reflectors 227a, 227b, and 227c may be configured to direct a collimated beam S226a output from the beam conditioning device 203 toward the reflector (e.g., mirror) 207 through a multi-fold or multi-turn light path defined by the reflectors 227a, 227b, and 227c. For example, as shown in FIGS. 2C and 2D, the reflector 227a may be configured to reflect the collimated beam S226a propagating in a first direction (e.g., the −x-axis direction in FIG. 2C) as a collimated beam S226b propagating in a second direction (e.g., the −y-axis direction in FIG. 2C). The reflector 227b may reflect the collimated beam S226b propagating in the second direction (e.g., the −y-axis direction) as a collimated beam S226c propagating in a third direction (e.g., the −z-axis direction). The reflector 227c may reflect the collimated beam S226c propagating in the third direction (e.g., the −z-axis direction) as a collimated beam S226d propagating in a fourth direction (e.g., the x-axis direction) toward the reflector (e.g., mirror) 207. The reflector 207 may reflect the collimated beam S226d as a collimated beam S226e toward the SRG 211 mounted on the second movable stage 219. The combination of the reflectors 227a, 227b, and 227c enables a compact design for the entire system 200. In some embodiments, the polarization state of the beam S226d may be the same as the polarization state of the collimated beam S226a. The beam S226e may represent the beam S228 shown in FIGS. 2A and 2B, which is incident onto the SRG 211. The first movable stage 209 may be translatable along the length direction (or the x′-axis direction) of the bridge 292, the height direction (or the y′-axis direction) of the bridge 292, and/or a direction (or the z′-axis direction) perpendicular to the plane defined by the length direction and the height direction.
For example, the first movable stage 209 may include at least one of an x′-axis linear stage movable in the x′-axis direction, a y′-axis linear stage movable in the y′-axis direction, or a z′-axis linear stage movable in the z′-axis direction. In some embodiments, the first movable stage 209 may be rotatable around at least one of a yaw axis, a roll axis, or a pitch axis defined on the first movable stage 209. The translation and/or rotation of the first movable stage 209 may change the incidence angle of the beam S226e, and/or the portion of the SRG 211 which the beam S226e illuminates. When the portion of the SRG 211 which the beam S226e illuminates changes, the portion of the recording medium layer 210 that is exposed to the polarization interference pattern generated based on the beams output from the SRG 211 may also change. The second movable stage 219 may be translatable and/or rotatable. For example, the second movable stage 219 may include at least one of an x-axis linear stage movable in the x-axis direction, a y-axis linear stage movable in the y-axis direction, or a z-axis linear stage movable in the z-axis direction. In some embodiments, the second movable stage 219 may be rotatable around at least one of a yaw axis, a roll axis, or a pitch axis defined on the second movable stage 219, such as on a portion of the second movable stage 219 on which the substrate 205 (or the recording medium layer 210) is mounted. When the second movable stage 219 is translated in the x-axis, y-axis, and/or z-axis directions, and/or rotated around the yaw, roll, and/or pitch axes, the relative position and/or relative orientation of the recording medium layer 210 (or the SRG 211) with respect to the beam S226e may change. FIG. 2D shows that the system 200 may include tele-centric vision cameras 282a and 282b configured for aligning the SRG 211, the waveplate 213, the recording medium layer 210, and/or the substrate 205.
The tele-centric vision cameras282aand282bmay be mounted on suitable mounting and/or supporting devices. The example mounting and/or supporting devices on which the vision cameras282aand282bare mounted are for illustrative purposes only.FIG.2Dalso shows a terminal device280configured for receiving input from an operator for controlling the system200. The terminal device280may include a screen and/or an input/output device such as a keyboard, a mouse, etc. The terminal device280may include the controller217or may be connected with the controller217through a network connection, such as a wired or wireless connection. Referring toFIGS.2A-2D, in the system200for generating a polarization interference pattern and for recording the polarization interference pattern in the recording medium layer210, the same polarization interference pattern or different polarization interference patterns may be recorded in different regions or portions of the recording medium layer210through multiple exposures. In some embodiments, the same polarization interference pattern may be recorded at different portions of the recording medium layer210. In some embodiments, different polarization interference patterns may be recorded at different portions of the recording medium layer210. For example, between two exposures, the recording portions may be changed by changing the position and/or the orientation of the recording medium layer210relative to the beam S226e. For example, the second movable stage219may be controlled by the controller217to translate and/or rotate to change the position and/or the orientation of the recording medium layer210relative to the beam S226e. In some embodiments, between two exposures, the polarization interference pattern may be changed. In some embodiments, changing the polarization interference pattern may include changing the SRG211from a first SRG to a second, different SRG. 
In some embodiments, changing the polarization interference pattern may include changing the wavelength of the beam S226e. For example, the light source201may be changed or controlled to emit a beam of a different wavelength. In some embodiments, changing the polarization interference pattern may include changing the incidence angle of the beam S226eonto the SRG211. For example, the incidence angle of the beam S226eonto the SRG211may be changeable through changing the relative positions and/or relative orientations between the recording medium layer210and the beam S226ereflected by the reflector207and incident onto the SRG211. In some embodiments, the first movable stage209on which the reflector207is mounted, may be controlled by the controller217to translate and/or rotate to change the orientation of the beam S226erelative to the recording medium layer210. In some embodiments, the second movable stage219may be controlled by the controller217to translate and/or rotate to change the orientation of the recording medium layer210relative to the beam S226e. In some embodiments, changing the polarization interference pattern may include changing a beam size of S226e. For example, the controller217may control a moving mechanism (not shown), on which the first lens203a, the pinhole aperture203c, and the second lens203bare mounted, to adjust the relative position (e.g., distance) between the first lens203aand the light source201, the relative position (e.g., distance) between the first lens203aand the pinhole aperture203c, the relative position (e.g., distance) between the pinhole aperture203cand the second lens203b, and/or the relative position (e.g., distance) between the first lens203aand the second lens203b, and/or control the size of the pinhole aperture203cto change the beam size of the collimated beam S226a. Accordingly, the beam size of S226emay be changeable. 
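The beam-size adjustment described above is consistent with a standard two-lens spatial filter (beam expander), in which the collimated output diameter scales with the ratio of the two focal lengths. A hedged sketch of that textbook relation, assuming hypothetical focal lengths and input beam size (none of these values or the function name come from the disclosure):

```python
def expanded_beam_diameter(d_in, f1, f2):
    """Collimated output diameter of a confocal two-lens expander: D_out = D_in * f2 / f1.

    d_in, f1, f2 are illustrative values (e.g., in mm), not parameters from the
    disclosure; a pinhole at the common focus acts as the spatial filter.
    """
    return d_in * (f2 / f1)

# Example: a 1 mm input beam expanded by a 10 mm / 100 mm confocal lens pair.
print(expanded_beam_diameter(1.0, 10.0, 100.0))  # -> 10.0
```

Moving the second lens relative to the pinhole changes the collimation and effective output size, which is one way the relative lens/pinhole positions could alter the beam size of the collimated beam.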
In some embodiments, the controller217may control an adjustment mechanism coupled with the iris diaphragm225to adjust the opening area of the iris diaphragm225, thereby adjusting a size and/or a shape of the polarization interference pattern that is recorded into the recording medium layer210. In some embodiments, changing the polarization interference pattern may include changing a gap between the SRG211and the waveplate213. In some embodiments, increasing the gap may reduce the size of the polarization interference pattern that is recorded into the recording medium layer210. In some embodiments, an orientation of the polarization interference pattern relative to the recording medium layer210may be changeable through changing the relative orientation between the recording medium layer210and the beam S226e. For example, the second movable stage219may be controlled by the controller217to rotate (e.g., around the z-axis) to change the relative orientation between the recording medium layer210and the beam S226e. Each polarization interference pattern (or pattern of the spatially varying orientation of the linear polarization) may define an orientation pattern of the optic axis of the recording medium layer210in the respective recording region/portion. Different orientation patterns of the optic axis of the recording medium layer210in different regions/portions may correspond to grating patterns with different sizes, periods, orientations, positions, and/or shapes. For example, the grating period of the grating pattern may be adjustable through adjusting the angle formed between the two circularly polarized beams S235and S234and/or the predetermined wavelength λ of the two circularly polarized beams S235and S234. In some embodiments, the grating period of the grating pattern may be within a sub-micron range, e.g., may be within a visible wavelength range (e.g., 380 nm to 700 nm). 
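The dependence of the grating period on the inter-beam angle and wavelength follows the standard two-beam interference relation Λ = λ/(2·sin(θ/2)), where θ is the full angle between the two circularly polarized beams. A short sketch with hypothetical recording parameters (the 532 nm wavelength and 60° angle are illustrative, not values from the disclosure):

```python
import math

def fringe_period_nm(wavelength_nm, full_angle_deg):
    """Two-beam interference period: Lambda = lambda / (2 * sin(theta / 2)),
    with theta the full angle between the two recording beams."""
    half_angle = math.radians(full_angle_deg) / 2.0
    return wavelength_nm / (2.0 * math.sin(half_angle))

# Hypothetical recording parameters (not values from the disclosure):
print(round(fringe_period_nm(532.0, 60.0)))  # -> 532, a sub-micron period
```

Increasing the angle or decreasing the wavelength shrinks the period, which is consistent with the sub-micron (visible-range) grating periods mentioned above.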
The orientation of the grating pattern (or grating fringes) may be adjustable through rotating the substrate205, on which the recording medium layer210is disposed, around a predetermined rotation axis (e.g., the z-axis). That is, the orientation of the grating pattern (or grating fringes) may be adjustable through adjusting the rotation angle of the substrate205that supports the recording medium layer210around a predetermined axis (e.g., the z-axis). The position of the grating pattern may be adjustable through adjusting the location of the substrate205(and hence the location of the recording medium layer210) with respect to the SRG211and the waveplate213. In some embodiments, the size of the grating pattern may be adjustable through adjusting the relative position (e.g., distance) between the first lens203aand the light source201, the relative position (e.g., distance) between the first lens203aand the pinhole aperture203c, the relative position (e.g., distance) between the pinhole aperture203cand the second lens203b, and/or the relative position (e.g., distance) between the first lens203aand the second lens203b. In some embodiments, the size and/or the shape of the grating pattern may be adjustable through adjusting the opening area of the iris diaphragm225. In some embodiments, both sides of the recording medium layer210may be recorded with the polarization interference pattern. For example, a first side of the recording medium layer210may be recorded with one or more polarization interference patterns in one or more recording regions. Then the recording medium layer210may be flipped, and the second side of the recording medium layer210may be recorded with one or more polarization interference patterns in one or more recording regions. When recording different polarization interference patterns to the second side, the SRG211may be replaced with a different SRG, and/or the optical properties (e.g., wavelength, incidence angle, beam size, etc.) 
of the beam S226emay be changed. FIGS.3A-3Cschematically illustrate x-y sectional views of orientation patterns of the optic axis of the recording medium layer210defined in different portions of the recording medium layer210via the system200shown inFIGS.2A,2C, and2D, according to various embodiments of the present disclosure. For discussion purposes, inFIGS.3A-3C, an aperture size of the recording medium layer210may be substantially the same as a size of a predetermined region350of the recording medium layer210that is exposed to the polarization interference pattern during an exposure. An aperture shape of the recording medium layer210may be a shape of the predetermined region350, e.g., a square shape, a rectangular shape, a circular shape, etc. For discussion purposes,FIGS.3A-3Cschematically illustrate the periodic variation of the orientations of the optic axis of the recording medium layer210in one or two portions of the recording medium layer210. InFIGS.3A-3C, the arrows318represent the optic axis and the orientations of the optic axis. FIG.3Ashows a plurality of orientation patterns of the optic axis of the recording medium layer210defined in a plurality of different portions of the recording medium layer210through multiple exposures. The plurality of orientation patterns of the optic axis of the recording medium layer210in different portions of the recording medium layer210may correspond to a plurality of same grating patterns having the same grating period and the same grating orientation. For example, as shown inFIG.3A, eight orientation patterns301-1to301-8of the optic axis of the recording medium layer210may be defined and/or recorded in eight different portions of the recording medium layer210through eight exposures. For different exposures, the substrate205on which the recording medium layer210is disposed may be translated by the second movable stage219in the x-axis direction and y-axis direction.
The eight patterns301-1to301-8may be arranged in a 2D array. For illustrative purposes,FIG.3Amerely shows the periodic variation of the orientation of the optic axis in the orientation pattern301-1. For example, the orientations of the optic axis may periodically vary in an in-plane direction, e.g., the x-axis direction. In some embodiments, a pitch POof the orientation pattern301-1may be referred to as a distance in the in-plane direction, over which the orientation of the optic axis rotates by a predetermined angle (e.g., 180°). In some embodiments, the pitch POof the orientation pattern301-1may correspond to the in-plane pitch Pinof a corresponding grating pattern. The eight orientation patterns301-1to301-8may correspond to eight grating patterns having the same size, the same in-plane pitch (or grating period), and the same grating orientation. FIG.3Bshows a plurality of orientation patterns of the optic axis of the recording medium layer210defined and/or recorded in a plurality of different portions of the recording medium layer210through multiple exposures. The plurality of orientation patterns of the optic axis defined in different portions of the recording medium layer210may correspond to a plurality of grating patterns having different in-plane pitches (or grating periods) and the same grating orientation. For example, as shown inFIG.3B, four orientation patterns303-1to303-4of the optic axis of the recording medium layer210may be defined in four different portions of the recording medium layer210through four exposures. For each exposure, the substrate205on which the recording medium layer210is disposed may be translated by the second movable stage219in the x-axis direction. The four orientation patterns303-1to303-4may be arranged in a 1D array. At least two of the four orientation patterns303-1to303-4may have different periods PO.
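The pitch POabove can be sketched as a linear rotation of the optic-axis azimuth: the axis turns by 180° over one pitch, i.e., φ(x) = π·x/PO (mod π). A minimal illustration with a hypothetical one-unit pitch (the sample positions and pitch value are illustrative, not from the disclosure):

```python
import math

def optic_axis_angle_deg(x, pitch):
    """Azimuth (degrees) of the optic axis at in-plane position x, for an
    orientation pattern rotating the axis by 180 degrees over one pitch P_O."""
    return math.degrees(math.pi * (x % pitch) / pitch)

# Hypothetical 1-unit pitch; sample positions across one period.
print([round(optic_axis_angle_deg(x, 1.0)) for x in (0.0, 0.25, 0.5, 0.75, 1.0)])
# -> [0, 45, 90, 135, 0]
```

Because an optic axis is direction-insensitive, 0° and 180° are the same orientation, which is why the pattern repeats after a half-turn and PO corresponds to the in-plane pitch Pin of the grating.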
For illustrative purposes,FIG.3Bmerely shows the periodic variation of the orientations of the optic axis in the orientation pattern303-1and the orientation pattern303-2. A period POof the orientation pattern303-1may be different from (e.g., larger than) a period POof the orientation pattern303-2. Accordingly, the in-plane pitch of the grating pattern corresponding to the orientation pattern303-1may be different from (e.g., larger than) the in-plane pitch of the grating pattern corresponding to the orientation pattern303-2. FIG.3Cshows a plurality of orientation patterns of the optic axis of the recording medium layer210defined and/or recorded in a plurality of different portions of the recording medium layer210through multiple exposures. The plurality of orientation patterns of the optic axis defined in different portions (or regions) of the recording medium layer210may correspond to a plurality of grating patterns having different grating orientations and the same in-plane pitch (or grating period). For example, as shown inFIG.3C, four orientation patterns305-1to305-4of the optic axis of the recording medium layer210may be defined in four different portions of the recording medium layer210through four exposures. For each exposure, the substrate205on which the recording medium layer210is disposed may be translated by the second movable stage219in the x-axis direction. The four orientation patterns305-1to305-4may be arranged in a 1D array. At least two of the four orientation patterns305-1to305-4may have orientations of the optic axis that periodically vary in different in-plane directions. The in-plane direction in which the orientations of the optic axis periodically vary may correspond to a grating orientation of a corresponding grating pattern. For illustrative purposes,FIG.3Cmerely shows the periodic variations of the orientations of the optic axis in the orientation pattern305-1and the orientation pattern305-2.
For example, the orientation pattern305-1may have the orientation of the optic axis periodically vary in a first in-plane direction, e.g., the x-axis direction, and the orientation pattern305-2may have the orientation of the optic axis periodically vary in a second, different in-plane direction, e.g., the y-axis direction. Accordingly, the grating orientation of the grating pattern corresponding to the orientation pattern305-1may be different from the grating orientation of the grating pattern corresponding to the orientation pattern305-2. FIGS.4A-4Dschematically illustrate processes for fabricating a PSOE through a disclosed system, e.g., the system200shown inFIGS.2A,2C, and2D. The fabrication process shown inFIGS.4A-4Dmay include holographic recording of an alignment pattern in a photo-aligning film and alignment of an anisotropic material (e.g., an LC material) by the photo-aligning film. This alignment process may be referred to as a surface-mediated photo-alignment. In some embodiments, the PSOE fabricated based on the fabrication processes shown inFIGS.4A-4Dmay be a polarization selective grating, such as a PVH grating, or a PBP grating, etc. For illustrative purposes, the substrate and different layers, films, or structures formed thereon are shown as having flat surfaces. In some embodiments, the substrate and different layers or films or structures may have curved surfaces. As shown inFIG.4A, a recording medium layer410may be formed on a surface (e.g., a top surface) of a substrate405by dispensing, e.g., coating or depositing, a polarization sensitive material on the surface of the substrate405. Thus, the recording medium layer410may be referred to as a polarization sensitive recording medium layer. 
The polarization sensitive material included in the recording medium layer410may be an optically recordable and polarization sensitive material (e.g., a photo-alignment material) configured to have a photo-induced optical anisotropy when exposed to a polarized light irradiation. Molecules (or fragments) and/or photo-products of the optically recordable and polarization sensitive material may be configured to generate an orientational ordering under a polarized light irradiation. In some embodiments, the polarization sensitive material may be dissolved in a solvent to form a solution. The solution may be dispensed on the substrate405using any suitable solution coating process, e.g., spin coating, slot coating, blade coating, spray coating, or jet (ink-jet) coating or printing. The solvent may be removed from the coated solution using a suitable process, e.g., drying, or heating, leaving the polarization sensitive material on the substrate405to form the recording medium layer410. The substrate405may provide support and protection to various layers, films, and/or structures formed thereon. In some embodiments, the substrate405may be at least partially transparent in at least the visible wavelength band (e.g., about 380 nm to about 700 nm). In some embodiments, the substrate405may be at least partially transparent in at least a portion of the infrared (“IR”) band (e.g., about 700 nm to about 1 mm). The substrate405may include a suitable material that is at least partially transparent to lights of the above-listed wavelength ranges, such as, a glass, a plastic, a sapphire, or a combination thereof, etc. The substrate405may be rigid, semi-rigid, flexible, or semi-flexible. The substrate405may include a flat surface or a curved surface, on which the different layers or films may be formed. In some embodiments, the substrate405may be a part of another optical element or device (e.g., another opto-electrical element or device).
For example, the substrate405may be a solid optical lens, a part of a solid optical lens, or a light guide (or waveguide), etc. In some embodiments, the substrate405may be a part of a functional device, such as a display screen. In some embodiments, the substrate405may be used to fabricate, store, or transport the fabricated PSOE. In some embodiments, the substrate405may be detachable or removable from the fabricated PSOE after the PSOE is fabricated or transported to another place or device. That is, the substrate405may be used in fabrication, transportation, and/or storage to support the PSOE provided on the substrate405, and may be separated or removed from the PSOE when the fabrication of the PSOE is completed, or when the PSOE is to be implemented in an optical device. In some embodiments, the substrate405may not be separated from the PSOE. After the recording medium layer410is formed on the substrate405, as shown inFIG.4B, the recording medium layer410may be exposed to a polarization interference pattern generated based on two recording beams440and442(also referred to as a first recording beam440or a reference beam440, and a second recording beam442or a signal beam442). The two recording beams440and442may be two coherent circularly polarized beams with opposite handednesses. In some embodiments, the two recording beams440and442may represent, respectively, the beam S235and the beam S234output from the stack of the SRG211operating at the Littrow configuration and the waveplate213shown inFIG.2B. The recording medium layer410may be optically patterned when exposed to the polarization interference pattern generated based on the two recording beams440and442during the polarization interference exposure process. An orientation pattern of an optic axis of the recording medium layer410in an exposed region may be defined by the polarization interference pattern under which the recording medium layer410is exposed during the polarization interference exposure process. 
In some embodiments, different regions of the recording medium layer410may be exposed to the same or different polarization interference patterns. The same or different orientation patterns of the optic axis of the recording medium layer410may be defined in respective exposed regions during the respective polarization interference exposure processes. In some embodiments, the recording medium layer410may include elongated anisotropic photo-sensitive units (e.g., small molecules or fragments of polymeric molecules). After being subjected to a sufficient exposure of the polarization interference pattern generated based on the two recording beams440and442, local alignment directions of the anisotropic photo-sensitive units may be induced in the recording medium layer410by the polarization interference pattern, resulting in an alignment pattern (or in-plane modulation) of an optic axis of the recording medium layer410due to a photo-alignment of the anisotropic photo-sensitive units. In some embodiments, the in-plane modulation of the optic axis of the recording medium layer410in the exposed region may correspond to a grating pattern, which may be similar to that shown inFIG.1BorFIG.1C. In some embodiments, multiple alignment patterns (which may be the same or different) may be recorded in different portions or regions of the recording medium layer410through multiple polarization interference exposure processes. The multiple alignment patterns may correspond to multiple grating patterns with the same or different sizes, shapes, grating periods, grating orientations, and/or handednesses of the in-plane modulation. In some embodiments, the handedness of the in-plane modulation (or alignment pattern) of the optic axis of the recording medium layer410in the exposed region may be controllable by controlling the handednesses of the recording beams440and442.
For example, when the recording beam440is an RHCP beam and the recording beam442is an LHCP beam, the handedness of the in-plane modulation (or alignment pattern) of the optic axis of the recording medium layer410in the exposed region may be right-handed. When the recording beam440is an LHCP beam and the recording beam442is an RHCP beam, the handedness of the in-plane modulation (or alignment pattern) of the optic axis of the recording medium layer410in the exposed region may be left-handed. After the recording medium layer410is optically patterned, the recording medium layer410may be referred to as a patterned recording medium layer with an alignment pattern. In some embodiments, as shown inFIG.4C, a birefringent medium layer415may be formed on the patterned recording medium layer410by dispensing, e.g., coating or depositing, a birefringent medium onto the patterned recording medium layer410. The birefringent medium may include one or more birefringent materials having an intrinsic birefringence, such as non-polymerizable LCs or polymerizable LCs (e.g., RMs). For discussion purposes, in the following descriptions, the term “liquid crystal(s)” or “LC(s)” may encompass both mesogenic and LC materials. In some embodiments, the birefringent medium may also include or be mixed with other ingredients, such as solvents, initiators (e.g., photo-initiators or thermal initiators), chiral dopants, or surfactants, etc. In some embodiments, the birefringent medium may not have an intrinsic or induced chirality. In some embodiments, the birefringent medium may have an intrinsic or induced chirality. For example, in some embodiments, the birefringent medium may include a host birefringent material and a chiral dopant doped into the host birefringent material at a predetermined concentration. 
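The handedness control described above follows from Jones calculus: superposing equal-amplitude circular beams of opposite handedness yields a linear polarization whose azimuth rotates at half the relative phase between the beams, and swapping the two handednesses reverses the sense of rotation. A minimal sketch, using one common Jones-vector convention for circular polarization (the convention and helper names are not notation from the disclosure):

```python
import numpy as np

# One common Jones-vector convention for circular polarization.
RCP = np.array([1.0, -1.0j]) / np.sqrt(2.0)
LCP = np.array([1.0,  1.0j]) / np.sqrt(2.0)

def azimuth_deg(field):
    """Azimuth of a (linear) polarization state given as a Jones vector."""
    ex, ey = field
    return np.degrees(np.arctan2((ey * np.conj(ex)).real, (ex * np.conj(ex)).real))

for deg in (0, 60, 120):
    delta = np.radians(deg)
    a = azimuth_deg(RCP + np.exp(1.0j * delta) * LCP)  # one handedness order
    b = azimuth_deg(LCP + np.exp(1.0j * delta) * RCP)  # handednesses swapped
    print(deg, round(a), round(b))
# The azimuth rotates at half the relative phase (delta / 2), and swapping the
# beam handednesses reverses the sense of rotation.
```

Since the relative phase varies linearly across the recording plane for two tilted beams, this half-rate rotation is what writes the spatially rotating linear polarization, i.e., the orientation pattern, into the recording medium.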
The chirality may be introduced by the chiral dopant doped into the host birefringent material, e.g., chiral dopant doped into nematic LCs, or chiral reactive mesogens (“RMs”) doped into achiral RMs. RMs may be also referred to as a polymerizable mesogenic or liquid-crystalline compound, or polymerizable LCs. In some embodiments, the birefringent medium may include a birefringent material having an intrinsic molecular chirality, and chiral dopants may not be doped into the birefringent material. The chirality of the birefringent medium may result from the intrinsic molecular chirality of the birefringent material. For example, the birefringent material may include chiral liquid crystal molecules, or molecules having one or more chiral functional groups. In some embodiments, the birefringent material may include twist-bend nematic LCs (or LCs in twist-bend nematic phase), in which LC directors may exhibit periodic twist and bend deformations forming a conical helix with doubly degenerate domains having opposite handednesses. The LC directors of twist-bend nematic LCs may be tilted with respect to the helical axis. Thus, the twist-bend nematic phase may be considered as the generalized case of the conventional nematic phase in which the LC directors are perpendicular to the helical axis. In some embodiments, a birefringent medium may be dissolved in a solvent to form a solution. A suitable amount of the solution may be dispensed (e.g., coated, or sprayed, etc.) on the patterned recording medium layer410to form the birefringent medium layer415. In some embodiments, the solution containing the birefringent medium may be coated on the patterned recording medium layer410using a suitable process, e.g., spin coating, slot coating, blade coating, spray coating, or jet (ink-jet) coating or printing. In some embodiments, the birefringent medium may be heated to remove the remaining solvent. This process may be referred to as a pre-exposure heating. 
The patterned recording medium layer410may be configured to provide a surface alignment (e.g., planar alignment, or homeotropic alignment, etc.) to optically anisotropic molecules (e.g., LC molecules, RM molecules, etc.) in the birefringent medium. For example, the patterned recording medium layer410may at least partially align the LC molecules or RM molecules in the birefringent medium that are in contact with the patterned recording medium layer410in the grating pattern. In other words, the LC molecules or RM molecules in the birefringent medium may be at least partially aligned along the local alignment directions of the anisotropic photo-sensitive units in the patterned recording medium layer410to form the grating pattern. Thus, the grating pattern recorded in the patterned recording medium layer410(or the in-plane orientation pattern of the optic axis of the recording medium layer410) may be transferred to the birefringent medium, and hence to the birefringent medium layer415. That is, the patterned recording medium layer410may function as a photo-alignment material (“PAM”) layer for the LCs or RMs in the birefringent medium. Such an alignment procedure may be referred to as a surface-mediated photo-alignment. In some embodiments, after the LCs or RMs in the birefringent medium are aligned by the patterned recording medium layer410, the birefringent medium may be heat treated (e.g., annealed) in a temperature range corresponding to a nematic phase of the LCs or RMs in birefringent medium to enhance the alignments (or orientation pattern) of the LCs and/or RMs (not shown inFIG.4C). This process may be referred to as a post-exposure heat treatment (e.g., annealing). In some embodiments, the process of heat treating (e.g., annealing) the birefringent medium may be omitted. 
In some embodiments, when the birefringent medium includes polymerizable LCs (e.g., RMs), after the RMs are aligned by the patterned recording medium layer410, the RMs may be polymerized, e.g., thermally polymerized or photo-polymerized, to solidify and stabilize the orientational pattern of the optic axis of the birefringent medium, thereby forming the birefringent medium layer415. In some embodiments, as shown inFIG.4D, the birefringent medium may be irradiated with, e.g., a UV light444. Under a sufficient UV light irradiation, the birefringent medium may be polymerized to stabilize the orientational pattern of the optic axis of the birefringent medium. In some embodiments, the polymerization of the birefringent medium under the UV light irradiation may be carried out in air, in an inert atmosphere formed, for example, by nitrogen, argon, carbon-dioxide, or in vacuum. Thus, a polarization selective grating400may be obtained based on the polarization interference exposure process and surface-mediated photo-alignment. In some embodiments, the process of thermo- or photo-polymerization of the birefringent medium may be omitted. In some embodiments, the polarization selective grating400fabricated based on the fabrication processes shown inFIGS.4A-4Dmay be a passive polarization selective grating, such as a passive PBP grating or a passive PVH grating. In some embodiments, as shown inFIG.4D, the substrate405and/or the recording medium layer410may be used to fabricate, store, or transport the polarization selective grating400. In some embodiments, the substrate405and/or the recording medium layer410may be detachable or removable from other portions of the polarization selective grating400after the other portions of the polarization selective grating400are fabricated or transported to another place or device. 
That is, the substrate405and/or the patterned recording medium layer410may be used in fabrication, transportation, and/or storage to support the birefringent medium layer415, and may be separated or removed from the birefringent medium layer415when the fabrication of the polarization selective grating400is completed, or when the polarization selective grating400is to be implemented in an optical device. In some embodiments, the substrate405and/or the recording medium layer410may not be separated from the polarization selective grating400. FIGS.5A and5Bschematically illustrate processes for fabricating a PSOE, according to another embodiment of the present disclosure. The fabrication processes shown inFIGS.5A and5Bmay include steps or processes similar to those shown inFIGS.4A-4D. The PSOE fabricated based on the processes shown inFIGS.5A and5Bmay include elements similar to those included in the PSOE fabricated based on the processes shown inFIGS.4A-4D. Descriptions of the similar steps and similar elements can refer to the descriptions rendered above in connection withFIGS.4A-4D. The PSOE fabricated based on the fabrication processes shown inFIGS.5A and5Bmay be an active PSOE, such as an active PBP grating or an active PVH grating, etc. Although the substrate and layers are shown as having flat surfaces, in some embodiments, the substrate and layers formed thereon may have curved surfaces. As shown inFIG.5A, two substrates405and405′ (referred to as a first substrate405and a second substrate405′) may be assembled to form an LC cell500. For example, the two substrates405and405′ may be bonded to each other via an adhesive412(e.g., optical adhesive412) to form the LC cell500. At least one (e.g., each) of the two substrates405and405′ may be provided with one or more conductive electrode layers and a patterned recording medium layer. 
For example, two conductive electrode layers540and540′ may be formed at opposing surfaces of the substrates405and405′, and two patterned recording medium layers410and410′ may be formed on opposing surfaces of the two conductive electrode layers540and540′. The patterned recording medium layers410and410′ may be fabricated at the opposing surfaces of the conductive electrode layers540and540′ following steps or processes similar to those shown inFIGS.4A and4B. The conductive electrode layer540or540′ may be transmissive and/or reflective at least in the same spectrum band as the substrate405or405′. The conductive electrode layer540or540′ may be a planar continuous electrode layer or a patterned electrode layer. As shown inFIG.5A, a gap or space may exist between the patterned recording medium layers410and410′. After the LC cell500is assembled, as shown inFIG.5B, active LCs that are reorientable by an external field, e.g., an electric field, may be filled into the space formed between the patterned recording medium layers410and410′ within the LC cell500to form an active LC layer505. The patterned recording medium layer410or410′ may function as a PAM layer for the active LCs filled into the LC cell500, such that the active LCs may be at least partially aligned by the patterned recording medium layer410or410′ according to a grating pattern to form the active LC layer505. Thus, the patterned recording medium layers410and410′ may also be referred to as PAM layers410and410′. The LC cell500filled with the active LCs may be sealed via, e.g., the adhesive412, and an active PSOE (e.g., polarization selective grating)510may be obtained. The active PSOE (e.g., polarization selective grating)510may be switchable by a voltage applied to the conductive electrode layers540and540′. The switching of the active PSOE510may be controlled by a controller (not shown) similar to the controller217shown inFIG.2A.
For illustrative purposes,FIGS.5A and5Bshow that the patterned recording medium layers410and410′ (or PAM layers410and410′) may be disposed at opposing inner surfaces of the two substrates405and405′. In some embodiments, each of the PAM layers410and410′ disposed at the two substrates405and405′ may be configured to provide a planar alignment (or an alignment with a small pretilt angle). The PAM layers410and410′ may provide parallel or anti-parallel surface alignments. In some embodiments, the PAM layers410and410′ disposed at the two substrates405and405′ may be configured to provide hybrid surface alignments. For example, the PAM layer410disposed at the substrate405may be configured to provide a planar alignment (or an alignment with a small pretilt angle), and the PAM layer410′ disposed at the other substrate405′ may be configured to provide a homeotropic alignment. Although not shown, in some embodiments, only one of the substrates405and405′ may be provided with the PAM layer410or410′. For illustrative purposes,FIGS.5A and5Bshow that conductive electrode layers540and540′ may be disposed at the two substrates405and405′. The conductive electrode layer (540or540′) may be disposed between the patterned recording medium layer (410or410′) and the substrate (405or405′). In the embodiment shown inFIGS.5A and5B, each of the conductive electrode layers540and540′ may be a continuous planar electrode layer. A driving voltage may be applied to the conductive electrode layers540and540′ to generate a vertical electric field to reorient the LC molecules, thereby switching the optical properties of the active PSOE (e.g., polarization selective grating)510. As shown inFIG.5B, the conductive electrode layers540and540′ may be disposed at two sides of the active LC layer505. In some embodiments, the two conductive electrode layers540and540′ may be disposed at the same side of the active LC layer505.
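The planar, homeotropic, and hybrid surface alignments described above can be contrasted with a minimal sketch of the director tilt across the cell gap. The linear interpolation used here is a zeroth-order simplification assumed for illustration; real tilt profiles are set by elastic-energy minimization.

```python
import numpy as np

def director_tilt_profile(pretilt_bottom_deg, pretilt_top_deg, n_samples=11):
    """Zeroth-order sketch of the tilt angle of the LC directors across the
    cell gap: a linear interpolation between the two surface pretilts.
    0 degrees = planar (in-plane), 90 degrees = homeotropic (vertical)."""
    return np.linspace(pretilt_bottom_deg, pretilt_top_deg, n_samples)

# Parallel planar alignment with a small pretilt at both PAM layers:
planar = director_tilt_profile(2, 2)
assert np.allclose(planar, 2)  # uniform small tilt across the cell

# Hybrid alignment: planar at one PAM layer, homeotropic at the other:
hybrid = director_tilt_profile(2, 90)
assert hybrid[0] == 2 and hybrid[-1] == 90  # tilt sweeps across the gap
```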
For example, as shown inFIG.5C, two substrates405and405′ may be assembled to form an LC cell520. One substrate405′ (e.g., an upper substrate) may not be provided with a conductive electrode layer, while the other substrate405(e.g., a lower substrate) may be provided with two conductive electrode layers (e.g.,540aand540b) and an electrically insulating layer560disposed between the two conductive electrode layers. In other words, the two conductive electrode layers540aand540bmay be disposed at the same side of the active LC layer505. The two conductive electrode layers540aand540bmay be a continuous planar electrode layer540aand a patterned electrode layer540b. The patterned electrode layer540bmay include a plurality of striped electrodes arranged in parallel in an interleaved manner. After the LC cell520is filled with active LCs to form the active LC layer505, an active PSOE (e.g., polarization selective grating)525may be obtained. A voltage may be applied between the continuous planar electrode layer540aand the patterned electrode layer540bdisposed at the same side of the active LC layer505to generate a horizontal electric field to reorient the LC molecules, thereby switching the optical properties of the fabricated active PSOE525(e.g., polarization selective grating). In some embodiments, as shown inFIG.5D, two substrates405and405′ may be assembled to form an LC cell570. One substrate405′ (e.g., an upper substrate) may not be provided with a conductive electrode layer, while the other substrate405(e.g., a lower substrate) may be provided with a conductive electrode layer580. The conductive electrode layer580may include interdigitated electrodes, which may include two individually addressable comb-like electrode structures (or arrays)541and542. After the LC cell570is filled with active LCs to form the active LC layer505, an active PSOE (e.g., polarization selective grating)575may be obtained.
A voltage may be applied between the comb-like electrode structures541and542disposed at the same side of the active LC layer505to generate a horizontal electric field to reorient the LC molecules in the active LC layer505, thereby switching the optical properties of the fabricated active PSOE575(e.g., active polarization selective grating). Referring back toFIGS.5A-5D, in some embodiments, the recording medium layer(s) may not be optically patterned before the LC cell is assembled. Instead, the recording medium layer(s) may be optically patterned after the LC cell is assembled. For example, two substrates405and405′ may be assembled to form an LC cell. At least one of the two substrates405and405′ may be provided with one or more conductive electrode layers and a recording medium layer (that has not been optically patterned yet). Then the LC cell may be exposed to a polarization interference pattern, which may be similar to that shown inFIG.4B. Accordingly, the recording medium layer disposed at the substrate may be optically patterned to provide an alignment pattern corresponding to a grating pattern. After the LC cell is filled with active LCs and sealed, an active PSOE (e.g., polarization selective grating) may be obtained. FIGS.6A and6Bschematically illustrate processes for fabricating a PSOE (e.g., a polarization selective grating), according to another embodiment of the present disclosure. The fabrication processes may include holographic recording and bulk-mediated photo-alignment. The fabrication processes shown inFIGS.6A and6Bmay include steps similar to those shown inFIGS.4A and4B. The PSOE (e.g., polarization selective grating) fabricated based on the processes shown inFIGS.6A and6Bmay include elements similar to the PSOE (e.g., polarization selective grating) fabricated based on the processes shown inFIGS.4A and4B. 
Descriptions of the similar steps and similar elements, structures, or functions can refer to the descriptions rendered above in connection withFIGS.4A and4B. The PSOE (e.g., polarization selective grating) fabricated based on the fabrication processes shown inFIGS.6A and6Bmay be a passive PSOE (e.g., a passive polarization selective grating). Although the substrate and layers are shown as having flat surfaces, in some embodiments, the substrate and layers formed thereon may have curved surfaces. Similar to the embodiment shown inFIGS.4A and4B, the processes shown inFIGS.6A and6Bmay include dispensing (e.g., coating, depositing, etc.) a recording medium on a surface (e.g., a top surface) of a substrate605to form a recording medium layer620. The recording medium may be a polarization sensitive recording medium. The recording medium may include an optically recordable and polarization sensitive material (e.g., a photo-alignment material) configured to have a photoinduced optical anisotropy when exposed to a polarized light irradiation. Molecules (or fragments) and/or photo-products of the optically recordable and polarization sensitive material may generate anisotropic angular distributions in a film plane of a layer of the recording medium under a polarized light irradiation. In some embodiments, the recording medium may include or be mixed with other ingredients, such as a solvent in which the optically recordable and polarization sensitive materials may be dissolved to form a solution, and photo-sensitizers. The solution may be dispensed on the substrate605using a suitable process, e.g., spin coating, slot coating, blade coating, spray coating, or jet (ink-jet) coating or printing. The solvent may be removed from the coated solution using a suitable process, e.g., drying, or heating, leaving the recording medium on the substrate605. 
After the recording medium layer620is formed on the substrate605, as shown inFIG.6B, the recording medium layer620may be exposed to a polarization interference pattern generated based on two recording beams640and642. The two recording beams640and642may be referred to as the reference beam and the signal beam, respectively. The two recording beams640and642may be two coherent circularly polarized beams with opposite handednesses. In some embodiments, the two recording beams640and642may represent, respectively, the beam S235and the beam S234output from the SRG211operating at the Littrow configuration and the waveplate213shown inFIG.2B. The recording medium layer620may be optically patterned when exposed to the polarization interference pattern generated based on the two recording beams640and642during the polarization interference exposure process. An orientation pattern of an optic axis of the recording medium layer620in an exposed region may be defined during the polarization interference exposure process. In the embodiment shown inFIGS.6A and6B, the recording medium may include a photo-sensitive polymer. Molecules of the photo-sensitive polymer may include one or more polarization sensitive photo-reactive groups embedded in a main polymer chain or a side polymer chain. During the polarization interference exposure process of the recording medium layer620, a photo-alignment of the polarization sensitive photo-reactive groups may occur within (or in, inside) a volume of the recording medium layer620. That is, a 3D polarization field generated by the interference of the two recording beams640and642may be directly recorded within (or in, inside) the volume of the recording medium layer620. Such an alignment procedure shown inFIG.6Bmay be referred to as a bulk-mediated photo-alignment.
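The polarization field recorded in the recording medium layer620arises from the interference of the two oppositely-handed circularly polarized recording beams. A short Jones-calculus sketch (the wavelength and beam half-angle are illustrative assumptions, and handedness sign conventions vary) confirms that the superposition is everywhere linearly polarized, with an azimuth that rotates linearly across one fringe period — which is exactly the kind of orientation pattern a grating requires.

```python
import numpy as np

# Jones vectors for the two circularly polarized recording beams
# (one common convention; the handedness labels depend on the convention).
rcp = np.array([1, -1j]) / np.sqrt(2)
lcp = np.array([1,  1j]) / np.sqrt(2)

wavelength = 0.532           # um, illustrative
half_angle = np.deg2rad(10)  # half-angle between the beams, illustrative
period = wavelength / (2 * np.sin(half_angle))  # fringe period of the pattern

def total_field(x):
    """Superpose the two beams at transverse position x (um); each beam
    accumulates half of the position-dependent phase difference."""
    phi = np.pi * x / period
    return rcp * np.exp(1j * phi) + lcp * np.exp(-1j * phi)

for x in np.linspace(0, period, 5, endpoint=False):
    f = total_field(x)
    # Linearly polarized: up to a global phase the Jones vector is real
    # (here the residual imaginary parts cancel exactly).
    assert np.allclose(f.imag, 0, atol=1e-12)
    # The azimuth rotates linearly across the pattern: psi(x) = pi*x/period.
    psi = np.arctan2(f[1].real, f[0].real) % np.pi
    assert np.isclose(psi, (np.pi * x / period) % np.pi, atol=1e-9)
```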
In the embodiment shown inFIGS.6A and6B, an in-plane orientation pattern of the optic axis may be directly recorded in the recording medium layer620via the bulk-mediated photo-alignment in an exposed region. In some embodiments, the in-plane orientation pattern of the optic axis may correspond to a grating pattern. A step of disposing an additional birefringent medium layer on the patterned recording medium layer620may be omitted. The patterned recording medium layer620may function as a PSOE (e.g., polarization selective grating)600. In some embodiments, multiple in-plane orientation patterns of the optic axis (or multiple grating patterns) may be recorded in different regions of the recording medium layer620through multiple polarization interference exposure processes. The multiple grating patterns may have the same or different sizes, shapes, grating periods, grating orientations, and/or handednesses of the in-plane modulation. In some embodiments, the photo-sensitive polymer included in the recording medium layer620may include an amorphous polymer, an LC polymer, etc. The molecules of the photo-sensitive polymer may include one or more polarization sensitive photo-reactive groups embedded in a main polymer chain or a side polymer chain. In some embodiments, the polarization sensitive photo-reactive group may include an azobenzene group, a cinnamate group, or a coumarin group, etc. In some embodiments, the photo-sensitive polymer may be an amorphous polymer, which may be initially optically isotropic prior to undergoing the polarization interference exposure process, and may exhibit an induced (e.g., photo-induced) optical anisotropy after being subjected to the polarization interference exposure process. In some embodiments, the photo-sensitive polymer may be an LC polymer, in which the birefringence and in-plane orientation pattern may be recorded due to an effect of photo-induced optical anisotropy.
In some embodiments, the photo-sensitive polymer may be an LC polymer with a polarization sensitive cinnamate group embedded in a side polymer chain. An example of the LC polymer with a polarization sensitive cinnamate group embedded in a side polymer chain is an LC polymer M1. The LC polymer M1has a nematic mesophase in a temperature range of about 65° C. to about 400° C. An optical anisotropy may be induced by irradiating a film of the LC polymer M1with a polarized UV light (e.g., a laser light with a wavelength of 425 nm or 455 nm). In some embodiments, the induced optical anisotropy may be subsequently enhanced by more than an order of magnitude by annealing the patterned recording medium layer620at a temperature range of about 65° C. to about 400° C. In some embodiments, the annealing of the patterned recording medium layer620may be omitted. The LC polymer M1is an example of an LC polymer with a polarization sensitive cinnamate group embedded in a side polymer chain. The dependence of the photo-induced birefringence on exposure energy is qualitatively similar for other materials from liquid crystalline polymers of M series. Liquid crystalline polymers of M series are discussed in U.S. Patent Application Publication No. US 2020/0081398, which is incorporated by reference for all purposes (including the descriptions of the M series). In some embodiments, when the recording medium layer620includes an LC polymer, the patterned recording medium layer620may be heat treated (e.g., annealed) in a temperature range corresponding to a liquid crystalline state of the LC polymer to enhance the photo-induced optical anisotropy of the LC polymer (not shown inFIG.6B). The recording medium layer620for a bulk-mediated photo-alignment shown inFIG.6Bmay be relatively thicker than the recording medium layer410for a surface-mediated photo-alignment shown inFIGS.4B-4D. The substrate605may be similar to the substrate405shown inFIGS.4A-4D. 
In some embodiments, the substrate605may be used to fabricate, store, or transport the PSOE (e.g., polarization selective grating)600. In some embodiments, the substrate605may be detachable or removable from the PSOE600after the PSOE600is fabricated or transported to another place or device. That is, the substrate605may be used in fabrication, transportation, and/or storage to support the PSOE600provided on the substrate605, and may be separated or removed from the PSOE600when the fabrication of the PSOE600is completed, or when the PSOE600is to be implemented in an optical device. In some embodiments, the substrate605may not be separated from the PSOE600. FIG.10schematically illustrates a system1000configured to generate a polarization interference pattern that may be recorded in a recording medium layer1010, according to an embodiment of the present disclosure. The system1000may include elements, structures, and/or functions that are the same as or similar to those included in the system200shown inFIGS.2A-2D. Descriptions of the same or similar elements, structures, and/or functions can refer to the above descriptions rendered in connection withFIGS.2A-2D. The recording medium layer1010may be similar to the recording medium layer210shown inFIGS.2A-2D. As shown inFIG.10, the system1000may include a light source1001, a beam conditioning device1003, and an SRG1011, which may be similar to the light source201, the beam conditioning device203, and the SRG211shown inFIGS.2A-2D, respectively. In some embodiments, the beam conditioning device1003may include a first lens1003a, a pinhole aperture1003c, and a second lens1003barranged in an optical series, which may be similar to the first lens203a, the pinhole aperture203c, and the second lens203bshown inFIG.2A. For example, the beam conditioning device1003may be configured to condition (e.g., polarize, expand, collimate, remove noise from, etc.)
a beam S1022emitted from the light source1001, and output a collimated beam S226with a predetermined beam size and a predetermined polarization. The SRG1011may be orientated with respect to the optical axis of the beam conditioning device1003or a propagation direction of the beam S226, such that the beam S226may be incident onto the SRG1011at a predetermined incidence angle (which is non-zero). In some embodiments, the system1000may include a movable stage, on which the SRG1011may be mounted. The movable stage may be similar to the movable stage219shown inFIGS.2A,2C, and2D. The movable stage may be configured to translate and/or rotate the SRG1011, thereby adjusting the orientation and/or position of the SRG1011relative to the propagation direction of the beam S226. When the orientation and/or position of the SRG1011is adjusted, the incidence angle of the beam S226relative to the SRG1011may be adjustable. The SRG1011may be configured to operate at the Littrow configuration for the beam S226having an incidence angle and a wavelength. The SRG1011may be configured to forwardly diffract the beam S226substantially evenly into two paths: a first beam S1032in a first path (e.g., a reference path) and a second beam S1033in a second path (e.g., a signal path). In some embodiments, the first beam S1032and the second beam S1033may be a −1storder diffracted beam S1032and a 0thorder diffracted beam S1033, respectively. In some embodiments, the −1storder diffracted beam S1032and the 0thorder diffracted beam S1033may be two linearly polarized beams having orthogonal polarizations. In some embodiments, the −1storder diffracted beam S1032and the 0thorder diffracted beam S1033may be two linearly polarized beams having a substantially same polarization. In some embodiments, the −1storder diffracted beam S1032and the 0thorder diffracted beam S1033may have a substantially same light intensity. 
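The Littrow configuration of the SRG1011can be checked numerically with the transmission grating equation (the wavelength, grating period, and sign convention below are illustrative assumptions): at the Littrow incidence angle, the −1st order leaves at minus the incidence angle while the 0th order passes straight through, so the two forward orders have diffraction angles of equal magnitude and opposite signs.

```python
import numpy as np

# Illustrative values (not from the disclosure): a 532 nm recording beam
# and a 1 um surface-relief grating period.
wavelength = 532e-9  # m
period = 1e-6        # m

# Littrow condition for the -1st transmitted order: the diffracted beam
# leaves at minus the incidence angle, which requires
#   sin(theta_littrow) = wavelength / (2 * period)
theta_littrow = np.arcsin(wavelength / (2 * period))

def diffraction_angle(theta_in, m):
    """Transmission grating equation:
    sin(theta_m) = sin(theta_in) + m * wavelength / period."""
    s = np.sin(theta_in) + m * wavelength / period
    if abs(s) > 1:
        return None  # evanescent order
    return np.arcsin(s)

theta0 = diffraction_angle(theta_littrow, 0)     # 0th order: straight through
theta_m1 = diffraction_angle(theta_littrow, -1)  # -1st order

# At Littrow the two forward orders leave with the same magnitude and
# opposite signs, like the reference and signal beams S1032 and S1033.
assert np.isclose(theta0, theta_littrow)
assert np.isclose(theta_m1, -theta_littrow)
print(np.rad2deg(theta_littrow))  # Littrow angle in degrees (about 15.4 here)
```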
In some embodiments, the −1storder diffracted beam S1032and the 0thorder diffracted beam S1033may have different light intensities. Diffraction angles of the −1st order diffracted beam S1032and the 0thorder diffracted beam S1033may have a substantially same value and opposite signs. In some embodiments, the system1000may include one or more reflectors (e.g., mirrors)1015aand1015bconfigured to change the propagating direction of the first beam S1032by reflecting the first beam S1032in different directions. The combination of the reflectors1015aand1015bmay add multiple turns in the first path, such that the first beam S1032propagates in a direction substantially perpendicular to a propagation direction of the second beam S1033propagating in the second path. That is, a direction of the first path may be changed by the reflectors1015aand1015b, such that the first path becomes perpendicular to the second path at a non-polarizing beam splitter (“NPBS”)1019. In some embodiments, the system1000may include a first waveplate1013adisposed in the first path along which the first beam S1032propagates, and a second waveplate1013bdisposed in the second path along which the second beam S1033propagates. The first waveplate1013aand the second waveplate1013bmay be similar to the waveplate213shown inFIGS.2A-2D. The first waveplate1013aand the second waveplate1013bmay be configured to convert the first beam S1032and the second beam S1033into circularly polarized beams with orthogonal polarizations (e.g., opposite handednesses), respectively. For example, the first waveplate1013aand the second waveplate1013bmay be QWPs. A polarization axis of the first waveplate1013amay be oriented relative to the polarization direction of the first beam S1032to convert the first beam S1032into a circularly polarized beam S1036having a first handedness. The beam S1036may be a collimated beam having a planar wavefront.
A polarization axis of the second waveplate1013bmay be oriented relative to the polarization direction of the second beam S1033to convert the second beam S1033into a circularly polarized beam S1035having a second handedness that is opposite to the first handedness. In some embodiments, the system1000may include a third lens (e.g., a focusing lens)1017disposed in the second path between the second waveplate1013band the recording medium layer1010. The beam S1035(which may be a collimated beam having a planar wavefront) may be transmitted through the third lens1017as a beam S1037having a parabolic wavefront. In some embodiments, a distance between the third lens1017and the recording medium layer1010may be about twice the focal length of the third lens1017. In some embodiments, the non-polarizing beam splitter (“NPBS”)1019may be disposed in the second path between the third lens1017and the recording medium layer1010. The NPBS1019may be configured to combine the first beam S1032(which has become S1036) propagating along the first path, and the beam S1033(which has become S1037having a non-planar (e.g., parabolic) wavefront) propagating in the second path. For example, the NPBS1019may be configured to substantially transmit the beam S1037as a beam S1039propagating in the +z-axis direction (or along the direction of the second path), and substantially reflect the beam S1036propagating in the +y-axis direction as a beam S1038propagating in the +z-axis direction (or along the direction of the second path). The beam S1039and the beam S1038output from the NPBS1019may interfere with each other to generate a polarization interference pattern, which may be recorded in the recording medium layer1010. After a sufficient exposure, the polarization interference pattern may be recorded in the recording medium layer1010to define an orientation pattern of an optic axis of the recording medium layer1010.
In some embodiments, the orientation of the optic axis of the recording medium layer1010may spatially vary in at least one in-plane direction (e.g., a plurality of radial directions) with a varying pitch. In some embodiments, the orientation pattern of the optic axis of the recording medium layer1010may correspond to a lens pattern. A polarization selective lens may be fabricated based on the exposed (or optically patterned) recording medium layer1010. For example, in some embodiments, the exposed (or optically patterned) recording medium layer1010may function as a polarization selective lens (e.g., a PBP lens or a PVH lens, etc.). In some embodiments, a birefringent medium may be disposed at (e.g., on) the exposed (or optically patterned) recording medium layer1010, similar to the process shown inFIG.4C. Optically anisotropic molecules (e.g., LC molecules) in the birefringent medium may be at least partially aligned by the exposed (or optically patterned) recording medium layer1010according to the lens pattern. In some embodiments, the birefringent medium disposed on the exposed (or optically patterned) recording medium layer1010may be further polymerized, similar to the process shown inFIG.4D. The polymerized birefringent medium may form a passive polarization selective lens (e.g., a passive PBP lens or a passive PVH lens, etc.). In some embodiments, two substrates each provided with the exposed (or optically patterned) recording medium layer1010may be arranged in parallel to form a cell with a space. A birefringent medium (e.g., active LCs) may be filled into the space of the cell. In some embodiments, at least one of the two substrates may include two electrodes configured to provide a driving voltage to the birefringent medium (e.g., active LCs). The cell filled with the birefringent medium (e.g., active LCs) may function as an active polarization selective lens (e.g., an active PBP lens, or an active PVH lens, etc.). 
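The lens pattern described above, an optic-axis orientation that varies radially with a varying pitch, can be sketched for a Pancharatnam-Berry (PBP) lens. The wavelength and focal length below are illustrative assumptions; the recorded geometric phase is twice the optic-axis azimuth, which yields the parabolic lens phase.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the disclosure):
wavelength = 532e-9  # m
focal_length = 0.1   # m

def optic_axis_azimuth(r):
    """PBP lens recording: the optic-axis azimuth follows half the lens
    phase, psi(r) = pi * r**2 / (2 * wavelength * focal_length)."""
    return np.pi * r**2 / (2 * wavelength * focal_length)

def local_period(r):
    """Radial distance over which psi advances by pi (one local period).
    Since d(psi)/dr = pi * r / (wavelength * focal_length), the local
    period is approximately wavelength * focal_length / r."""
    return wavelength * focal_length / r

r = np.linspace(1e-4, 5e-3, 50)  # radial positions, m

# The geometric (Pancharatnam-Berry) phase is twice the azimuth, giving a
# parabolic phase profile like the wavefront of the beam S1037:
phase = 2 * optic_axis_azimuth(r)
assert np.allclose(phase, np.pi * r**2 / (wavelength * focal_length))

# The orientation pattern varies with a varying pitch: the local period
# shrinks toward the lens edge.
periods = local_period(r)
assert np.all(np.diff(periods) < 0)
```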
Polarization selective optical elements (e.g., PVH or PBP gratings, PVH or PBP lenses, etc.) fabricated based on the fabrication processes and systems disclosed herein have various applications in a number of technical fields. Some exemplary applications in augmented reality (“AR”), virtual reality (“VR”), and mixed reality (“MR”) fields or some combinations thereof will be explained below. Near-eye displays (“NEDs”) have been widely used in a wide variety of applications, such as aviation, engineering, scientific research, medical devices, computer games, videos, sports, training, and simulations. NEDs can function as a VR device, an AR device, and/or an MR device. When functioning as AR and/or MR devices, NEDs are at least partially transparent from the perspective of a user, enabling the user to view a surrounding real world environment. Such NEDs are also referred to as optically see-through NEDs. When functioning as VR devices, NEDs are opaque such that the user is substantially immersed in the VR imagery provided via the NEDs. An NED may be switchable between functioning as an optically see-through device and functioning as a VR device. Pupil-replication (or pupil-expansion) waveguide display systems with diffractive coupling structures have been implemented in NEDs, which can potentially offer eye-glass form factors, a moderately large field of view (“FOV”), a high transmittance, and a large eyebox. A pupil-replication waveguide display system includes a display element (e.g., an electronic display) configured to generate an image light, and an optical waveguide configured to guide the image light to an eyebox provided by the waveguide display system. Diffraction gratings may be coupled with the optical waveguide, and may function as in-coupling and out-coupling diffractive elements.
The optical waveguide may also function as an AR and/or MR combiner to combine the image light and a light from the real world environment, such that virtual images generated by the display element may be superimposed on real-world images or see-through images. In a pupil-replication waveguide display system, a waveguide coupled with the in-coupling and out-coupling diffractive elements may expand the exit pupil along a light propagation direction of a light propagating in and along the waveguide. As the light propagating in and along the waveguide is repeatedly diffracted out of the waveguide by the out-coupling diffractive element, with a portion of the light exiting the waveguide at each location of the waveguide, the illuminance (or light intensity) of the light exiting the waveguide may decrease (i.e., become weaker) along the light propagating direction. Thus, the illuminance (or light intensity) of the light output from the waveguide may be non-uniform along the waveguide. A uniform illuminance over an expanded exit pupil may be desirable for a pupil-replication waveguide display system to maintain a wide FOV. FIG.7Aillustrates a schematic diagram of a waveguide display system700, according to an embodiment of the present disclosure. The waveguide display system700may provide pupil replication (or pupil expansion). The waveguide display system700may be implemented in NEDs for VR, AR, and/or MR applications. The waveguide display system700may include an out-coupling diffractive element745(or out-coupling element745) including a polarization selective grating (e.g., a PVH grating) fabricated based on the disclosed methods and systems. In some embodiments, the polarization selective grating (e.g., the PVH grating) may be fabricated to provide a predetermined non-uniform diffraction efficiency profile (any suitable diffraction efficiency variation profile) in one or more dimensions. 
For example, the one or more dimensions may be the x-axis dimension, the y-axis dimension, or both. The predetermined non-uniform diffraction efficiency profile may provide a predetermined illuminance distribution (or profile) along the one or more dimensions of the expanded exit pupil. In some embodiments, with the predetermined non-uniform diffraction efficiency profile, the polarization selective grating (e.g., the PVH grating) may provide a predetermined illuminance distribution with an improved uniformity over an expanded exit pupil. The predetermined illuminance distribution may be any suitable illuminance distribution profile in the one or more dimensions, such as a Gaussian distribution or any other desirable distribution. In some embodiments, the predetermined illuminance distribution may not be uniform depending on the application need. As shown inFIG.7A, the waveguide display system700may include a light source assembly705, a waveguide710, and a controller715. The light source assembly705may include a light source720and a light conditioning system725. In some embodiments, the light source720may be a light source configured to generate an at least partially coherent light. The light source720may include, e.g., a laser diode, a vertical cavity surface emitting laser, a light emitting diode, or a combination thereof. In some embodiments, the light source720may be a display panel, such as a liquid crystal display (“LCD”) panel, a liquid-crystal-on-silicon (“LCoS”) display panel, an organic light-emitting diode (“OLED”) display panel, a micro light-emitting diode (“micro-LED”) display panel, a laser scanning display panel, a digital light processing (“DLP”) display panel, or a combination thereof. In some embodiments, the light source720may be a self-emissive panel, such as an OLED display panel or a micro-LED display panel.
In some embodiments, the light source720may be a display panel that is illuminated by an external source, such as an LCD panel, an LCoS display panel, or a DLP display panel. Examples of an external source may include a laser, an LED, an OLED, or a combination thereof. The light conditioning system725may include one or more optical components configured to condition the light from the light source720. For example, the controller715may control the light conditioning system725to condition the light from the light source720, which may include, e.g., transmitting, attenuating, expanding, collimating, and/or adjusting orientation of the light. The light source assembly705may generate an image light730and output the image light730to an in-coupling element735disposed at a first portion of the waveguide710. The waveguide710may direct the image light730to an out-coupling element745disposed at a second portion of the waveguide710. The out-coupling element745may couple the image light730out of the waveguide710to an eye760positioned in an eye-box765of the waveguide display system700. An exit pupil762may be a location where the eye760is positioned in the eye-box765. Although one exit pupil762is shown for illustrative purposes, the waveguide display system700may provide a plurality of exit pupils. The in-coupling element735may couple the image light730into the waveguide710at an angle such that the image light730may propagate through total internal reflection (“TIR”) inside and along the waveguide710toward the out-coupling element745. The first portion and the second portion may be located at different ends of the waveguide710. The out-coupling element745may be configured to couple the image light730out of the waveguide710toward the eye760. In some embodiments, the in-coupling element735may couple the image light730into a TIR path inside the waveguide710. The image light730may propagate inside the waveguide710through TIR along the TIR path.
The waveguide710may include a first surface or side710-1facing the real-world environment and an opposing second surface or side710-2facing the eye760. In some embodiments, as shown inFIG.7A, the in-coupling element735may be disposed at the second surface710-2of the waveguide710. In some embodiments, the in-coupling element735may be integrally formed as a part of the waveguide710at the second surface710-2. In some embodiments, the in-coupling element735may be separately formed, and may be disposed at (e.g., affixed to) the second surface710-2of the waveguide710. In some embodiments, the in-coupling element735may be disposed at the first surface710-1of the waveguide710. In some embodiments, the in-coupling element735may be integrally formed as a part of the waveguide710at the first surface710-1. In some embodiments, the in-coupling element735may be separately formed and disposed at (e.g., affixed to) the first surface710-1of the waveguide710. In some embodiments, the in-coupling element735may include one or more diffraction gratings, one or more cascaded reflectors, one or more prismatic surface elements, and/or an array of holographic reflectors, or any combination thereof. In some embodiments, the in-coupling element735may include one or more diffraction gratings, such as a surface relief grating, a volume hologram, a polarization selective grating, a polarization volume hologram, a metasurface grating, another type of diffractive element, or any combination thereof. A pitch of the diffraction grating may be configured to enable total internal reflection (“TIR”) of the image light730within the waveguide710. As a result, the image light730may propagate internally within the waveguide710through TIR. The out-coupling element745may be disposed at the first surface710-1or the second surface710-2of the waveguide710. For example, as shown inFIG.7A, the out-coupling element745may be disposed at the first surface710-1of the waveguide710. 
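The pitch condition for TIR mentioned above can be illustrated with the grating equation at the in-coupling element (the waveguide index, wavelength, and normal incidence are illustrative assumptions): the first diffracted order must exceed the critical angle inside the waveguide, which bounds the usable grating pitch from both sides.

```python
import numpy as np

# Illustrative waveguide parameters (assumptions, not from the disclosure):
n_waveguide = 1.5
wavelength = 532e-9  # m, in vacuum

# TIR inside the waveguide requires the diffracted ray to exceed the
# critical angle: theta_c = arcsin(1 / n).
theta_c = np.arcsin(1 / n_waveguide)

def tir_after_incoupling(pitch, theta_in=0.0):
    """Check whether the +1st order of an in-coupling grating sends a
    normally incident image light into TIR.  Grating equation at the
    air/waveguide interface:
        n * sin(theta_d) = sin(theta_in) + wavelength / pitch
    """
    s = (np.sin(theta_in) + wavelength / pitch) / n_waveguide
    if abs(s) > 1:
        return False  # no propagating first order (evanescent)
    return np.arcsin(s) > theta_c

# TIR requires wavelength/n < pitch < wavelength, i.e. roughly
# 355 nm < pitch < 532 nm for these values:
assert tir_after_incoupling(450e-9)
assert not tir_after_incoupling(600e-9)  # too coarse: ray escapes TIR
assert not tir_after_incoupling(300e-9)  # too fine: order is evanescent
```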
In some embodiments, the out-coupling element745may be integrally formed as a part of the waveguide710, for example, at the first surface710-1. In some embodiments, the out-coupling element745may be separately formed and disposed at (e.g., affixed to) the first surface710-1of the waveguide710. In some embodiments, the out-coupling element745may be disposed at the second surface710-2of the waveguide710. For example, in some embodiments, the out-coupling element745may be integrally formed as a part of the waveguide710at the second surface710-2. In some embodiments, the out-coupling element745may be separately formed and disposed at (e.g., affixed to) the second surface710-2of the waveguide710. In some embodiments, the out-coupling element745may include one or more diffraction gratings, one or more cascaded reflectors, one or more prismatic surface elements, and/or an array of holographic reflectors, or any combination thereof. In some embodiments, the out-coupling element745may include one or more diffraction gratings. A pitch of the diffraction grating may be configured to cause the incident image light730to exit the waveguide710, i.e., redirecting the image light730so that the TIR no longer occurs. In other words, the diffraction grating of the out-coupling element745may couple the image light730that has been propagating inside and along the waveguide710through TIR out of the waveguide710via diffraction. In some embodiments, the out-coupling element745may also be referred to as an out-coupling grating745. In some embodiments, the out-coupling element745may include a polarization selective grating (e.g., a PVH grating) fabricated based on the disclosed fabrication processes and systems. In some embodiments, the PVH grating may be fabricated to have a predetermined slant angle variation in one or more dimensions, e.g., within a plane perpendicular to a thickness direction of the PVH grating. 
The PVH grating included in the out-coupling element745may be configured to provide a predetermined (e.g., a non-uniform) diffraction efficiency profile, e.g., a predetermined 1D or 2D diffraction efficiency profile in an x-y plane, to image lights incident onto different portions of the surface of the PVH grating at predetermined incidence angles, with predetermined incidence wavelengths and predetermined polarizations. In some embodiments, the PVH grating included in the out-coupling element745may diffract the image lights out of the waveguide710at different diffraction efficiencies at different positions along the propagation direction of the image light (e.g., along the x-axis direction of the waveguide710). As discussed above, in a conventional pupil-replication waveguide display system, the waveguide may expand the exit pupil in the propagation direction of the image light propagating along and inside the waveguide. As the image light propagates along the waveguide, a portion of the image light may be diffracted out of the waveguide by the out-coupling element745. Thus, the intensity of the image light diffracted out of the waveguide710may decrease (e.g., become weaker) in the propagating direction. Accordingly, the illuminance of the image light output from the waveguide may be non-uniform (e.g., may decrease) along the propagation direction of the image light (or the direction in which the exit pupil is expanded). In the waveguide display system700according to the present disclosure, through implementing a PVH grating that provides a non-uniform diffraction efficiency profile, different diffraction efficiencies may be provided at different locations along the waveguide for diffracting the image light730out of the waveguide. 
For example, the slant angle of the PVH grating may be configured to vary at least along the +x-axis direction in the embodiment shown inFIG.7A, resulting in a varying (e.g., non-uniform) diffraction efficiency of the PVH at least along the +x-axis direction inFIG.7A. In some embodiments, the diffraction efficiency of the PVH may increase along the +x-axis direction. As a result, when the intensity of the image light730decreases as the image light730propagates along the propagating direction, the illuminance of the light output from the waveguide710may be uniform due to the increasing diffraction efficiency along the propagating direction. Thus, the uniformity of the illuminance of the image light730output from the waveguide at least along the +x-axis direction (or the exit pupil expansion direction) may be improved. Although not shown inFIG.7A, in some embodiments, when the image light730propagating along and inside the waveguide710is diffracted by the PVH grating included in the out-coupling element745out of the waveguide710, the out-coupling element745may be configured to provide a uniform illuminance in two dimensions (e.g., the x-axis direction and the y-axis direction) of the expanded exit pupil. In addition, the PVH grating may diffract an image light toward regions outside of the eye-box765with a relatively small (e.g., negligible) diffraction efficiency, and diffract an image light toward regions inside the eye-box765with a relatively large diffraction efficiency. Thus, the loss of the image light directed to regions outside of the eye-box765may be reduced. As a result, the power consumption of the light source assembly705may be significantly reduced, while the power efficiency of the waveguide display system700may be significantly improved. The waveguide710may include one or more materials configured to facilitate the total internal reflection of the image light730. The waveguide710may include, for example, a plastic, a glass, and/or polymers. 
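The compensation described above can be made quantitative with a simplified lossless model. If the image light interacts with the out-coupling grating N times and interaction k extracts a fraction eta_k of the power that remains, equal out-coupled power at every interaction requires eta_k = 1/(N − k + 1), i.e., an efficiency that increases along the propagation direction. A sketch, with the interaction count N = 5 as an assumed example:

```python
def uniform_outcoupling_efficiencies(n_interactions):
    """Efficiency needed at interaction k (1-indexed) so that every
    out-coupled portion carries equal power: eta_k = 1 / (N - k + 1)."""
    n = n_interactions
    return [1.0 / (n - k + 1) for k in range(1, n + 1)]

def out_coupled_powers(efficiencies, power_in=1.0):
    """Power diffracted out at each interaction along the waveguide
    (lossless model: the remainder keeps propagating by TIR)."""
    remaining, out = power_in, []
    for eta in efficiencies:
        out.append(remaining * eta)
        remaining *= 1.0 - eta
    return out

etas = uniform_outcoupling_efficiencies(5)   # assumed: 5 grating interactions
print([round(e, 3) for e in etas])           # efficiencies increase along x
print([round(p, 3) for p in out_coupled_powers(etas)])  # equal output at each exit
```

For N = 5 the required profile is 0.2, 0.25, 0.333, 0.5, 1.0, and every replicated pupil then receives one fifth of the input power, which is the uniform-illuminance behavior the graded slant angle is intended to produce.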
In some embodiments, the waveguide710may have a relatively small form factor. For example, the waveguide710may be about 50 mm wide along the x-dimension, 30 mm long along the y-dimension, and 0.5-1 mm thick along the z-dimension. The controller715may be communicatively coupled with the light source assembly705, and may control the operations of the light source assembly705. In some embodiments, the waveguide710may include additional elements configured to redirect, fold, and/or expand the pupil of the light source assembly705. For example, as shown inFIG.7A, a directing element740may be coupled to the waveguide710. The directing element740may be configured to redirect the received input image light730to the out-coupling element745, such that the received input image light730is coupled out of the waveguide710via the out-coupling element745. In some embodiments, the directing element740may be arranged at a location of the waveguide710opposing the location of the out-coupling element745. In some embodiments, the directing element740may be disposed at the second surface710-2of the waveguide710. For example, in some embodiments, the directing element740may be integrally formed as a part of the waveguide710at the second surface710-2. In some embodiments, the directing element740may be separately formed and disposed at (e.g., affixed to) the second surface710-2of the waveguide710. In some embodiments, the directing element740may be disposed at the first surface710-1of the waveguide710. For example, in some embodiments, the directing element740may be integrally formed as a part of the waveguide710at the first surface710-1. In some embodiments, the directing element740may be separately formed and disposed at (e.g., affixed to) the first surface710-1of the waveguide710. In some embodiments, the directing element740and the out-coupling element745may have a similar structure. 
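The pupil-replication pitch, i.e., the lateral spacing between successive interactions of the TIR-guided light with the out-coupling grating, follows from the waveguide thickness and the internal bounce angle: one round trip across the slab displaces the beam by 2·t·tan(θ). A sketch using the 0.5 mm thickness mentioned above and an assumed 60-degree TIR angle:

```python
import math

def bounce_spacing_mm(thickness_mm, tir_angle_deg):
    """Lateral distance between successive hits on the same surface:
    down and back across the slab is 2 * t * tan(theta)."""
    return 2.0 * thickness_mm * math.tan(math.radians(tir_angle_deg))

# Assumed example: 0.5 mm thick waveguide, 60-degree internal TIR angle.
spacing = bounce_spacing_mm(0.5, 60.0)
print(f"replication pitch: {spacing:.2f} mm")
# Over a roughly 50 mm wide out-coupling region, about 50 / spacing
# interactions (and hence replicated exit pupils) would occur.
```

Thinner waveguides or steeper TIR angles pack the replicated pupils more densely, which is one reason the thickness range matters for eye-box coverage.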
In some embodiments, the directing element740may include one or more diffraction gratings, one or more cascaded reflectors, one or more prismatic surface elements, and/or an array of holographic reflectors, or any combination thereof. In some embodiments, the directing element740may include one or more diffraction gratings, such as a surface relief grating, a volume hologram, a polarization selective grating, a polarization volume hologram, a metasurface grating, another type of diffractive element, or any combination thereof. The directing element740may also be referred to as a folding grating740or a directing grating740. In some embodiments, the directing element740may include one or more polarization selective gratings (e.g., PVH gratings) fabricated based on disclosed fabrication processes and systems. The PVH grating included in the directing element740may provide a predetermined, non-uniform diffraction efficiency profile in at least one dimension within a plane perpendicular to a thickness direction of the PVH. For example, the PVH grating may include a slant angle variation in one or more dimensions within the plane perpendicular to the thickness direction of the PVH. In some embodiments, multiple functions, e.g., redirecting, folding, and/or expanding the pupil of the light generated by the light source assembly705may be combined into a single grating, e.g., the out-coupling grating745. In such embodiments, the directing element740may be omitted. In some embodiments, the waveguide display system700may include a plurality of waveguides710disposed in a stacked configuration (not shown inFIG.7A). At least one (e.g., each) of the plurality of waveguides710may be coupled with or include one or more diffractive elements (e.g., in-coupling element, out-coupling element, and/or directing element), which may be configured to direct the image light730toward the eye760. 
In some embodiments, the plurality of waveguides710disposed in the stacked configuration may be configured to output an expanded polychromatic image light (e.g., a full-color image light). In some embodiments, the waveguide display system700may include one or more light source assemblies705and/or one or more waveguides710. In some embodiments, at least one (e.g., each) of the light source assemblies705may be configured to emit a monochromatic image light of a specific wavelength band corresponding to a primary color (e.g., red, green, or blue) and a predetermined FOV (or a predetermined portion of an FOV). In some embodiments, the waveguide display system700may include three different waveguides710configured to deliver component color images (e.g., primary color images) by in-coupling and subsequently out-coupling, e.g., red, green, and blue lights, respectively, in any suitable order. In some embodiments, the waveguide display system700may include two different waveguides configured to deliver component color images (e.g., primary color images) by in-coupling and subsequently out-coupling, e.g., a combination of red and green lights, and a combination of green and blue lights, respectively, in any suitable order. In some embodiments, at least one (e.g., each) of the light source assemblies705may be configured to emit a polychromatic image light (e.g., a full-color image light). FIG.7Billustrates a conventional waveguide display system700′ in which an out-coupling element745′ includes one or more diffraction elements (e.g., one or more PVH gratings) having a uniform diffraction efficiency in the x-axis direction. 
As shown inFIG.7B, when the image light730propagates inside and along the waveguide710through TIR, as portions of the image light730are diffracted out of the waveguide710by the out-coupling element745′ at different locations, the intensity of the image light730becomes lower in the light propagating direction, as schematically indicated by the gradually reducing thickness of the lines730-1,730-2, and730-3. As a result, the intensity (or illuminance) of output lights731-1,731-2, and731-3output from the waveguide710gradually decreases. Thus, the conventional waveguide display system700′ with diffraction elements providing a uniform diffraction efficiency in the x-axis direction may provide a non-uniform illuminance for the output lights (or the replicated pupils). According to an embodiment of the present disclosure, the PVH grating with a non-uniform diffraction efficiency fabricated based on the disclosed processes and systems may improve the uniformity of the output illuminance of the output image light.FIG.7Cillustrates the diffraction of the image light by the out-coupling element745including a PVH grating having a non-uniform diffraction efficiency, according to an embodiment of the present disclosure.FIG.7Cshows that in the disclosed waveguide display system700shown inFIG.7A, the PVH grating included in the out-coupling element745may be configured to have a gradually increasing diffraction efficiency along the x-axis direction. For example, at three exemplary diffraction points A, B, and C, the diffraction efficiency of the PVH may gradually increase. Thus, at point A where the intensity of the image light730is the largest, the diffraction efficiency may be the smallest. At point B, the intensity of the image light730may be lower than the intensity at point A. Hence, the diffraction efficiency at point B may be higher than the diffraction efficiency at point A. At point C, the intensity of the image light730may be further reduced. 
Thus, at point C, the diffraction efficiency may be further increased as compared to the diffraction efficiency at point B, and the diffraction efficiency at point C may be the highest. As a result of the non-uniform diffraction efficiency provided at different portions of the PVH grating, the illuminance (or intensity) of the image light732-1,732-2, and732-3output from the waveguide710may become more uniform as compared with the conventional configuration shown inFIG.7B. For discussion purposes, inFIG.7C, the diffraction efficiency of the PVH grating is presumed to be non-uniform in one dimension. It is understood that the diffraction efficiency of the PVH grating may be non-uniform in two dimensions, i.e., the x-axis direction and the y-axis direction. The PVH grating may have any suitable diffraction efficiency distribution profile(s) in one dimension or two dimensions. FIG.8Aillustrates a schematic diagram of an optical system800according to an embodiment of the present disclosure. For illustrative purposes, a near-eye display (“NED”) is used as an example of the optical system800, in which one or more PSOE (e.g., PVH gratings) fabricated based on the disclosed processes and systems may be implemented. For the convenience of discussion, the optical system800may also be referred to as the NED800. In some embodiments, the NED800may be referred to as a head-mounted display (“HMD”). The NED800may present media content to a user, such as one or more images, videos, audios, or a combination thereof. In some embodiments, an audio may be presented to the user via an external device (e.g., a speaker and/or a headphone). The NED800may operate as a VR device, an AR device, an MR device, or a combination thereof. In some embodiments, when the NED800operates as an AR and/or MR device, a portion of the NED800may be at least partially transparent, and internal components of the NED800may be at least partially visible. 
As shown inFIG.8A, the NED800may include a frame810, a right display system820R, and a left display system820L. In some embodiments, certain device(s) shown inFIG.8Amay be omitted. In some embodiments, additional devices or components not shown inFIG.8Amay also be included in the NED800. The frame810may include a suitable mounting structure configured to mount the right display system820R and the left display system820L to a body part (e.g., a head) of the user (e.g., adjacent a user's eyes). The frame810may be coupled to one or more optical elements, which may be configured to display media to users. In some embodiments, the frame810may represent a frame of eye-wear glasses. The right display system820R and the left display system820L may be configured to enable the user to view content presented by the NED800and/or to view images of real-world objects (e.g., each of the right display system820R and the left display system820L may include a see-through optical element). In some embodiments, the right display system820R and the left display system820L may include any suitable display assembly (not shown) configured to generate a light (e.g., an image light corresponding to a virtual image) and to direct the image light to an eye of the user. In some embodiments, the NED800may include a projection system. For illustrative purposes,FIG.8Ashows that the projection system may include a projector835coupled to the frame810. FIG.8Bis a cross-section view of half of the NED800shown inFIG.8Ain accordance with an embodiment of the present disclosure. For the purposes of illustration,FIG.8Bshows the cross-sectional view associated with the left display system820L. As shown inFIG.8B, the left display system820L may include a waveguide display assembly815for an eye860of the user. The waveguide display assembly815may be an embodiment of the waveguide display system700shown inFIG.7A. 
That is, the waveguide display assembly815may include one or more polarization selective gratings (e.g., PVH gratings) fabricated based on the disclosed processes and systems. The PVH gratings may serve as or be included in an in-coupling element and/or an out-coupling element. In some embodiments, the PVH grating may include a non-uniform diffraction efficiency in at least one dimension of the PVH grating. The illuminance of the image light diffracted out of a waveguide by the PVH at different locations (or pupils) may have an improved uniformity. The waveguide display assembly815may include a waveguide or a stack of waveguides. An exit pupil862may be a location where an eye860may be positioned in an eye-box865when the user wears the NED800. Although one exit pupil862is shown for illustrative purposes, the NED800may provide a plurality of exit pupils within the eyebox865. For the purposes of illustration,FIG.8Bshows the cross section view associated with a single eye860and a single waveguide display assembly815. In some embodiments, another waveguide display assembly that is separate from and similar to the waveguide display assembly815shown inFIG.8Bmay provide an image light to an eye-box located at an exit pupil of another eye of the user. The waveguide display assembly815may include one or more materials (e.g., a plastic, a glass, etc.) with one or more refractive indices. InFIG.8B, the waveguide display assembly815is shown as a component of the NED800. In some embodiments, the waveguide display assembly815may be a component of some other NED or system that directs an image light to a particular location. As shown inFIG.8B, the waveguide display assembly815may be provided for one eye860of the user. The waveguide display assembly815for one eye may be at least partially separated from the waveguide display assembly815for the other eye. In some embodiments, a single waveguide display assembly815may be included for both eyes860of the user. 
In some embodiments, the NED800may include one or more optical elements disposed between the waveguide display assembly815and the eye860. The optical elements may be configured to, e.g., correct aberrations in an image light output from the waveguide display assembly815, magnify an image light output from the waveguide display assembly815, or perform another type of optical adjustment of an image light output from the waveguide display assembly815. Examples of the one or more optical elements may include an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, any other suitable optical element that affects an image light, or a combination thereof. In some embodiments, the waveguide display assembly815may include a stack of waveguide displays (each waveguide display may include a waveguide, a light source assembly, an in-coupling element, and/or an out-coupling element). In some embodiments, the stacked waveguide displays may include a polychromatic display (e.g., a red-green-blue (“RGB”) display) formed by stacking waveguide displays whose respective monochromatic light sources are configured to emit lights of different colors. For example, the stacked waveguide displays may include a polychromatic display configured to project image lights onto multiple planes (e.g., multi-focus colored display). In some embodiments, the stacked waveguide displays may include a monochromatic display configured to project image lights onto multiple planes (e.g., multi-focus monochromatic display). In some embodiments, the NED800may include an adaptive dimming element830, which may dynamically adjust (when controlled by a controller, such as controller715) the transmittance of lights reflected by real-world objects, thereby switching the NED800between a VR device and an AR device or between a VR device and an MR device. 
In some embodiments, along with switching between the AR/MR device and the VR device, the adaptive dimming element830may be used in the AR and/or MR device to mitigate differences in brightness of lights reflected by real-world objects and virtual image lights. The present disclosure also provides a method for fabricating a PSOE, such as a polarization selective grating.FIG.9illustrates a flowchart showing a method900for fabricating a PSOE, according to an embodiment of the present disclosure. As shown inFIG.9, the method900may include directing an input beam to an SRG (Step910). In some embodiments, the input beam may be an at least partially polarized input beam having a wavelength and an incidence angle. The method900may include forwardly diffracting, by the SRG, the input beam as two linearly polarized beams (Step920). In some embodiments, the SRG may function as an optically isotropic element. In some embodiments, the SRG may function as an optically anisotropic element. In some embodiments, the SRG may be configured to operate at the Littrow configuration for the input beam with the predetermined incidence angle and the predetermined wavelength. In some embodiments, the two linearly polarized beams may include a 0thorder diffracted beam and a −1storder diffracted beam with orthogonal polarizations. In some embodiments, the two linearly polarized beams may include a 0thorder diffracted beam and a −1storder diffracted beam with a substantially same polarization. The diffraction angles of the 0thorder diffracted beam and −1storder diffracted beam may have a substantially same value and opposite signs. In some embodiments, the 0thorder diffracted beam and −1storder diffracted beam may have a substantially same light intensity. 
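The symmetric pair of diffraction angles follows from the Littrow condition, in which the −1st order retraces the incidence direction: 2·Λ·sin(θ_L) = λ, so the 0th order continues at +θ_L and the −1st order leaves at −θ_L. A sketch with assumed recording values (532 nm beam, 500 nm SRG period) not taken from the disclosure:

```python
import math

def littrow_angle_deg(wavelength_nm, period_nm):
    """Littrow condition for the -1st order: 2 * period * sin(theta) = wavelength."""
    return math.degrees(math.asin(wavelength_nm / (2.0 * period_nm)))

# Assumed example: 532 nm recording beam, 500 nm SRG period.
theta_l = littrow_angle_deg(532.0, 500.0)
# In the Littrow mounting the 0th order exits at +theta_l and the -1st order
# at -theta_l: equal magnitude, opposite sign, as described for method 900.
print(f"Littrow angle: {theta_l:.1f} deg "
      f"(0th order at +{theta_l:.1f} deg, -1st order at -{theta_l:.1f} deg)")
```

The angle between the two diffracted beams, 2·θ_L, later sets the fringe period of the polarization interference pattern and hence the in-plane pitch of the recorded grating.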
The method900may include converting, by a waveplate, the two linearly polarized beams into two circularly polarized beams having orthogonal circular polarizations, the two circularly polarized beams interfering with one another to generate a polarization interference pattern (Step930). In some embodiments, the waveplate may be indirectly optically coupled to the SRG with an intermediate optical element disposed therebetween that may or may not change at least one of the polarization or the propagating direction of the beams. In some embodiments, the waveplate may be directly optically coupled to the SRG without an optical element disposed and/or a gap therebetween. An angle formed between the two circularly polarized beams having orthogonal circular polarizations may be substantially equal to the angle formed between the 0thorder diffracted beam and the −1storder diffracted beam output from the SRG. In some embodiments, the two circularly polarized beams having orthogonal circular polarizations may have a substantially equal light intensity. In some embodiments, the two circularly polarized beams having orthogonal circular polarizations may have different light intensities. In some embodiments, the method900may include additional steps that are not shown inFIG.9. In some embodiments, the method900may include directing the two circularly polarized beams having orthogonal circular polarizations to a same surface of a polarization sensitive recording medium layer. The two circularly polarized beams having orthogonal circular polarizations may interfere with one another in a predetermined 3D space within which the polarization sensitive recording medium layer is located. The polarization sensitive recording medium layer may be exposed to the polarization interference pattern generated by the interference of the two circularly polarized beams having orthogonal circular polarizations. 
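The polarization interference pattern in Step930 has a simple Jones-calculus form: two coherent plane waves with orthogonal circular polarizations superpose into a linearly polarized field whose orientation rotates at half the rate of their relative phase, and it is this rotating orientation that the recording medium registers as an optic-axis pattern. A sketch under assumed Jones conventions (no device parameters from the disclosure are used):

```python
import cmath
import math

def orientation_deg(delta_rad):
    """Orientation of the linear polarization produced by superposing
    right- and left-circular plane waves with relative phase delta_rad."""
    rcp = (1.0, -1j)                      # right circular (one common convention)
    lcp = (cmath.exp(1j * delta_rad),     # left circular, carrying the phase shift
           1j * cmath.exp(1j * delta_rad))
    ex, ey = rcp[0] + lcp[0], rcp[1] + lcp[1]
    # Polarization-ellipse orientation extracted from the Jones vector
    # (psi = 0.5 * atan2(S2, S1) with unnormalized Stokes parameters).
    num = 2.0 * (ex * ey.conjugate()).real
    den = abs(ex) ** 2 - abs(ey) ** 2
    return math.degrees(0.5 * math.atan2(num, den))

# The relative phase grows linearly with position across the fringes, so the
# recorded optic-axis orientation rotates linearly at half that rate.
for delta in (0.0, math.pi / 2, math.pi):
    print(f"delta = {delta:.2f} rad -> orientation = {orientation_deg(delta):.1f} deg")
```

A phase advance of 2π across one fringe therefore rotates the optic axis by π, which is why the recorded orientation pattern corresponds to a grating with half the fringe period of the underlying phase variation.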
During the exposure process, the polarization interference pattern may be recorded at (e.g., in or on) the polarization sensitive recording medium layer to define an orientation pattern of an optic axis of the polarization sensitive recording medium layer. In some embodiments, the orientation pattern of the optic axis of the polarization sensitive recording medium layer may correspond to a grating pattern. In some embodiments, the polarization sensitive recording medium layer may include a photo-sensitive polymer (or photo-polymer), e.g., an amorphous polymer, an LC polymer, etc. In some embodiments, after being exposed to the polarization interference pattern, the polarization sensitive recording medium layer (also referred to as “exposed polarization sensitive recording medium layer”) may function as a polarization selective grating, such as a PBP grating, or a PVH grating, etc. In some embodiments, the method900may also include annealing the exposed polarization sensitive recording medium layer in a predetermined temperature range. For example, when the polarization sensitive recording medium layer includes LC polymer, the predetermined temperature range may correspond to a liquid crystalline state of the LC polymer. In some embodiments, the polarization sensitive recording medium layer may include a photo-alignment material. The exposed polarization sensitive recording medium layer may function as a surface alignment layer. The method900may also include forming a birefringent medium layer on the polarization sensitive recording medium layer. In some embodiments, the birefringent medium layer may include a birefringent medium with or without a chirality. For example, the birefringent medium layer may include at least one of LCs or RMs with or without a chirality. In some embodiments, the exposed polarization sensitive recording medium layer may be annealed in a predetermined temperature range corresponding to a nematic phase of the LCs or RMs. 
In some embodiments, the method900may also include polymerizing the birefringent medium layer. In some embodiments, the polymerized birefringent medium layer may function as a polarization selective grating, such as a PBP grating, or a PVH grating, etc. In some embodiments, the method may include recording a plurality of polarization interference patterns to a plurality of regions or portions in the polarization sensitive recording medium layer. For example, a first polarization interference pattern may be generated using an input beam having a first wavelength, incident onto a first SRG at a first incidence angle, which diffracts the input beam into a first group of two linearly polarized beams (e.g., the 0thorder diffracted beam and the −1storder diffracted beam). In some embodiments, the two linearly polarized beams (e.g., the 0thorder diffracted beam and the −1storder diffracted beam) of the first group may have orthogonal linear polarizations. In some embodiments, the two linearly polarized beams (e.g., the 0thorder diffracted beam and the −1storder diffracted beam) of the first group may have a substantially same linear polarization. A waveplate may convert the first group of two linearly polarized beams into a first group of two circularly polarized beams having orthogonal circular polarizations, which may interfere with one another to generate a polarization interference pattern. One or more first recording regions or portions of the polarization sensitive recording medium layer may be exposed to the polarization interference pattern, which may be recorded in the one or more first recording regions. In some embodiments, a second polarization interference pattern may be recorded at one or more second recording regions. The method may include replacing the first SRG with a second SRG, which may be different from the first SRG. 
The method may include adjusting at least one of a wavelength of the input beam, or a relative position or a relative orientation between the polarization sensitive recording medium layer and the input beam incident onto the second SRG. The method may include forwardly diffracting, by the second SRG, the input beam as a second group of two linearly polarized beams. In some embodiments, the two linearly polarized beams (e.g., the 0thorder diffracted beam and the −1storder diffracted beam) of the second group may have orthogonal linear polarizations. In some embodiments, the two linearly polarized beams (e.g., the 0thorder diffracted beam and the −1storder diffracted beam) of the second group may have a substantially same linear polarization. The method may include converting, by the waveplate, the two linearly polarized beams into a second group of two circularly polarized beams having orthogonal circular polarizations, the second group of two circularly polarized beams interfering with one another to generate a second polarization interference pattern. The method may also include recording the second polarization interference pattern in one or more second regions or portions of the polarization sensitive recording medium layer. The present disclosure also provides an efficient and cost-effective system and method using a transmissive PVH element as a mask (also referred to as a PVH mask) for fabricating PSOEs or polarization holograms, e.g., lenses, gratings, waveplates, etc., with a large adjustment range of the in-plane pitch. For example, PSOEs or polarization holograms with a fine in-plane pitch (e.g., 200 nm to 800 nm) may be fabricated based on the disclosed system and method. The transmissive PVH mask is also a type of a PSOE, and it is used as a mask during the process of fabricating other PSOEs or polarization holograms. The PVH mask may be configured to forwardly diffract an input beam into two polarized beams propagating in two different directions. 
The two polarized beams may be a signal beam carrying optical information or properties of the PVH mask, and a reference beam. The two polarized beams may have the same handedness. For example, the PVH mask may forwardly diffract the input beam into two beams of different diffraction orders (e.g., 0thorder and 1storder, etc.). Although splitting into two beams is used as an example, the present disclosure is not limited to splitting the input beam into two polarized beams propagating in different directions. In some embodiments, the PVH mask may be configured to split the input beam into more than two beams in different directions. The two or more beams may interfere with one another to generate a polarization interference pattern, which may be recorded in a recording medium layer. The present disclosure provides a system including a mask configured to forwardly diffract an input beam as a first set of two polarized beams. The system also includes a polarization conversion element configured to convert the first set of two polarized beams into a second set of two polarized beams having opposite handednesses. The two polarized beams having opposite handednesses interfere with one another to generate a polarization interference pattern. In some embodiments, the mask includes a transmissive polarization volume hologram (“PVH”) element. In some embodiments, the transmissive PVH element includes a transmissive PVH grating or a PVH lens. In some embodiments, the two polarized beams in the first set include a 0thorder diffracted beam and a 1storder diffracted beam. In some embodiments, the two polarized beams in the first set have a substantially same light intensity. In some embodiments, the two polarized beams in the first set have planar wavefronts. In some embodiments, propagation directions of the two polarized beams in the first set are symmetric or asymmetric with respect to a normal of a surface of the mask. 
In some embodiments, at least one of the two polarized beams in the first set has a non-planar wavefront. In some embodiments, the polarization conversion element includes an oblique compensation plate (“O-plate”). In some embodiments, the first set of two polarized beams includes a first polarized beam having a first incidence angle relative to the O-plate, and a second polarized beam having a second incidence angle relative to the O-plate. In some embodiments, the O-plate is configured to provide a half-wave retardance to the first polarized beam having the first incidence angle, and a zero or full-wave retardance to the second polarized beam having the second incidence angle. In some embodiments, the first incidence angle is within a predetermined angle range, and the second incidence angle is outside of the predetermined angle range. In some embodiments, the system includes a light deflecting element configured to direct the input beam toward the mask. In some embodiments, the system also includes a movable stage configured to support the light deflecting element, and adjust at least one of an orientation or a position of the light deflecting element to change an incidence angle of the input beam incident onto the mask. In some embodiments, the system includes a movable stage configured to support the mask, the polarization conversion element, and a recording medium layer. The movable stage is movable to adjust at least one of a position or an orientation of the recording medium layer. In some embodiments, the present disclosure also provides a method. The method may include directing an input beam to a mask. In some embodiments, the method may include forwardly diffracting, by the mask, the input beam as a first set of two polarized beams.
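For illustration only, the angle-gated retardance of the O-plate described above may be modeled with idealized Jones matrices. The 20–40 degree angle range, the exact matrices, and the function name are hypothetical values chosen for this sketch, not parameters stated in the disclosure.

```python
import numpy as np

HALF_WAVE = np.array([[1, 0], [0, -1]], dtype=complex)   # flips circular handedness
IDENTITY = np.eye(2, dtype=complex)                      # zero/full-wave: no change

def oplate_jones(incidence_deg, lo=20.0, hi=40.0):
    """Idealized O-plate: half-wave retardance inside the angle range, none outside."""
    if lo <= abs(incidence_deg) <= hi:
        return HALF_WAVE
    return IDENTITY

lhcp = np.array([1, 1j]) / np.sqrt(2)
rhcp = np.array([1, -1j]) / np.sqrt(2)

converted = oplate_jones(30.0) @ lhcp   # inside the range: handedness reversed
unchanged = oplate_jones(5.0) @ lhcp    # outside the range: handedness kept
```

Applied to two same-handed beams arriving at different incidence angles, such a plate reverses the handedness of one beam and leaves the other unchanged, yielding the second set of two polarized beams with opposite handednesses.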
In some embodiments, the method may include converting, by a polarization conversion element, the first set of two polarized beams into a second set of two polarized beams having opposite handednesses, the two polarized beams having opposite handednesses interfering with one another to generate a polarization interference pattern. In some embodiments, the method may include directing the second set of two polarized beams having opposite handednesses to a same surface of a recording medium layer. In some embodiments, the method may include exposing at least one portion of the recording medium layer to the polarization interference pattern to record the polarization interference pattern in the at least one portion. In some embodiments, the polarization interference pattern recorded in the at least one portion of the recording medium layer defines an orientation pattern of an optic axis in the at least one portion of the recording medium layer. In some embodiments, the mask is a first mask, the polarization interference pattern is a first polarization interference pattern, and the first polarization interference pattern is recorded in a first portion of the recording medium layer. The method may also include replacing the first mask with a second mask. In some embodiments, the method may include forwardly diffracting, by the second mask, the input beam as a third set of two polarized beams. In some embodiments, the method may include converting, by the polarization conversion element, the third set of two polarized beams into a fourth set of two polarized beams having opposite handednesses, the two polarized beams having opposite handednesses in the fourth set interfering with one another to generate a second polarization interference pattern. In some embodiments, the method may include recording the second polarization interference pattern in a second portion of the recording medium layer. In some embodiments, the polarization interference pattern is a first polarization interference pattern, and the first polarization interference pattern is recorded in a first portion of the recording medium layer.
In some embodiments, the method may also include adjusting at least one of a wavelength of the input beam, an incidence angle of the input beam with respect to the mask, or a relative position or a relative orientation between the recording medium layer and the input beam. In some embodiments, the method may also include forwardly diffracting, by the mask, the input beam as a third set of two polarized beams. In some embodiments, the method may also include converting, by the polarization conversion element, the third set of two polarized beams into a fourth set of two polarized beams having opposite handednesses, the two polarized beams having opposite handednesses in the fourth set interfering with one another to generate a second polarization interference pattern. In some embodiments, the method may also include recording the second polarization interference pattern in a second portion of the recording medium layer. In some embodiments, the recording medium layer includes a surface photo-alignment material. In some embodiments, the method also includes forming a birefringent medium layer on the recording medium layer after the polarization interference pattern is recorded in the recording medium. In some embodiments, the recording medium layer includes a bulk photo-alignment material. In some embodiments, the two polarized beams in the first set include a 0thorder diffracted beam, and a 1storder diffracted beam. In some embodiments, the mask includes a transmissive polarization volume hologram (“PVH”) element. It is noted that all features shown inFIGS.1A-10may be included in the embodiments shown inFIGS.11A-12in a suitable manner. Conversely, all features shown inFIGS.11A-12may be included in any embodiment shown inFIGS.1A-10in a suitable manner. That is, the features shown inFIGS.1A-12may be combined. 
FIG.11Aschematically illustrates a system (e.g., an interference system)1100configured to generate a polarization interference pattern that may be recorded in the recording medium layer210, according to an embodiment of the present disclosure. The system1100may include elements, structures, and/or functions that are the same as or similar to those included in the system200shown inFIGS.2A-2D. Descriptions of the same or similar elements, structures, and/or functions can refer to the above descriptions rendered in connection withFIGS.2A-2D. As shown inFIG.11A, the system1100may include the light source201, the beam conditioning device203, the reflector (e.g., mirror)207mounted on the first movable stage209, a PVH mask1111, and a polarization conversion element1113. A transmissive PVH mask is used as an example of the PVH mask1111. Hence, the PVH mask1111may also be referred to as a transmissive PVH element1111for discussion purposes. In the following discussions, a compensation plate is used as an example of the polarization conversion element1113. For discussion purposes, the polarization conversion element1113may also be referred to as a compensation plate1113. It is noted that the polarization conversion element1113is not limited to being a compensation plate, and may be any other suitable polarization conversion element that can convert a polarization of a beam. The system1100is based on the system200shown inFIGS.2A-2D. In the system1100, the SRG211shown inFIGS.2A-2Dmay be replaced by the transmissive PVH element1111, and the waveplate213shown inFIGS.2A-2Dmay be replaced by the compensation plate1113. The recording medium layer210may be disposed on the substrate205. In some embodiments, the substrate205may be mounted on the second movable stage219. The transmissive PVH element1111and the compensation plate1113may be disposed in parallel.
In the embodiment shown inFIG.11A, the transmissive PVH element1111is shown as being spaced apart from the compensation plate1113by a gap. In some embodiments, the transmissive PVH element1111and the compensation plate1113may be stacked without a gap (e.g., through direct contact). In some embodiments, the transmissive PVH element1111may include at least one of sub-wavelength structures, a birefringent material, or a photo-refractive holographic material. The optic axis of the transmissive PVH element1111may periodically or non-periodically vary in at least one in-plane linear direction, in at least one in-plane radial direction, in at least one in-plane circumferential (e.g., azimuthal) direction, or a combination thereof. In some embodiments, the optic axis of the transmissive PVH element1111may also be configured with a spatially varying orientation in an out-of-plane direction. In some embodiments, the transmissive PVH element1111may modulate (e.g., diffract) an input beam satisfying a Bragg condition through Bragg diffraction. In some embodiments, the transmissive PVH element1111may include a transmissive PVH grating (also referred to as1111for discussion purposes). In some embodiments, the optic axis of the transmissive PVH grating1111may be configured with a periodic in-plane orientation pattern with a uniform (e.g., same) in-plane pitch Pinin a predetermined in-plane direction. In some embodiments, within a volume of the transmissive PVH grating1111, the optic axis of the transmissive PVH grating1111may be twisted in a helical fashion. In some embodiments, the transmissive PVH grating1111may include a birefringent medium layer (e.g., an LC layer). The optically anisotropic molecules (e.g., LC molecules) of the birefringent medium layer may be configured with a periodic in-plane orientation pattern with a uniform (e.g., same) in-plane pitch Pinin a predetermined in-plane direction, e.g., similar to that shown inFIG.1BorFIG.1C. 
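For illustration only, the periodic in-plane orientation pattern with a uniform in-plane pitch, combined with the helical twist of the optic axis through the thickness of the layer, may be sketched numerically. The pitch values and the linear/helical functional form below are assumptions for illustration, not values stated in the disclosure.

```python
import numpy as np

def pvh_axis_angle(x, z, p_in=500e-9, p_z=300e-9):
    """Azimuthal optic-axis angle (rad) at (x, z):
    linear in-plane rotation (pitch p_in) plus a helical twist along z (pitch p_z)."""
    return np.pi * x / p_in + np.pi * z / p_z

# the in-plane orientation rotates by pi over one in-plane pitch, so the
# pattern repeats (modulo pi) after each pitch
a0 = pvh_axis_angle(0.0, 0.0)
a1 = pvh_axis_angle(500e-9, 0.0)
```

Evaluating the same function at different z values gives the rotated copies of the in-plane pattern that stack into the helical (Bragg) structure within the volume.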
In some embodiments, within a volume of the birefringent medium layer, the optically anisotropic molecules (e.g., LC molecules) may be arranged in a plurality of helical structures to form Bragg planes, e.g., similar to that shown inFIG.1E. The transmissive PVH grating1111may function as a mask for surface (or volume) recording a grating pattern into the recording medium layer210. A PSOE fabricated based on the exposed recording medium layer210may be a polarization selective grating, e.g., a PBP grating, a PVH grating, etc. FIG.11Billustrates polarization selective diffractions of the transmissive PVH grating1111, according to an embodiment of the present disclosure. In some embodiments, the transmissive PVH grating1111may be configured to substantially forwardly diffract (via Bragg diffraction) a polarized beam (e.g., circularly or elliptically polarized beam) having a predetermined handedness, and substantially transmit (e.g., with negligible diffraction) a polarized beam (e.g., circularly or elliptically polarized beam) having a handedness that is opposite to the predetermined handedness. In addition, the transmissive PVH grating1111may be configured to reverse a handedness of the diffracted beam, and substantially maintain a handedness of the transmitted beam. For example, a left-handed transmissive PVH grating may be configured to substantially forwardly diffract an LHCP beam as an RHCP beam, and substantially transmit (e.g., with negligible diffraction) an RHCP beam as an RHCP beam. A right-handed transmissive PVH grating may be configured to substantially forwardly diffract an RHCP beam as an LHCP beam, and substantially transmit (e.g., with negligible diffraction) an LHCP beam as an LHCP beam. For discussion purposes,FIG.11Bshows the transmissive PVH grating1111as a right-handed transmissive PVH grating.
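For illustration only, the polarization selection rule of a right-handed transmissive PVH grating described above may be summarized as a simple mapping. This is a toy sketch of the selection rule, not a physical device model, and the labels are illustrative.

```python
def right_handed_pvh_grating(handedness):
    """Selection rule of a right-handed transmissive PVH grating:
    diffract RHCP with the handedness reversed, transmit LHCP with the
    handedness maintained. Returns (diffraction order, output handedness)."""
    if handedness == "RHCP":
        return ("1st order", "LHCP")   # Bragg-diffracted, handedness reversed
    if handedness == "LHCP":
        return ("0th order", "LHCP")   # transmitted with negligible diffraction
    raise ValueError("expected 'RHCP' or 'LHCP'")
```

A left-handed grating would simply swap the two input labels; for a linearly polarized input, applying the rule to each circular component gives the two same-handed output beams described in the text.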
For discussion purposes, as shown inFIG.11B, the transmissive PVH grating1111may be configured to substantially forwardly diffract an RHCP beam1160as a diffracted beam (e.g., which is the 1st order diffracted beam)1162, and substantially transmit (e.g., with negligible diffraction) an LHCP beam1165as a transmitted beam (which is the 0thorder diffracted beam)1167. In some embodiments, the diffracted beam1162may be an LHCP beam, and the transmitted beam1167may be an LHCP beam. For a linearly polarized input beam or an unpolarized input beam including an RHCP component and an LHCP component, the transmissive PVH grating1111may be configured to substantially forwardly diffract the RHCP (or LHCP) component, and substantially transmit (e.g., with negligible diffraction) the LHCP (or RHCP) component. In other words, the transmissive PVH grating1111may be configured to forwardly diffract the input beam as a −1storder diffracted beam and a 0thorder diffracted beam (that is a transmitted beam with negligible diffraction), which may be two circularly polarized beams having the same handedness. Although not shown, in some embodiments, the transmissive PVH grating1111may also provide polarization selective diffractions to elliptically polarized beams. FIGS.11C and11Dschematically illustrate diagrams of the transmissive PVH grating1111and the compensation plate1113, which may be included in the system1100shown inFIG.11A, according to various embodiments of the present disclosure. In some embodiments, the incident beam S228of the transmissive PVH grating1111may be at least partially polarized. For discussion purposes, inFIGS.11C and11D, the incident beam S228may be a linearly polarized beam with an incidence angle θIand a wavelength λ. The transmissive PVH grating1111may be configured to forwardly diffract the incident beam S228as a 1storder diffracted beam S1132and a 0thorder diffracted beam S1133(that is a transmitted beam with negligible diffraction).
In some embodiments, the −1storder diffracted beam S1132and the 0thorder diffracted beam S1133may be two polarized beams with the same handedness, e.g., two circularly polarized beams with the same handedness. In some embodiments, the −1storder diffracted beam S1132and the 0thorder diffracted beam S1133may have a substantially same light intensity. A diffraction angle θ1Dof the 1st order diffracted beam S1132may be determined, in part, by the incidence angle θIof the incident beam S228, the wavelength λ of the incident beam S228, and an in-plane pitch PM-inof the transmissive PVH grating1111. In other words, the optical properties of the transmissive PVH grating1111(functioning as a mask) may be encoded into the 1storder diffracted beam (e.g., LHCP beam) S1132. The 1storder diffracted beam (e.g., LHCP beam) S1132may be referred to as a signal beam, which may carry or may be encoded with the optical properties or optical information of the transmissive PVH grating1111. A diffraction angle θ0Dof the 0thorder diffracted beam S1133may be substantially equal to the incidence angle θIof the incident beam S228, i.e., θ0D=θI. The 0thorder diffracted beam (e.g., LHCP beam) S1133may be referred to as a reference beam, which may not carry, or may carry an insignificant amount of optical information of the transmissive PVH grating1111. An angle between the 0thorder diffracted beam S1133and the −1storder diffracted beam S1132may be determined, in part, by the incidence angle θIof the incident beam S228, the wavelength λ of the incident beam S228, and an in-plane pitch PM-inof the transmissive PVH grating1111. 
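For illustration only, the dependence of the diffraction angle on the incidence angle θI, the wavelength λ, and the mask in-plane pitch PM-in may be sketched with the standard thin-grating equation. The equation itself and the numerical values below are assumptions for this sketch; the disclosure states only that the angles depend on these three quantities.

```python
import numpy as np

def diffraction_angle_deg(theta_i_deg, wavelength, p_in, order=1):
    """Thin-grating equation: sin(theta_m) = sin(theta_i) - m * wavelength / p_in."""
    s = np.sin(np.radians(theta_i_deg)) - order * wavelength / p_in
    return float(np.degrees(np.arcsin(s)))

wavelength = 450e-9  # assumed recording wavelength

# symmetric case (assumed numbers): with a 450 nm pitch and 30 deg incidence,
# sin(theta_1) = 0.5 - 1 = -0.5, so the 1st order leaves at -30 deg while the
# 0th order follows the incident direction at +30 deg
theta_1 = diffraction_angle_deg(30.0, wavelength, 450e-9)

# asymmetric case: at normal incidence on a 900 nm-pitch mask the 0th order
# stays on axis while the 1st order leaves at -30 deg
theta_1_normal = diffraction_angle_deg(0.0, wavelength, 900e-9)
```

The first case reproduces the symmetric configuration θ1D = −θI discussed later for FIG. 11C, and the second the asymmetric, normal-incidence configuration of FIG. 11D.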
The compensation plate1113may be configured to receive the 0thorder diffracted beam S1133and the −1st order diffracted beam S1132(e.g., two circularly polarized beams with the same handedness) from the transmissive PVH grating1111, and convert the 0thorder diffracted beam S1133and the 1st order diffracted beam S1132into two polarized beams S1135and S1134with opposite handednesses (e.g., two circularly polarized beams with opposite handednesses). The optical anisotropy of the compensation plate1113may be configured to be one of a positive anisotropy, a negative anisotropy, or a biaxial anisotropy. In some embodiments, an orientation of the compensation plate1113and a phase retardance provided by the compensation plate1113at the wavelength λ of the incident beam S228may be configured, such that the compensation plate1113provides a half-wave phase retardance (e.g., in-plane phase retardance) to a polarized beam having an incidence angle within a predetermined angle range, thereby converting a polarization of the polarized beam to an orthogonal polarization while transmitting the polarized beam. For a polarized beam having an incidence angle outside of the predetermined angle range, the compensation plate1113may be configured to substantially maintain a polarization of the polarized beam while transmitting the polarized beam. In some embodiments, the orientation of the compensation plate1113(e.g., an orientation of a predetermined principal axis of the compensation plate1113) may be adjusted through rotating a movable stage (not shown), on which the compensation plate1113is mounted. In some embodiments, the compensation plate1113may include an oblique compensation plate (“O-plate”) (also referred to as1113for discussion purposes).
In some embodiments, the O-plate1113may be configured to change a polarization of a polarized beam that is obliquely incident onto the O-plate1113, and substantially maintain a polarization of a polarized beam that is substantially normally incident onto the O-plate1113. In some embodiments, the O-plate1113may be configured to change a polarization of a polarized beam having an oblique incidence angle within a predetermined angle range, and substantially maintain a polarization of a polarized beam having an oblique incidence angle outside of the predetermined angle range. The O-plate1113may be configured to convert a handedness of one of the 0thorder diffracted beam S1133or the −1storder diffracted beam S1132to an opposite handedness, and substantially maintain a handedness of the other one of the 0thorder diffracted beam S1133or the −1storder diffracted beam S1132, thereby converting the 0thorder diffracted beam S1133and the −1st order diffracted beam S1132(e.g., two circularly polarized beams with the same handedness) into two polarized beams S1135and S1134with opposite handednesses (e.g., two circularly polarized beams with opposite handednesses). In the disclosed embodiments, the O-plate1113may be configured to substantially maintain the propagation direction and the wavefront of a beam transmitted therethrough. Thus, an angle formed between the polarized beams S1135and S1134may be substantially equal to the angle formed between the 0thorder diffracted beam S1133and the −1st order diffracted beam S1132. 
For discussion purposes, inFIGS.11C and11D, the transmissive PVH grating1111may be a right-handed transmissive PVH grating configured to forwardly diffract an RHCP component of the incident beam (e.g., linearly polarized beam) S228and an LHCP component of the incident beam (e.g., linearly polarized beam) S228to the −1storder diffracted beam (e.g., LHCP beam) S1132and the 0thorder diffracted beam (e.g., LHCP beam) S1133(that is a transmitted beam with negligible diffraction), respectively. For discussion purposes, the O-plate1113may be configured to convert a handedness of the −1storder diffracted beam (e.g., LHCP beam) S1132to an opposite handedness, and substantially maintain a handedness of the 0thorder diffracted beam (e.g., LHCP beam) S1133. Thus, the O-plate1113may be configured to transmit the −1storder diffracted beam (e.g., LHCP beam) S1132as the polarized beam (e.g., RHCP beam) S1134with the handedness reversed, and transmit the 0thorder diffracted beam (e.g., LHCP beam) S1133as the polarized beam (e.g., LHCP beam) S1135with the handedness maintained. The polarized beam (e.g., RHCP beam) S1134and the polarized beam (e.g., LHCP beam) S1135with opposite handednesses may propagate toward a same surface of the recording medium layer210. In other words, the polarized beam (e.g., RHCP beam) S1134and the polarized beam (e.g., LHCP beam) S1135with opposite handednesses may propagate toward the recording medium layer210from the same side of the recording medium layer210. The polarized beam (e.g., RHCP beam) S1134and the polarized beam (e.g., LHCP beam) S1135with opposite handednesses may interfere with each other in space to generate a polarization interference pattern, to which the recording medium layer210may be exposed. 
In the embodiments shown inFIGS.11C and11D, the superposition of the polarized beam (e.g., RHCP beam) S1134and the polarized beam (e.g., LHCP beam) S1135may result in a superposed wave that has a substantially uniform intensity and a linear polarization with a spatially periodically varying orientation (or a spatially periodically varying linear polarization orientation angle). A pattern of the spatially periodically varying orientation of the linear polarization may define a grating pattern in the recording medium layer210. After the polarization interference pattern is recorded into the recording medium layer210(or after the recording medium layer210is optically patterned), a PSOE may be obtained according to the fabrication processes described above in connection withFIGS.4A-4D,FIGS.5A-5D, orFIGS.6A and6B. For example, in some embodiments, the recording medium layer210may include a surface recording material for surface-mediated photo-alignment. In such embodiments, a passive PSOE may be obtained through disposing the birefringent medium layer415on the patterned recording medium layer210and polymerizing the birefringent medium layer415, similar to that shown inFIGS.4C and4D. The obtained passive PSOE may be a PBP grating, a transmissive PVH grating, a reflective PVH grating, etc. In some embodiments, an active PSOE may be obtained through assembling two substrates, on which the patterned recording medium layers210are disposed, to form an LC cell, and filling active LCs into the LC cell, similar to that shown inFIGS.5A-5D. The obtained active PSOE may be a PBP grating, a transmissive PVH grating, a reflective PVH grating, etc. In some embodiments, the recording medium layer210may include a volume recording material for bulk-mediated photo-alignment. In such embodiments, the patterned recording medium layer210may function as a passive PSOE, similar to that shown inFIGS.6A-6B. The obtained passive PSOE may be a transmissive PVH grating, etc.
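For illustration only, the superposition described above — two circularly polarized plane waves with opposite handednesses producing a substantially uniform intensity and a linear polarization whose orientation rotates periodically across the recording plane — may be checked numerically. The wavelength and beam angles are assumed values for this sketch.

```python
import numpy as np

wavelength = 450e-9
theta = np.radians(30.0)
kx = 2 * np.pi / wavelength * np.sin(theta)   # transverse wavevector of each beam

x = np.linspace(0.0, 1e-6, 400)
rhcp = np.array([1, -1j]) / np.sqrt(2)
lhcp = np.array([1, 1j]) / np.sqrt(2)

# plane waves tilted by +theta and -theta carry opposite transverse phase ramps
field = (np.exp(1j * kx * x)[:, None] * rhcp
         + np.exp(-1j * kx * x)[:, None] * lhcp)

intensity = np.sum(np.abs(field) ** 2, axis=1)
# the superposed field is linearly polarized at every x; its orientation
# angle equals the local phase difference kx * x
orientation = np.arctan2(field[:, 1].real, field[:, 0].real)
```

The intensity is constant while the linear-polarization orientation rotates by π over a distance π/kx = λ/(2 sin θ), which is the in-plane pitch of the grating pattern written into the recording medium.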
Referring back toFIGS.11C and11D, the in-plane pitch (or the grating period) PR-inof the grating pattern defined in the recording medium layer210may be determined, in part, by the angle formed between the two polarized beams S1135and S1134, and the wavelength λ of the two polarized beams S1135and S1134(which is also the wavelength λ of the incident beam S228). The angle formed between the propagation directions of two polarized beams S1135and S1134(also referred to as the angle formed between the two polarized beams S1135and S1134) may be determined, in part, by the incidence angle θIof the incident beam S228, the wavelength λ of the incident beam S228, and the in-plane pitch PM-inof the transmissive PVH grating1111. Thus, through adjusting at least one of the incidence angle θIof the incident beam S228, the wavelength λ of the incident beam S228, or the in-plane pitch PM-inof the transmissive PVH grating1111, the in-plane pitch (or the grating period) PR-inof the grating pattern defined in the recording medium layer210may be adjustable. In the disclosed embodiments, the in-plane pitch (or the grating period) Pinof PSOEs fabricated based on the patterned recording medium layer210is presumed to be substantially the same as the in-plane pitch (or the grating period) PR-inof the grating pattern defined in the recording medium layer210. Thus, through configuring the incidence angle θIof the incident beam S228, the wavelength λ of the incident beam S228, and the in-plane pitch PM-inof the transmissive PVH grating1111, PSOEs with any suitable in-plane pitch Pinmay be fabricated. For example, PSOEs with a fine in-plane pitch (e.g., 200 nm to 800 nm) may be fabricated.
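For illustration only, the tuning relation suggested above may be sketched with the standard two-beam interference formula: two beams separated by a full angle 2θhalf write a polarization grating of in-plane pitch PR-in = λ/(2 sin θhalf). The formula and the numerical values are assumptions for this sketch; the disclosure states only that the recorded pitch depends on the inter-beam angle and the wavelength.

```python
import numpy as np

def recorded_pitch(wavelength, full_angle_deg):
    """Pitch of the polarization grating written by two beams
    separated by full_angle_deg (standard two-beam interference)."""
    return wavelength / (2 * np.sin(np.radians(full_angle_deg / 2)))

p_wide = recorded_pitch(450e-9, 60.0)    # 450 nm pitch at a 60 deg full angle
p_narrow = recorded_pitch(450e-9, 30.0)  # coarser pitch at a 30 deg full angle
```

Widening the inter-beam angle (by changing θI, λ, or PM-in of the mask) makes the recorded pitch finer, which is how pitches in the 200 nm to 800 nm range become reachable.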
In some embodiments, the orientation of Bragg planes formed within the volume of the PSOEs (fabricated based on the patterned recording medium layer210) may be determined, in part, by the angle formed between the polarized beams S1135and S1134, and the propagation directions of the polarized beams S1135and S1134. The angle formed between the polarized beams S1135and S1134, and the propagation directions of the polarized beams S1135and S1134may be determined, in part, by the incidence angle θIof the incident beam S228, the wavelength λ of the incident beam S228, and the in-plane pitch PM-inof the transmissive PVH grating1111. Thus, through configuring the incidence angle θIof the incident beam S228, the wavelength λ of the incident beam S228, and the in-plane pitch PM-inof the transmissive PVH grating1111, Bragg planes of any suitable orientation may be formed within the volume of the PSOEs (fabricated based on the patterned recording medium layer210). In other words, PSOEs having Bragg planes of any suitable orientation within the volume may be fabricated. For example, in the embodiment shown inFIG.11C, the incident beam S228may be obliquely incident onto the transmissive PVH grating1111, or the incident beam S228may be an off-axis incident beam of the transmissive PVH grating1111. In some embodiments, through configuring the incidence angle θIof the incident beam S228, the wavelength λ of the incident beam S228, and the in-plane pitch PM-inof the transmissive PVH grating1111, the diffraction angle θ1Dof the 1storder diffracted beam S1132may have a substantially same value as that of the incidence angle θIof the incident beam S228and a sign opposite to that of the incidence angle θI, i.e., θ1D=−θI.
As the diffraction angle θ0Dof the 0thorder diffracted beam S1133is substantially equal to the incidence angle θIof the incident beam S228(i.e., θ0D=θI), the diffraction angles of the 0thorder diffracted beam S1133and the 1storder diffracted beam S1132have a substantially equal value and opposite signs, i.e., θ1D=−θ0D. An angle formed between the 0thorder diffracted beam S1133and the 1storder diffracted beam S1132may have a value that is two times the value of the incidence angle θIof the incident beam S228. When the incidence angle θIof the incident beam S228is presumed to be +θ, the diffraction angles of the 0thorder diffracted beam S1133and the 1storder diffracted beam S1132may be +θ and −θ, respectively. The angle formed between the 0thorder diffracted beam S1133and the 1storder diffracted beam S1132may be 2θ. In other words, the transmissive PVH grating1111may be configured to forwardly diffract the incident beam S228as the 1storder diffracted beam S1132and the 0thorder diffracted beam S1133having symmetric propagation directions. Accordingly, the two polarized beams S1135and S1134output from the O-plate1113may also have symmetric propagation directions. A recording of the interference pattern generated by the circularly polarized beams S1135and S1134with opposite handednesses and symmetric propagation directions may be referred to as a symmetric recording. In some embodiments, a PSOE fabricated based on the recording medium layer210subjected to the symmetric recording may have vertical Bragg planes formed within the volume of the PSOE. Although not shown, in some embodiments, the diffraction angles of the 0thorder diffracted beam S1133and the 1storder diffracted beam S1132have different values and opposite signs, i.e., θ1D≠−θ0D. In the embodiment shown inFIG.11D, the incident beam S228may be substantially normally incident onto the transmissive PVH grating1111, or the incident beam S228may be an on-axis incident beam of the transmissive PVH grating1111.
Thus, the incidence angle θIof the incident beam S228and the diffraction angle of the 0thorder diffracted beam S1133may be substantially zero. The diffraction angle θ1Dof the 1storder diffracted beam S1132may have a non-zero value, which may be determined by the wavelength λ of the incident beam S228and the in-plane pitch PM-inof the transmissive PVH grating1111. In other words, the transmissive PVH grating1111may be configured to forwardly diffract the incident beam S228as the 1storder diffracted beam S1132and the 0thorder diffracted beam S1133having asymmetric propagation directions. Accordingly, the two polarized beams S1135and S1134output from the O-plate1113may also have asymmetric propagation directions. A recording of the interference pattern generated by the polarized beams S1135and S1134with opposite handednesses and asymmetric propagation directions may be referred to as an asymmetric recording. In some embodiments, a PSOE fabricated based on the recording medium layer210subjected to the asymmetric recording may have slanted Bragg planes formed within the volume of the PSOE. Referring back toFIG.11A, in some embodiments, the transmissive PVH element1111shown inFIG.11Amay include a transmissive PVH lens (also referred to as1111for discussion purposes). The transmissive PVH lens may function as a spherical lens, an aspherical lens, a cylindrical lens, or a freeform lens, etc., depending on, for example, an in-plane orientation pattern of the optic axis of the transmissive PVH lens1111. For example, in some embodiments, the optic axis of the transmissive PVH lens1111may be configured with an in-plane orientation pattern, in which the orientation of the optic axis may continuously vary in at least two opposite in-plane directions from a center of the in-plane orientation pattern to the opposite peripheries of the in-plane orientation pattern with a varying (e.g., decreasing) pitch.
In some embodiments, within a volume of the transmissive PVH lens1111, the optic axis of the transmissive PVH lens1111may be twisted in a helical fashion. In some embodiments, the transmissive PVH lens1111may include a birefringent medium layer (e.g., an LC layer). The optically anisotropic molecules (e.g., LC molecules) of the birefringent medium layer may be configured with an in-plane orientation pattern. In the in-plane orientation pattern, the orientations of the optically anisotropic molecules (e.g., LC molecules) may continuously vary in at least two opposite in-plane directions. The at least two opposite in-plane directions may be opposite directions from a center of the in-plane orientation pattern to the opposite peripheries of the in-plane orientation pattern. The continuous variation of the orientations may exhibit a varying (e.g., decreasing) pitch. For example, the pitch may gradually decrease from the center to the opposite peripheries. In some embodiments, within a volume of the transmissive PVH lens1111, the orientation of the optically anisotropic molecules (e.g., LC molecules) may be twisted in a helical fashion. In such an embodiment, the transmissive PVH lens1111may function as a spherical lens. The transmissive PVH lens1111may function as a mask for surface recording or volume recording of a lens pattern into the recording medium layer210. The lens pattern may be a spherical lens pattern, an aspherical lens pattern, a cylindrical lens pattern, or a freeform lens pattern, etc. A PSOE fabricated based on the exposed recording medium layer210may be a polarization selective lens, such as, a polarization selective spherical lens, a polarization selective aspherical lens, a polarization selective cylindrical lens, or a polarization selective freeform lens, etc. FIG.11Eillustrates polarization selective diffractions of the transmissive PVH lens (e.g., spherical lens)1111, according to an embodiment of the present disclosure. 
The transmissive PVH lens1111may provide a polarization selective converging or diverging function via forward diffraction. The transmissive PVH lens1111may be considered as a transmissive PVH element with an optical power. In some embodiments, the transmissive PVH lens1111may be configured to substantially forwardly diffract and converge (or diverge) a polarized beam (e.g., circularly or elliptically polarized beam) having a predetermined handedness. In some embodiments, the transmissive PVH lens1111may be configured to substantially transmit (e.g., with negligible diffraction) a polarized beam (e.g., circularly or elliptically polarized beam) having a handedness that is opposite to the predetermined handedness. For example, a right-handed transmissive PVH lens may be configured to substantially forwardly diffract and converge (or diverge) an RHCP beam as an LHCP beam. In other words, a right-handed transmissive PVH lens may be configured to converge (or diverge) the RHCP beam as the LHCP beam via forward diffraction. A right-handed transmissive PVH lens may also be configured to substantially transmit (e.g., with negligible diffraction) an LHCP beam as an LHCP beam. In some embodiments, a left-handed transmissive PVH lens may be configured to substantially forwardly diffract and converge (or diverge) an LHCP beam as an RHCP beam. In other words, a left-handed transmissive PVH lens may be configured to converge (or diverge) the LHCP beam as the RHCP beam via forward diffraction. A left-handed transmissive PVH lens may also be configured to substantially transmit (e.g., with negligible diffraction) an RHCP beam as an RHCP beam. Whether a transmissive PVH lens converges or diverges a polarized beam (e.g., circularly or elliptically polarized beam) having the predetermined handedness may depend on a sign of the optical power of the transmissive PVH lens.
For example, a transmissive PVH lens having a positive optical power may converge a polarized beam (e.g., circularly or elliptically polarized beam) having the predetermined handedness. A transmissive PVH lens having a negative optical power may diverge a polarized beam (e.g., circularly or elliptically polarized beam) having the predetermined handedness. For discussion purposes,FIG.11Eshows the transmissive PVH lens1111as a right-handed transmissive PVH lens having a positive optical power. As shown inFIG.11E, the transmissive PVH lens1111may be configured to converge an RHCP beam1170as an LHCP beam (e.g., the 1storder diffracted beam)1172, while substantially forwardly diffracting the RHCP beam1170. The transmissive PVH lens1111may also be configured to substantially transmit (e.g., with negligible diffraction) an LHCP beam1175as an LHCP beam (e.g., the 0thorder diffracted beam)1177. For discussion purposes,FIG.11Eshows that the RHCP beam1170and the LHCP beam1175may have planar wavefronts. The LHCP beam (e.g., the 1storder diffracted beam)1172may have a non-planar wavefront, and the LHCP beam (e.g., the 0thorder diffracted beam)1177may have a planar wavefront. For a linearly polarized input beam or an unpolarized input beam including an RHCP component and an LHCP component, the transmissive PVH lens1111having a positive optical power may be configured to converge the RHCP (or LHCP) component while substantially forwardly diffracting the RHCP (or LHCP) component, and substantially transmit (e.g., with negligible diffraction) the LHCP (or RHCP) component. The transmissive PVH lens1111having a negative optical power may be configured to diverge the RHCP (or LHCP) component while substantially forwardly diffracting the RHCP (or LHCP) component, and substantially transmit (e.g., with negligible diffraction) the LHCP (or RHCP) component.
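The component-wise behavior just described can be illustrated with Jones vectors. This is a sketch rather than the disclosure's implementation, and the circular-basis convention below is an assumption:

```python
import numpy as np

# Assumed circular-basis convention: RHCP = (1, -i)/sqrt(2), LHCP = (1, i)/sqrt(2).
RHCP = np.array([1, -1j]) / np.sqrt(2)
LHCP = np.array([1, 1j]) / np.sqrt(2)

def circular_components(jones):
    """Split an input Jones vector into its RHCP and LHCP amplitudes.

    A right-handed transmissive PVH lens with positive optical power acts
    only on the RHCP amplitude (diffracting it into a converging, handedness-
    flipped 1st order) and passes the LHCP amplitude as the 0th order.
    """
    a_rhcp = np.vdot(RHCP, jones)  # amplitude sent into the 1st order
    a_lhcp = np.vdot(LHCP, jones)  # amplitude transmitted as the 0th order
    return a_rhcp, a_lhcp

# A linearly (x-)polarized input splits equally between the two orders.
a_r, a_l = circular_components(np.array([1.0, 0.0]))
print(abs(a_r) ** 2, abs(a_l) ** 2)  # half of the power in each order
```

The equal split for a linearly polarized input is why both a converging 1st order and a transmitted 0th order emerge, as in the diffraction behavior described above.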
Thus, for a linearly polarized input beam or an unpolarized input beam having a planar wavefront, the transmissive PVH lens1111may be configured to output a 1storder diffracted beam having a non-planar wavefront and a 0thorder diffracted beam having a planar wavefront. The 1storder diffracted beam and the 0thorder diffracted beam may be polarized beams (e.g., circularly or elliptically polarized beams) having the same handedness. FIG.11Fschematically illustrates a diagram of the transmissive PVH element1111and the O-plate1113, which may be included in the system1100shown inFIG.11A, according to an embodiment of the present disclosure. The incident beam S228may be at least partially polarized. For discussion purposes, inFIG.11F, the incident beam S228may be a linearly polarized beam. The transmissive PVH lens1111may be configured to forwardly diffract the incident beam (e.g., linearly polarized beam) S228as a 1storder diffracted beam S1142and a 0thorder diffracted beam S1143(that is a transmitted beam with negligible diffraction). The 1storder diffracted beam S1142and the 0thorder diffracted beam S1143may be polarized beams having the same handedness, e.g., two circularly polarized beams having the same handedness. At least one of the 1storder diffracted beam S1142or the 0thorder diffracted beam S1143may have a non-planar wavefront. The O-plate1113may be configured to convert the 1storder diffracted beam S1142and the 0thorder diffracted beam S1143(which are polarized beams having the same handedness) to two polarized beams S1144and S1145having opposite handednesses, e.g., two circularly polarized beams having opposite handednesses. The polarized beams S1144and S1145having opposite handednesses may interfere with one another in space to generate a polarization interference pattern, which defines a lens pattern (e.g., a spherical lens pattern, an aspherical lens pattern, a cylindrical lens pattern, or a freeform lens pattern, etc.)
in the recording medium layer210, which is exposed to the polarization interference pattern. A PSOE fabricated based on the optically patterned recording medium layer210, via, e.g., the fabrication processes shown inFIGS.4A-4D,FIGS.5A-5D, orFIGS.6A and6B, may be a polarization selective lens, e.g., a polarization selective spherical lens, a polarization selective aspherical lens, a polarization selective cylindrical lens, or a polarization selective freeform lens, etc. For example, the polarization selective lens may be a PBP lens, a PVH lens, etc. For discussion purposes, in the embodiment shown inFIG.11F, the transmissive PVH element1111may be a right-handed transmissive PVH lens1111functioning as a spherical lens having a positive optical power. The transmissive PVH element1111may have an optical axis1117that is a symmetric axis of the transmissive PVH element1111and perpendicular to the surface of the transmissive PVH element1111. For discussion purposes, the incident beam S228may be a linearly polarized beam having a planar wavefront, and substantially normally incident onto the transmissive PVH lens1111(e.g., the incident angle θi=0). The incident beam S228may be parallel to the optical axis1117of the transmissive PVH lens1111, and spaced apart from the optical axis1117of the transmissive PVH lens1111. The transmissive PVH lens1111may be configured to forwardly diffract an RHCP component of the incident beam (e.g., linearly polarized beam) S228as a 1storder diffracted beam (e.g., LHCP beam) S1142, and forwardly diffract an LHCP component of the incident beam (e.g., linearly polarized beam) S228as a 0thorder diffracted beam (e.g., LHCP beam) S1143(that is a transmitted beam with negligible diffraction). The transmissive PVH lens1111may be configured to converge the RHCP component of the incident beam (e.g., linearly polarized beam) S228as the 1storder diffracted beam (e.g., LHCP beam) S1142. 
The 1storder diffracted beam (e.g., LHCP beam) S1142may have a non-planar wavefront (e.g., spherical converging wavefront). The convergence of the 1storder diffracted beam (e.g., LHCP beam) S1142may be determined, in part, by the optical power of the transmissive PVH lens1111. In other words, the optical properties of the transmissive PVH lens1111(functioning as a mask) may be encoded into the 1storder diffracted beam (e.g., LHCP beam) S1142. The 1storder diffracted beam (e.g., LHCP beam) S1142may be referred to as a signal beam. For discussion purposes,FIG.11Fshows the diffraction angle θ1Dof one ray included in the 1storder diffracted beam (e.g., LHCP beam) S1142as a negative diffraction angle (e.g., counter-clockwise from a normal1188of a light outputting surface of the PVH lens1111). Rays included in the 1storder diffracted beam (e.g., LHCP beam) S1142may have different negative diffraction angles. The 0thorder diffracted beam (e.g., LHCP beam) S1143may have a planar wavefront. The 0thorder diffracted beam (e.g., LHCP beam) S1143may be referred to as a reference beam. A diffraction angle θ0Dof the 0thorder diffracted beam (e.g., LHCP beam) S1143may be substantially zero. For discussion purposes, the O-plate1113may be configured to convert a handedness of the 1storder diffracted beam (e.g., LHCP beam) S1142to an opposite handedness, and substantially maintain a handedness of the 0thorder diffracted beam (e.g., LHCP beam) S1143.
For example, the O-plate1113may be configured to transmit the 1storder diffracted beam (e.g., LHCP beam) S1142as a polarized beam (e.g., RHCP beam) S1144with the handedness reversed, and transmit the 0thorder diffracted beam (e.g., LHCP beam) S1143as a polarized beam (e.g., LHCP beam) S1145with the handedness substantially maintained. The polarized beam (e.g., RHCP beam) S1144and the polarized beam (e.g., LHCP beam) S1145may interfere with one another to generate a polarization interference pattern, to which the recording medium layer210is exposed. The polarization interference pattern may be recorded in the recording medium layer210to define a lens pattern (e.g., spherical lens pattern) in the recording medium layer210. A PSOE fabricated based on the exposed recording medium layer210may be a polarization selective lens, e.g., a polarization selective spherical lens having a positive optical power. FIG.11Gschematically illustrates a diagram of the transmissive PVH element1111and the O-plate1113, which may be included in the system1100shown inFIG.11A, according to an embodiment of the present disclosure. For discussion purposes, in the embodiment shown inFIG.11G, the transmissive PVH element1111may be a right-handed transmissive PVH lens1111functioning as a spherical lens having a negative optical power. The transmissive PVH element1111may have an optical axis1117that is a symmetric axis of the transmissive PVH element1111and perpendicular to the surface of the transmissive PVH element1111. The incident beam S228may be at least partially polarized. For discussion purposes, the incident beam S228may be a linearly polarized beam having a planar wavefront, obliquely incident onto the transmissive PVH lens1111(e.g., the incident angle θi≠0). The incident beam S228may be non-parallel to the optical axis1117of the transmissive PVH lens1111. 
An intersection between the incident beam S228and the transmissive PVH lens1111may be spaced apart from an intersection between the optical axis1117and the transmissive PVH lens1111. The transmissive PVH lens1111may be configured to forwardly diffract an RHCP component of the incident beam (e.g., linearly polarized beam) S228as a 1storder diffracted beam (e.g., an LHCP beam) S1152, and forwardly diffract an LHCP component of the incident beam (e.g., linearly polarized beam) S228as a 0thorder diffracted beam (e.g., an LHCP beam) S1153(that is a transmitted beam with negligible diffraction). The transmissive PVH lens1111may be configured to diverge the RHCP component of the incident beam (e.g., linearly polarized beam) S228as the 1storder diffracted beam (e.g., LHCP beam) S1152. The 1storder diffracted beam (e.g., LHCP beam) S1152may have a non-planar wavefront (e.g., spherical diverging wavefront). The divergence of the 1storder diffracted beam (e.g., LHCP beam) S1152may be determined, in part, by the optical power of the transmissive PVH lens1111. In other words, the optical properties of the transmissive PVH lens1111(functioning as a mask) may be encoded into the 1storder diffracted beam (e.g., LHCP beam) S1152. The 1storder diffracted beam (e.g., LHCP beam) S1152may be referred to as a signal beam. For discussion purposes,FIG.11Gshows the diffraction angle θ1Dof one ray included in the 1storder diffracted beam (e.g., LHCP beam) S1152as a negative diffraction angle (counter-clockwise from a normal1199of the light outputting surface of the PVH lens1111). Rays included in the 1storder diffracted beam (e.g., LHCP beam) S1152may have different negative diffraction angles.
The 0thorder diffracted beam (e.g., LHCP beam) S1153may have a planar wavefront. The 0thorder diffracted beam (e.g., LHCP beam) S1153may be referred to as a reference beam. A diffraction angle θ0Dof the 0thorder diffracted beam (e.g., LHCP beam) S1153may be a positive diffraction angle. For discussion purposes, the O-plate1113may be configured to convert a handedness of the 1storder diffracted beam (e.g., LHCP beam) S1152to an opposite handedness, and substantially maintain a handedness of the 0thorder diffracted beam (e.g., LHCP beam) S1153. Thus, the O-plate1113may be configured to transmit the 1storder diffracted beam (e.g., LHCP beam) S1152as a polarized beam (e.g., an RHCP beam) S1154with the handedness reversed, and transmit the 0thorder diffracted beam (e.g., LHCP beam) S1153as a polarized beam (e.g., an LHCP beam) S1155with the handedness substantially maintained. The polarized beam (e.g., RHCP beam) S1154and the polarized beam (e.g., LHCP beam) S1155may interfere with one another in space to generate a polarization interference pattern, to which the recording medium layer210is exposed. The polarization interference pattern may be recorded in the recording medium layer210to define a lens pattern (e.g., spherical lens pattern) in the recording medium layer210. A PSOE fabricated based on the exposed recording medium layer210may be a polarization selective lens, e.g., a polarization selective spherical lens having a negative optical power. Referring toFIGS.11F and11G, for discussion purposes, the incident beam S228of the transmissive PVH lens1111(functioning as a mask) is presumed to have a planar wavefront, the 0thorder diffracted beam (e.g., LHCP beam) S1143or S1153is presumed to have a planar wavefront, and the 1storder diffracted beam (e.g., LHCP beam) S1142or S1152is presumed to have a non-planar wavefront. In some embodiments, the incident beam S228of the transmissive PVH lens1111(functioning as a mask) may be configured with a non-planar wavefront. 
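The angle-dependent handedness conversion performed by the O-plate1113can be mimicked with a toy Jones-matrix model. The angular band below is an invented example value, and the step-function retardance is a deliberate simplification of the angular-selective behavior described here:

```python
import numpy as np

RHCP = np.array([1, -1j]) / np.sqrt(2)  # assumed basis convention
LHCP = np.array([1, 1j]) / np.sqrt(2)

def retarder(delta):
    """Jones matrix of a linear retarder with retardance delta (fast axis on x)."""
    return np.array([[1, 0], [0, np.exp(1j * delta)]])

def o_plate(jones, incidence_deg, half_wave_band=(10.0, 60.0)):
    """Toy angle-selective retarder: half-wave (pi) retardance inside the
    band flips circular handedness; zero retardance outside the band
    leaves the beam unchanged. The band limits are assumed example values."""
    lo, hi = half_wave_band
    delta = np.pi if lo <= abs(incidence_deg) <= hi else 0.0
    return retarder(delta) @ jones

out_1st = o_plate(LHCP, 30.0)  # oblique 1st order: handedness reversed (RHCP)
out_0th = o_plate(LHCP, 0.0)   # near-normal 0th order: handedness kept (LHCP)
```

In this toy model the obliquely propagating signal beam sees a half-wave retardance and flips handedness, while the reference beam passes unchanged, matching the conversion behavior described for the O-plate1113.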
In some embodiments, the 0thorder diffracted beam (e.g., LHCP beam) S1143or S1153may be configured with a non-planar wavefront, and the 1storder diffracted beam (e.g., LHCP beam) S1142or S1152may be configured with a planar wavefront. In some embodiments, both of the 0thorder diffracted beam (e.g., LHCP beam) S1143or S1153and the 1storder diffracted beam (e.g., LHCP beam) S1142or S1152may be configured with non-planar wavefronts. For discussion purposes,FIGS.11F and11Gshow the transmissive PVH lens1111functioning as a spherical lens, which is for the purpose of explaining the principle of recording a polarization interference pattern that defines a lens pattern using the transmissive PVH lens1111as a mask. The transmissive PVH lens1111may function as any suitable lens, such as a spherical lens, an aspherical lens, a cylindrical lens, or a freeform lens, etc. The generated polarization interference pattern may define any suitable lens pattern in the recording medium layer210, such as a spherical lens pattern, an aspherical lens pattern, a cylindrical lens pattern, or a freeform lens pattern, etc. Referring back toFIG.11A, the system1100shown inFIG.11Ais for illustrative purposes to explain the principle of recording a polarization interference pattern in the recording medium layer210using the transmissive PVH element1111as a mask. A PSOE may be fabricated based on the exposed or patterned recording medium layer210. The principle disclosed herein may be applicable to any suitable systems including a transmissive PVH element functioning as a mask for recording a polarization interference pattern in the recording medium layer210, and is not limited to the system1100shown inFIG.11A. The transmissive PVH grating1111shown inFIGS.11B-11Dis for illustrative purposes to explain the principle of recording a polarization interference pattern that defines a grating pattern in the recording medium layer210using the transmissive PVH grating1111as a mask.
A PSOE or polarization hologram fabricated based on the exposed or patterned recording medium layer210may be a polarization selective grating. The transmissive PVH lens1111shown inFIGS.11E-11Gis for illustrative purposes to explain the principle of recording a polarization interference pattern that defines a lens pattern in the recording medium layer210using the transmissive PVH lens1111as a mask. A PSOE or polarization hologram fabricated based on the exposed or patterned recording medium layer210may be a polarization selective lens. The principle disclosed herein may be applicable to any suitable transmissive PVH masks, and is not limited to the transmissive PVH grating mask shown inFIGS.11B-11Dand the transmissive PVH lens mask shown inFIGS.11E-11G. In addition, the transmissive PVH masks as described herein can also be fabricated based on various other methods, such as holographic interference, laser direct writing, ink-jet printing, and various other forms of lithography. Thus, a “hologram” as described herein is not limited to creation by holographic interference, or “holography.” The O-plate1113shown inFIGS.11A,11C,11D,11F, and11Gdisclosed herein is for illustrative purposes. Any suitable compensation plate or other type of optical element may be used to convert the two polarized beams (e.g., circularly or elliptically polarized) with the same handedness output from the transmissive PVH element into two polarized beams (e.g., circularly or elliptically polarized) with opposite handednesses, following the same or similar design principles described herein with respect to the O-plate. Any suitable polarization conversion optical element, which is configured to convert the two polarized beams (e.g., circularly or elliptically polarized) with the same handedness output from the transmissive PVH element (that functions as a mask) into two polarized beams (e.g., circularly or elliptically polarized) with opposite handednesses, may be used.
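The polarization interference of the two opposite-handed beams output from such a polarization conversion element can be made concrete with a short sketch (an illustration under an assumed circular-basis convention, not the disclosure's implementation): two opposite-handed circular plane waves crossing at plus/minus a half angle superpose into a linearly polarized field whose azimuth rotates at a constant rate across the plane, which is exactly an in-plane grating orientation pattern.

```python
import numpy as np

RHCP = np.array([1, -1j]) / np.sqrt(2)  # assumed basis convention
LHCP = np.array([1, 1j]) / np.sqrt(2)

def recorded_azimuth(x, wavelength, half_angle):
    """Azimuth of the linear polarization formed by interfering two
    opposite-handed circular plane waves crossing at +/- half_angle.

    With delta = k*x*sin(half_angle), the total field is proportional to
    (cos(delta), sin(delta)): a linear polarization whose azimuth rotates
    linearly in x, i.e. a polarization grating pattern of period
    wavelength / (2*sin(half_angle)).
    """
    k = 2 * np.pi / wavelength
    d = k * x * np.sin(half_angle)
    E = RHCP * np.exp(1j * d) + LHCP * np.exp(-1j * d)
    return np.arctan2(E[1].real, E[0].real)

lam, half = 532e-9, np.deg2rad(10)  # assumed example recording geometry
period = lam / (2 * np.sin(half))   # orientation period of roughly 1.5 um
```

A polarization sensitive recording medium placed in this field records the rotating azimuth directly, which is the orientation-pattern mechanism the surrounding text describes.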
The O-plate or the waveplate213shown in other embodiments may be referred to as a polarization conversion optical element or polarization conversion element. The two polarized beams with the opposite handednesses may interfere with one another to generate any suitable polarization interference pattern, which may be recorded in the recording medium layer, thereby defining a suitable optic axis orientation pattern (e.g., an in-plane orientation pattern and/or 3D orientation pattern) in the recording medium layer210. Referring toFIG.2A,FIG.2C,FIG.2D, andFIG.11A, in some embodiments, the light source201may be a light-emitting diode (“LED”) light source, or an organic light-emitting diode (“OLED”) light source. In some embodiments, for cost efficiency, LEDs may be used in place of laser sources to reduce the manufacturing cost. A laser beam may be a substantially collimated beam, and may be expanded to a plane wave propagating in a single direction. An LED has an extended emitting area, on which different points may act as independent point sources. The point sources may be converted to plane waves propagating in different directions by a condenser lens. Thus, the condenser lens may output a superposition of multiple plane waves. In direct duplications of PVH elements using a PVH mask, each plane wave incident on the PVH mask may be converted, by the PVH mask, into two orthogonally circularly polarized beams propagating in different directions. The two orthogonally circularly polarized beams propagating in different directions may interfere with one another to generate a polarization fringe or polarization interference pattern. When the plane wave is tilted, the polarization fringe may shift accordingly. After adding up all shifted polarization fringes, the final polarization fringe profile may be smeared or may even disappear.
A series of experiments have been conducted to investigate the relationship between a duplicated polarization pitch and an allowable gap between the PVH mask and the substrate on which a recording medium is disposed (referred to as a sample plane), with different diameters of the LED aperture.FIG.13Aschematically illustrates a diagram of an experimental setup1300for investigating the relationship between the duplicated polarization pitch and the allowable gap between a mask and a sample plane, according to an embodiment of the present disclosure. As shown inFIG.13A, light emitted from an LED1301may be collimated by an aspherical condenser lens1303. The LED1301may have a 1.4 mm diameter. A linear polarizer1305may be disposed between the aspherical condenser lens1303and a PVH mask1307, and configured to convert the light emitted from the LED1301into a linearly polarized light propagating toward the PVH mask1307. The PVH mask1307may be a PVH lens with decreasing pitch from the lens center to the lens peripheries. As the gap between the PVH mask1307and a sample plane1309increases, a series of concentric rings that are not well-aligned may be observed under crossed polarizers, and the clear aperture of the duplicated area may decrease when observed under crossed polarizers.FIG.13Billustrates a duplicated lens pattern with a reduction of the clear aperture observed under crossed polarizers, when the gap between the PVH mask1307and the sample plane1309is about 1000 μm, according to an embodiment of the present disclosure. The pitches on both sides of the first cutoff ring are measured to be around 40 μm and 54 μm. Since each point on the LED1301may be regarded as an independent point source that generates a polarization pattern, the final polarization pattern on the sample plane may be the incoherent integration of the polarization patterns from the entire LED emitting area.
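This incoherent integration can be checked numerically. Summing fringes shifted by gap times tilt over a circular LED aperture yields a visibility proportional to 2*J1(u)/u (the Sombrero profile discussed below), whose first zero at u of about 3.8317 sets the maximum allowable gap. The sketch below assumes this model together with the stated 32 mm condenser focal length, and it reproduces the Table 1 values:

```python
import math

FIRST_ZERO_J1 = 3.8317  # first zero of the Bessel function J1

def max_gap(pitch, led_diameter, focal):
    """Maximum mask-to-sample gap before the duplicated fringe washes out.

    Each point of the LED tilts its plane wave by up to (D/2)/focal, which
    shifts the polarization fringe by gap * tilt. Incoherently summing the
    shifted fringes over a circular aperture gives a visibility 2*J1(u)/u
    with u = 2*pi*gap*(D/2)/(focal*pitch); the first zero of J1 defines
    the cutoff gap.
    """
    theta_max = (led_diameter / 2) / focal
    return FIRST_ZERO_J1 * pitch / (2 * math.pi * theta_max)

f = 32e-3  # focal length of the condenser lens stated in the text
print(round(max_gap(1e-6, 1.4e-3, f) * 1e6, 1))    # 27.9 (um)
print(round(max_gap(310e-9, 1.4e-3, f) * 1e6, 2))  # 8.64 (um)
print(round(max_gap(1e-6, 0.1e-3, f) * 1e6))       # 390 (um)
```

The computed gaps scale linearly with pitch and inversely with LED diameter, consistent with the trend that a smaller LED exhibits increased spatial coherence and permits a larger gap.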
The mathematical calculation reveals that the alignment quality may be determined by the Fourier transform of the aperture of the LED1301. For a circularly shaped LED, its Fourier transform may be a Sombrero function with a first zero located between the two measured pitches. The first zero is defined as the minimal duplicable pitch, which may be used to calculate the maximum allowable gap between the PVH mask1307and the sample plane1309. Table 1 shows maximum allowable gaps between the PVH mask1307and the sample plane1309with different LED diameters, at 1 μm and 310 nm polarization pitch at the sample plane1309through direct duplication. The focal length of the aspherical condenser lens1303is 32 mm. The LED1301has a circular profile. As the diameter of the LED1301decreases, the LED1301may exhibit an increased spatial coherence, and the maximum allowable gap for duplication may be increased. As shown in Table 1, when the LED1301has a 0.1 mm diameter, the maximum gap between the PVH mask1307and the sample plane1309is 390 μm, which may be tight for some applications. Such a restriction on the maximum gap between the PVH mask1307and the sample plane1309may not exist when using a laser as the light source in some applications.

TABLE 1
Maximum gap with varied LED diameter at 1 μm and 310 nm pitch at the sample plane.

Pitch     LED diameter     Max. gap
1 μm      1.4 mm           27.9 μm
1 μm      1 mm             39.0 μm
1 μm      0.5 mm           78.0 μm
1 μm      0.1 mm           390 μm
310 nm    1.4 mm           8.64 μm
310 nm    1 mm             12.1 μm
310 nm    0.5 mm           24.2 μm
310 nm    0.1 mm           121 μm

FIG.12illustrates a flowchart showing a method1200for fabricating a PSOE (e.g., a polarization hologram), according to an embodiment of the present disclosure. As shown inFIG.12, the method1200may include directing an input beam to a mask (Step1210). In some embodiments, the mask may be a transmissive PVH element. In some embodiments, the input beam may be at least partially polarized. In some embodiments, the input beam may be a linearly polarized beam.
In some embodiments, the input beam may be an unpolarized beam. The method1200may include forwardly diffracting, by the mask, the input beam as a first set of two polarized beams (Step1220). In some embodiments, the first set of two polarized beams may include circularly and/or elliptically polarized beams. In some embodiments, the two polarized beams in the first set may have the same handedness. In some embodiments, the two polarized beams having the same handedness may include a 0thorder diffracted beam and a 1storder diffracted beam. In some embodiments, the 0thorder diffracted beam and the 1storder diffracted beam may have a substantially same light intensity. In some embodiments, the 0thorder diffracted beam and the 1storder diffracted beam may have different light intensities. In some embodiments, the transmissive PVH element functioning as a mask may include a transmissive PVH grating. The transmissive PVH element functioning as a mask may be referred to as a PVH mask or a transmissive PVH mask. In some embodiments, the 0thorder diffracted beam and the 1storder diffracted beam may have planar wavefronts. In some embodiments, the diffraction angles of the 0thorder diffracted beam and the 1storder diffracted beam may have a substantially same value and opposite signs. In some embodiments, the propagation directions of the 0thorder diffracted beam and the 1storder diffracted beam may be symmetric with respect to a normal of a surface of the mask (e.g., the PVH mask). In some embodiments, the propagation directions of the 0thorder diffracted beam and the 1storder diffracted beam may be asymmetric with respect to a normal of a surface of the mask (e.g., the PVH mask). In some embodiments, the PVH mask may include a transmissive PVH lens. In some embodiments, at least one of the 0thorder diffracted beam or the 1storder diffracted beam may have a non-planar wavefront.
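For the grating-mask case in Step1220, the propagation directions of the two output orders follow the grating equation, and a short plane-wave sketch (with assumed example values) shows why direct duplication reproduces the mask period: the fringe period formed by the interfering 0th and 1st orders equals the period of the mask itself.

```python
import math

def first_order_angle_deg(wavelength, mask_period, incidence_deg=0.0):
    """1st-order direction from the grating equation:
    sin(theta_1) = sin(theta_i) + wavelength / mask_period."""
    s = math.sin(math.radians(incidence_deg)) + wavelength / mask_period
    return math.degrees(math.asin(s))

def fringe_period(wavelength, theta_a_deg, theta_b_deg):
    """Fringe period of two interfering plane waves at the given angles."""
    return wavelength / abs(
        math.sin(math.radians(theta_a_deg)) - math.sin(math.radians(theta_b_deg)))

lam, mask_period = 532e-9, 1e-6  # assumed example values
theta_1 = first_order_angle_deg(lam, mask_period)  # diffracted 1st order
theta_0 = 0.0  # the 0th order keeps the (normal) incidence direction
print(fringe_period(lam, theta_0, theta_1))  # ~1e-06 m: equals the mask period
```

Under this plane-wave model the recorded period is independent of the wavelength, since the wavelength cancels between the grating equation and the two-beam interference formula.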
The method1200may include converting, by a polarization conversion element, the first set of two polarized beams into a second set of two polarized beams having opposite handednesses, the two polarized beams having opposite handednesses interfering with one another to generate a polarization interference pattern (Step1230). In some embodiments, the polarization conversion element may include a compensation plate, such as an O-plate. The compensation plate may be indirectly optically coupled to the PVH mask with an intermediate optical element disposed therebetween. The intermediate optical element may or may not change at least one of the polarization or the propagating direction of the beams. In some embodiments, the compensation plate may be directly optically coupled to the PVH mask without an optical element disposed and/or without a gap therebetween. In some embodiments, the two polarized beams having opposite handednesses in the second set may have planar wavefronts. In some embodiments, an angle formed between the propagation directions of the two polarized beams having opposite handednesses in the second set may be substantially equal to the angle formed between the propagation directions of the 0thorder diffracted beam and the 1storder diffracted beam (or the two polarized beams having the same handedness in the first set) output from the PVH mask. In some embodiments, the two polarized beams having opposite handednesses in the second set may have a substantially equal light intensity. In some embodiments, the two polarized beams having opposite handednesses may have different light intensities. In some embodiments, the compensation plate may include an O-plate configured to provide an angular selective phase retardance to the two polarized beams having the same handedness output from the PVH mask.
In some embodiments, the two polarized beams having the same handedness output from the PVH mask may include a first polarized beam (e.g., first circularly or elliptically polarized beam) having a first incidence angle relative to the O-plate, and a second polarized beam (e.g., second circularly or elliptically polarized beam) having a second incidence angle relative to the O-plate. In some embodiments, the O-plate may be configured to provide a half-wave retardance to the first polarized beam having the first incidence angle within a predetermined angle range, and provide a zero or full-wave retardance to the second polarized beam having the second incidence angle outside of the predetermined range. In some embodiments, the O-plate may be configured to convert a handedness of the first polarized beam to an opposite handedness, and substantially maintain the handedness of the second polarized beam. In some embodiments, the method1200may include additional steps that are not shown inFIG.12. In some embodiments, the method1200may include directing the two polarized beams (e.g., circularly or elliptically polarized beams) having opposite handednesses to a same surface of a polarization sensitive recording medium layer. The two polarized beams having opposite handednesses may interfere with one another in a predetermined space within which the polarization sensitive recording medium layer is located. The polarization sensitive recording medium layer may be exposed to the polarization interference pattern. During the exposure process, the polarization interference pattern may be recorded at (e.g., in or on) the polarization sensitive recording medium layer to define an orientation pattern of an optic axis of the polarization sensitive recording medium layer. In some embodiments, the orientation pattern of the optic axis of the polarization sensitive recording medium layer may correspond to a grating pattern. 
In some embodiments, the orientation pattern of the optic axis of the polarization sensitive recording medium layer may correspond to a lens pattern. In some embodiments, the orientation pattern of the optic axis of the polarization sensitive recording medium layer may correspond to a prism pattern. In some embodiments, the orientation pattern of the optic axis of the polarization sensitive recording medium layer may correspond to a waveplate pattern. In some embodiments, the polarization sensitive recording medium layer may include a photo-sensitive polymer (or photo-polymer), e.g., an amorphous polymer, an LC polymer, etc. In some embodiments, after being exposed to the polarization interference pattern, the polarization sensitive recording medium layer (also referred to as “exposed polarization sensitive recording medium layer”) may function as a polarization selective grating, such as a transmissive PVH grating, etc. In some embodiments, the method1200may also include annealing the exposed polarization sensitive recording medium layer in a predetermined temperature range. For example, when the polarization sensitive recording medium layer includes LC polymer, the predetermined temperature range may correspond to a liquid crystalline state of the LC polymer. In some embodiments, the polarization sensitive recording medium layer may include a photo-alignment material. The exposed polarization sensitive recording medium layer may function as a surface alignment layer. The method1200may also include forming a birefringent medium layer on the polarization sensitive recording medium layer. In some embodiments, the birefringent medium layer may include a birefringent medium with or without a chirality. For example, the birefringent medium layer may include at least one of LCs or RMs with or without a chirality. 
In some embodiments, the exposed polarization sensitive recording medium layer may be annealed in a predetermined temperature range corresponding to a nematic phase of the LCs or RMs. In some embodiments, the method1200may also include polymerizing the birefringent medium layer. In some embodiments, the polymerized birefringent medium layer may function as a polarization selective grating, such as a PBP grating, or a reflective or transmissive PVH grating, etc. In some embodiments, the method may include recording a plurality of polarization interference patterns to a plurality of regions or portions in the polarization sensitive recording medium layer. For example, a first polarization interference pattern may be generated using an input beam having a first wavelength, incident onto a first PVH mask at a first incidence angle. The first PVH mask may diffract the input beam having the first wavelength and the first incidence angle into a first set of two polarized beams (e.g., circularly or elliptically polarized beams) having the same handedness (e.g., the 0thorder diffracted beam and the 1storder diffracted beam). The compensation plate may convert the first set of two polarized beams having the same handedness into a second set of two polarized beams (e.g., circularly or elliptically polarized beams) having opposite handednesses, which may interfere with one another to generate the first polarization interference pattern. One or more first recording regions or portions of the polarization sensitive recording medium layer may be exposed to the first polarization interference pattern, which may be recorded in the one or more first recording regions. In some embodiments, a second polarization interference pattern may be recorded at one or more second recording regions or portions of the polarization sensitive recording medium layer.
In some embodiments, the method may include adjusting at least one of a wavelength of the input beam, an incidence angle of the input beam, or a relative position or a relative orientation between the polarization sensitive recording medium layer and the input beam incident onto the first PVH mask. The method may include forwardly diffracting, by the first PVH mask, the input beam as a second set of two polarized beams (e.g., circularly or elliptically polarized beams) having the same handedness (e.g., the 0th order diffracted beam and the 1st order diffracted beam). In some embodiments, the method may include replacing the first PVH mask with a second PVH mask, which may be different from the first PVH mask. The method may include forwardly diffracting, by the second PVH mask, the input beam as a third set of two polarized beams (e.g., circularly or elliptically polarized beams) having the same handedness (e.g., the 0th order diffracted beam and the −1st order diffracted beam). The method may include converting, by the compensation plate, the third set of two polarized beams (e.g., circularly or elliptically polarized beams) having the same handedness into a fourth set of two polarized beams (e.g., circularly or elliptically polarized beams) having opposite handednesses, which may interfere with one another to generate a second polarization interference pattern. The method may also include recording the second polarization interference pattern in one or more second regions of the polarization sensitive recording medium layer. Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware and/or software modules, alone or in combination with other devices.
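The effect of adjusting the wavelength or the incidence angle of the input beam on the recorded pattern can be estimated with the standard two-beam interference relation Λ = λ / (2 sin θ) for beams symmetrically incident at ±θ (a generic formula, not specific to the masks described above):

```python
import math

def recorded_period(wavelength, half_angle):
    """Fringe period of a two-beam interference pattern whose beams are
    symmetrically incident at +/- half_angle (radians): lam / (2 sin theta)."""
    return wavelength / (2 * math.sin(half_angle))

# Changing either knob retunes the period recorded in a given region:
p_10deg = recorded_period(532e-9, math.radians(10))
p_20deg = recorded_period(532e-9, math.radians(20))
```

A larger incidence angle (or shorter wavelength) writes a finer pattern, which is one way different regions of the recording medium layer can receive different patterns.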
In one embodiment, a software module is implemented with a computer program product including a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. In some embodiments, a hardware module may include hardware components such as a device, a system, an optical element, a controller, an electrical circuit, a logic gate, etc. Further, when an embodiment illustrated in a drawing shows a single element, it is understood that the embodiment or another embodiment not shown in the figures but within the scope of the present disclosure may include a plurality of such elements. Likewise, when an embodiment illustrated in a drawing shows a plurality of such elements, it is understood that the embodiment or another embodiment not shown in the figures but within the scope of the present disclosure may include only one such element. The number of elements illustrated in the drawing is for illustration purposes only, and should not be construed as limiting the scope of the embodiment. Moreover, unless otherwise noted, the embodiments shown in the drawings are not mutually exclusive, and they may be combined in any suitable manner. For example, elements shown in one figure/embodiment but not shown in another figure/embodiment may nevertheless be included in the other figure/embodiment. In any optical device disclosed herein including one or more optical layers, films, plates, or elements, the numbers of the layers, films, plates, or elements shown in the figures are for illustrative purposes only. In other embodiments not shown in the figures, which are still within the scope of the present disclosure, the same or different layers, films, plates, or elements shown in the same or different figures/embodiments may be combined or repeated in various manners to form a stack. Various embodiments have been described to illustrate the exemplary implementations. 
Based on the disclosed embodiments, a person having ordinary skills in the art may make various other changes, modifications, rearrangements, and substitutions without departing from the scope of the present disclosure. Thus, while the present disclosure has been described in detail with reference to the above embodiments, the present disclosure is not limited to the above described embodiments. The present disclosure may be embodied in other equivalent forms without departing from the scope of the present disclosure. The scope of the present disclosure is defined in the appended claims.
11860574

DETAILED DESCRIPTION

Technical solutions in some embodiments of the present disclosure will be described below clearly and completely in combination with the accompanying drawings. Obviously, the described embodiments are merely some but not all embodiments of the present disclosure. All other embodiments obtained based on the embodiments of the present disclosure by a person of ordinary skill in the art shall be included in the protection scope of the present disclosure. Unless the context requires otherwise, the term “comprise” and other forms thereof such as the third-person singular form “comprises” and the present participle form “comprising” throughout the description and the claims are construed as open and inclusive, i.e., “including, but not limited to”. In the description of the specification, the terms such as “one embodiment”, “some embodiments”, “exemplary embodiments”, “example”, “specific example”, or “some examples” are intended to indicate that specific features, structures, materials or characteristics related to the embodiment(s) or example(s) are included in at least one embodiment or example of the present disclosure. Schematic representations of the above terms do not necessarily refer to the same embodiment(s) or example(s). In addition, the specific features, structures, materials or characteristics may be included in any one or more embodiments or examples in any suitable manner. Hereinafter, the terms such as “first” and “second” are used for descriptive purposes only, and are not to be construed as indicating or implying the relative importance or implicitly indicating the number of indicated technical features. Thus, features defined as “first” and “second” may explicitly or implicitly include one or more of the features. In the description of the embodiments of the present disclosure, the term “a plurality of” and “the plurality of” each means two or more unless otherwise specified.
In the description of some embodiments, the terms such as “coupled” and “connected” and their derivatives may be used. For example, the term “connected” may be used in the description of some embodiments to indicate that two or more components are in direct physical or electric contact with each other. For another example, the term “coupled” may be used in the description of some embodiments to indicate that two or more components are in direct physical or electric contact. However, the term “coupled” or “communicatively coupled” may also mean that two or more components are not in direct contact with each other, but still cooperate or interact with each other. The embodiments disclosed herein are not necessarily limited to the contents herein. The phrase “at least one of A, B, and C” has a same meaning as the phrase “at least one of A, B, or C”, and they both include the following combinations of A, B, and C: only A, only B, only C, a combination of A and B, a combination of A and C, a combination of B and C, and a combination of A, B, and C. The phrase “A and/or B” includes the following three combinations: only A, only B, and a combination of A and B. The use of the phrase “applicable to” or “configured to” herein is meant as an open and inclusive language, which does not exclude devices that are applicable to or configured to perform additional tasks or steps. In addition, the use of the phrase “based on” is meant to be open and inclusive, since a process, step, calculation or other action that is “based on” one or more of the stated conditions or values may, in practice, be based on additional conditions or values exceeding those stated. Holographic technology is a technology for recording and reproducing a real three-dimensional image of an object by using interference and diffraction principles. The three-dimensional image reproduced by using the holographic technology has a strong stereoscopic effect and a real visual effect. 
Based on this, some embodiments of the present disclosure provide a holographic optical apparatus. As shown in FIG. 1, the holographic optical apparatus includes a beam splitting component 1, a transmission assembly 2, a focal length modulation component 3, and an optical element 4. The beam splitting component 1 may receive light emitted by a light source 7, split the received light into reference light 101 and signal light 102, and output the reference light 101 and the signal light 102. The reference light 101 and the signal light 102 are coherent light. That is, after entering the beam splitting component 1, the light emitted by the light source 7 (e.g., a display device or a projection device) may be split into the reference light 101 and the signal light 102. The reference light 101 and the signal light 102 are the coherent light. The reference light 101 and the signal light 102 will interfere with each other when meeting, and generate interference fringes. Optionally, the beam splitting component 1 may be a beam splitter. The transmission assembly 2 is disposed in a light-exit path of the beam splitting component 1. For example, the transmission assembly 2 may be directly disposed at a light-exit side of the beam splitting component 1. The transmission assembly 2 is used to transmit the reference light 101 to the optical element 4, and transmit the signal light 102 to each focal length modulation region of the focal length modulation component 3. Herein, the transmission assembly 2 transmits the signal light 102 to each focal length modulation region of the focal length modulation component 3. That is, through the transmission of the transmission assembly 2, signal light 102 exiting from the beam splitting component 1 each time may be directly transmitted to a focal length modulation region of a plurality of focal length modulation regions. Therefore, times for respective focal length modulation regions receiving the signal light 102 may be different.
For example, through the transmission of the transmission assembly 2, signal light 102 exiting from the beam splitting component 1 for a first time is transmitted to one of the focal length modulation regions of the focal length modulation component 3, and then signal light 102 exiting from the beam splitting component 1 for a second time is transmitted to another of the focal length modulation regions of the focal length modulation component 3, and so on. As shown in FIGS. 2A to 2D, the focal length modulation component 3 includes a plurality of focal length modulation regions 301, and focal lengths of the plurality of focal length modulation regions 301 are different from each other. That is, the focal length modulation regions 301 have different capabilities of modulating the signal light 102. For example, the focal length modulation regions 301 have different capabilities of modulating phases of the signal light 102. For example, the signal light 102 from the beam splitting component 1 is a plane wave, and thus the signal light 102 is parallel light before being modulated by the focal length modulation component 3. After the signal light 102 passes through the focal length modulation regions 301 of the focal length modulation component 3, the signal light 102 becomes spherical waves. In this case, images formed by the signal light 102 are located in focal planes of the focal length modulation regions 301 that the signal light 102 passes through, respectively. It will be seen therefrom that, in a case where the focal lengths of the plurality of focal length modulation regions 301 are different from each other, after the signal light 102 from the transmission assembly 2 passes through the focal length modulation regions 301 of the focal length modulation component 3, image distances of the images formed by the signal light 102 modulated by the focal length modulation regions 301 are different from each other.
For example, the focal length modulation regions 301 with different focal lengths may change the phases of the signal light to varying degrees, so that the image distances of the images formed by the signal light 102 modulated by the focal length modulation regions 301 may be changed. The optical element 4 may be disposed at a light-exit side of the focal length modulation component 3, and the optical element 4 includes a recording medium layer. As shown in FIGS. 2A to 2D, the recording medium layer 41 includes a plurality of recording regions 401, and the plurality of recording regions 401 are in one-to-one correspondence with the plurality of focal length modulation regions 301, that is, each recording region 401 is located in a light-exit path of a focal length modulation region 301. The recording medium layer 41 is used to record, in each recording region 401, interference fringes generated by signal light 102 reaching the recording region 401 through a focal length modulation region 301 corresponding to the recording region 401 and the reference light 101. It will be understood that, similar to a manner in which the signal light 102 is sequentially transmitted to focal length modulation regions 301 of the focal length modulation component 3, the reference light 101 exiting from the beam splitting component 1, after being transmitted by the transmission assembly 2, may also separately reach recording regions 401 in the recording medium layer 41 of the optical element 4. That is, through the transmission of the transmission assembly 2, reference light 101 exiting from the beam splitting component 1 each time may be directly transmitted to a recording region 401 of the plurality of recording regions 401. Therefore, times for recording regions 401 receiving the reference light 101 may be different.
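The statement that regions with different focal lengths form images at different image distances follows from the Gaussian thin-lens formula; a minimal sketch, with illustrative focal lengths that are not taken from the disclosure:

```python
def image_distance(focal_length, object_distance=float('inf')):
    """Gaussian thin-lens image distance, 1/v = 1/f - 1/u (u a positive
    object distance).  For the plane-wave signal light described above,
    u -> infinity and the image lies in the focal plane, v = f."""
    if object_distance == float('inf'):
        return focal_length
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# Four hypothetical regions with distinct focal lengths give four
# distinct image distances, hence four depths of field:
focal_lengths = [0.10, 0.15, 0.20, 0.25]
image_distances = [image_distance(f) for f in focal_lengths]
```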
In this case, the reference light 101 transmitted to each recording region 401 and the signal light 102 modulated by the focal length modulation region 301 corresponding to the recording region 401 in the focal length modulation component 3 may synchronously reach the recording region 401. In addition, it is also possible that reference light 101 exiting from the beam splitting component 1 may simultaneously reach each recording region 401. Herein, the optical element 4 may only include the recording medium layer 41, and in this case, the recording medium layer 41 is the optical element 4. Of course, in addition to the recording medium layer 41, the optical element 4 may further include a transparent carrier substrate, such as a glass substrate, and the recording medium layer 41 may be fixedly disposed on the transparent carrier substrate. It will be noted that, the plurality of recording regions 401 may be in one-to-one correspondence with the plurality of focal length modulation regions 301. That is, the signal light 102, after exiting from any focal length modulation region 301 of the focal length modulation component 3, travels to a recording region 401 corresponding to the focal length modulation region 301. It will be seen from the above description that, the signal light 102 separately travels to each focal length modulation region 301 of the focal length modulation component 3. Accordingly, as shown in FIGS. 2A to 2D, the signal light 102 may separately travel to the recording regions 401 corresponding to the focal length modulation regions 301. That is to say, the signal light 102, after being transmitted to any focal length modulation region 301, immediately travels to the recording region 401 corresponding to the focal length modulation region 301. The signal light is transmitted substantially synchronously to any focal length modulation region 301 and to the recording region 401 corresponding to the focal length modulation region 301.
In each recording region 401 of the optical element 4, the signal light 102 and the reference light 101 generate interference fringes. In different recording regions 401, since the image distances of the images formed after the signal light 102 passes through the focal length modulation regions 301 with different focal lengths are different, phases of the generated interference fringes are different. The interference fringes include phase information and amplitude information of the signal light 102. For example, in a case where the light source 7 is a display device or a projection device, the interference fringes include phase information and amplitude information of an image. A person skilled in the art will understand that, an implementation of the holographic technology is divided into two steps, i.e., “interference recording” and “diffraction reproduction”. A recording process of the interference fringes is the “interference recording”. For the “diffraction reproduction”, the optical element 4 in which the interference fringes have been recorded needs to be irradiated with the reference light 101. The interference fringes generate a restored image under the irradiation of the reference light 101. Similar to a manner in which the reference light 101 enters the recording regions 401 in the recording medium layer 41 of the optical element 4 in the above “interference recording” process, in a “diffraction reproduction” process, the reference light 101 may separately enter recording regions 401 in the recording medium layer 41 of the optical element 4, or may simultaneously enter the recording regions 401 in the recording medium layer 41 of the optical element 4. In each recording region 401, the interference fringes may generate a complete restored image under the irradiation of the reference light 101, the restored image in each recording region 401 has an image distance, and image distances of restored images in the plurality of recording regions 401 are different from each other.
As shown in FIG. 3, the diffraction reproduction process may be as follows: for example, when interference fringes in a recording region 401 are restored, light exiting from the light source 7, after being split by the beam splitting component 1, may generate the same reference light 101 as the reference light 101 used when the interference fringes were formed. After the reference light 101 is irradiated on the interference fringes, the signal light 102 may be restored. After the interference fringes in the plurality of recording regions 401 are separately restored, a plurality of restored images with different depths of field may be obtained, and a reconstructed image 5 with a three-dimensional effect may be constructed after the plurality of restored images are combined. The reconstructed image 5 not only has a strong stereoscopic effect when viewed by human eyes 9, but also does not cause a problem of vergence-accommodation conflict. In the holographic optical apparatus provided by the present embodiment, the focal length modulation component 3 is provided to include a plurality of focal length modulation regions 301, and the focal lengths of the plurality of focal length modulation regions 301 are different from each other; and the recording medium layer 41 is provided to include a plurality of recording regions 401 in one-to-one correspondence with the focal length modulation regions 301, and the interference fringes with different phases may be separately recorded in the plurality of recording regions 401. Based on this, the restored images presented after the interference fringes in respective recording regions 401 are restored by the reference light 101 have different image distances.
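The two steps of interference recording and diffraction reproduction can be sketched at a single point with scalar complex amplitudes (a simplified scalar model for illustration; an actual hologram records the fringes across the whole recording region):

```python
import cmath

# R: unit-amplitude reference wave; S: signal wave (arbitrary sample values).
R = cmath.exp(1j * 0.3)
S = 0.5 * cmath.exp(1j * 1.1)

# Interference recording: the fringes store the superposition intensity,
# |R + S|^2 = |R|^2 + |S|^2 + R*conj(S) + conj(R)*S.
t = abs(R + S) ** 2

# Diffraction reproduction: re-illuminating the fringes with R yields,
# among other terms, |R|^2 * S -- a scaled replica of the signal wave.
reconstructed = R * (R.conjugate() * S)
```

Because the fringes retain both the phase and the amplitude of S, re-illumination with the same reference wave restores the signal wave, which is the basis of the restored images described above.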
Based on this, since the image distances of the restored images restored in the plurality of recording regions 401 are different, and the depths of field of the restored images with different image distances are different, when the human eyes 9 are viewing the reconstructed image 5 constructed by the plurality of restored images with different depths of field, the problem of vergence-accommodation conflict may be avoided, discomfort phenomena such as dizziness, diplopia and blurring may not occur, and the reconstructed image 5 viewed by the human eyes 9 has a strong stereoscopic effect and an optimal contrast. Optionally, as shown in FIG. 4, the transmission assembly 2 includes a first transmission portion 201, a second transmission portion 202 and a third transmission portion 203. The first transmission portion 201 is configured to transmit the reference light 101 to the optical element 4. For example, the first transmission portion 201 may sequentially transmit the reference light 101 to recording regions 401 in the recording medium layer 41 of the optical element 4. The second transmission portion 202 is configured to transmit the signal light 102 to part of all the focal length modulation regions 301 of the focal length modulation component 3. For example, as shown in FIG. 4, in a case where a number of focal length modulation regions 301 in the focal length modulation component 3 is four, the second transmission portion 202 is configured to transmit the signal light 102 to two focal length modulation regions 301 of all the focal length modulation regions 301 of the focal length modulation component 3. The second transmission portion 202 may also sequentially transmit the signal light 102 to the two focal length modulation regions 301 of the focal length modulation component 3 in a time-sharing manner. The third transmission portion 203 is configured to transmit the signal light 102 to a remaining part of all the focal length modulation regions 301 of the focal length modulation component 3.
For example, as shown in FIG. 4, in the case where the number of the focal length modulation regions 301 in the focal length modulation component 3 is four, the third transmission portion 203 is configured to transmit the signal light 102 to the other two focal length modulation regions 301 of all the focal length modulation regions 301 of the focal length modulation component 3. The third transmission portion 203 may also sequentially transmit the signal light 102 to the two focal length modulation regions 301 of the focal length modulation component 3 in a time-sharing manner. In the present embodiment, compared with a projection device using digital light processing in the related art, using the second transmission portion 202 and the third transmission portion 203 to separately transmit the signal light 102 to the plurality of focal length modulation regions of the focal length modulation component 3 has lower requirements on accuracy, a simpler structure and lower costs. Based on the above, optionally, a number of the focal length modulation regions 301 is at least three. In this case, for example, the second transmission portion 202 is configured to transmit the signal light 102 to one focal length modulation region 301 of all the focal length modulation regions 301 of the focal length modulation component 3, and the third transmission portion 203 is configured to transmit the signal light 102 to two focal length modulation regions 301 of all the focal length modulation regions 301 of the focal length modulation component 3. Optionally, as shown in FIG. 5A, at least one of the second transmission portion 202 and the third transmission portion 203 includes a first reflecting component 204 and a second reflecting component 205. The first reflecting component 204 is configured to receive the signal light 102 from the beam splitting component 1 and transmit the signal light 102 to the second reflecting component 205.
The second reflecting component 205 is a rotatable reflecting component, and the second reflecting component 205 is configured to transmit the received signal light 102 to different focal length modulation regions 301 at different rotation angles. For example, the number of the focal length modulation regions is four. In this case, as shown in FIG. 5A, after receiving the signal light 102 from the beam splitting component 1, a first reflecting component 204 of the second transmission portion 202 reflects the signal light 102 to a second reflecting component 205 of the second transmission portion 202, and the second reflecting component 205 of the second transmission portion 202 transmits the signal light 102 to a focal length modulation region 301. After that, the second reflecting component 205 of the second transmission portion 202 may be rotated to change an angle of the second reflecting component 205 of the second transmission portion 202 (e.g., the second reflecting component 205 of the second transmission portion 202 shown by the dashed lines in FIG. 5A), and the signal light 102 is transmitted to another focal length modulation region 301 by using the second reflecting component 205 of the second transmission portion 202. Then, the first reflecting component 204 of the second transmission portion 202 is removed, and the third transmission portion 203 is used to transmit the signal light 102. Similarly, after receiving the signal light, a first reflecting component 204 of the third transmission portion 203 transmits the signal light 102 to a second reflecting component 205 of the third transmission portion 203, and then the second reflecting component 205 of the third transmission portion 203 transmits the signal light 102 to yet another focal length modulation region 301.
Then, the second reflecting component 205 of the third transmission portion 203 is rotated to change its angle (e.g., the second reflecting component 205 of the third transmission portion 203 shown by the dashed lines in FIG. 5A), so as to transmit the signal light 102 to a remaining focal length modulation region 301. Based on this, when the signal light 102 is transmitted to each focal length modulation region 301 and travels to a corresponding recording region 401, the reference light 101 is synchronously transmitted to the recording region 401 of the optical element 4 corresponding to the focal length modulation region 301. In a case where the second transmission portion 202 and the third transmission portion 203 each include the first reflecting component 204 and the second reflecting component 205, the second transmission portion 202 and the third transmission portion 203 may transmit the signal light 102 relatively separately, and the signal light 102 may be transmitted to different focal length modulation regions 301 only by rotating the second reflecting component 205 in the second transmission portion 202 and the second reflecting component 205 in the third transmission portion 203, which is convenient and fast. Based on this, optionally, as shown in FIG. 5B, the first reflecting component 204 in the second transmission portion 202 and the first reflecting component 204 in the third transmission portion 203 are a same reflecting component. In this case, the first reflecting component 204 is a rotatable reflecting component, that is, the first reflecting component 204 may be used in both the second transmission portion 202 and the third transmission portion 203.
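The steering described above relies on the mirror relation that rotating a flat reflector by Δθ deflects the reflected beam by 2Δθ; a minimal sketch with illustrative angles (the specific angle values are assumptions, not taken from the disclosure):

```python
import math

def reflected_direction(incident_angle, mirror_angle):
    """Direction of the reflected beam for a flat mirror (all angles in
    radians from a fixed reference axis): rotating the mirror by d_theta
    steers the reflected beam by 2 * d_theta."""
    return 2 * mirror_angle - incident_angle

# One incoming signal beam steered to two different regions by rotating
# the second reflecting component (angle values are illustrative only):
beam_in = 0.0
toward_region_a = reflected_direction(beam_in, math.radians(22.5))
toward_region_b = reflected_direction(beam_in, math.radians(30.0))
```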
In this case, that the first reflecting component 204 is configured to receive the signal light 102 exiting from the beam splitting component 1, and transmit the signal light 102 to the second reflecting component 205 includes: transmitting, by the first reflecting component 204, the signal light 102 to the second reflecting component 205 of the second transmission portion 202 at a first rotation angle, and transmitting, by the first reflecting component 204, the signal light 102 to the second reflecting component 205 of the third transmission portion 203 at a second rotation angle. When the first reflecting component 204 is at the first rotation angle (e.g., shown by the solid line in FIG. 5B) and at the second rotation angle (e.g., shown by the dashed line in FIG. 5B) respectively, the first reflecting component 204 may separately transmit the signal light 102 from the beam splitting component 1 to the second reflecting component 205 of the second transmission portion 202 and the second reflecting component 205 of the third transmission portion 203. By setting the first reflecting component 204 as a rotatable reflecting component, the first reflecting component 204 in the second transmission portion 202 may be further used as the first reflecting component 204 in the third transmission portion 203, so that an entire structure of the holographic optical apparatus may be simple and easy to construct. Based on this, optionally, as shown in FIGS. 5A and 5B, a number of the focal length modulation regions 301 is four.
In this case, that the second reflecting component 205 is configured to transmit the received signal light 102 to different focal length modulation regions 301 at different rotation angles includes: transmitting, by the second reflecting component 205 of the second transmission portion 202, the received signal light 102 to a first focal length modulation region 301 at a third rotation angle; transmitting, by the second reflecting component 205 of the second transmission portion 202, the received signal light 102 to a second focal length modulation region 301 at a fourth rotation angle; transmitting, by the second reflecting component 205 of the third transmission portion 203, the received signal light 102 to a third focal length modulation region 301 at a fifth rotation angle; and transmitting, by the second reflecting component 205 of the third transmission portion 203, the received signal light 102 to a fourth focal length modulation region 301 at a sixth rotation angle. Based on the above, when the signal light 102 is transmitted to the second reflecting component 205 of the second transmission portion 202, the second reflecting component 205 of the second transmission portion 202 transmits the signal light 102 to a focal length modulation region 301 at the third rotation angle (e.g., the second reflecting component 205 of the second transmission portion 202 shown by the solid lines in FIGS. 5A and 5B). After that, the second reflecting component 205 of the second transmission portion 202 may be rotated to change its rotation angle to the fourth rotation angle (e.g., the second reflecting component 205 of the second transmission portion 202 shown by the dashed lines in FIGS. 5A and 5B), so as to transmit the signal light 102 to another focal length modulation region 301.
When the signal light 102 is transmitted to the second reflecting component 205 of the third transmission portion 203, the second reflecting component 205 of the third transmission portion 203 transmits the signal light 102 to yet another focal length modulation region 301 at the fifth rotation angle (e.g., the second reflecting component 205 of the third transmission portion 203 shown by the solid lines in FIGS. 5A and 5B). After that, the second reflecting component 205 of the third transmission portion 203 may be rotated to change its angle to the sixth rotation angle (e.g., the second reflecting component 205 of the third transmission portion 203 shown by the dashed lines in FIGS. 5A and 5B), so as to transmit the signal light 102 to the remaining focal length modulation region 301. In the case where there are four focal length modulation regions 301, the second transmission portion 202 may transmit the signal light 102 to two focal length modulation regions 301. Similarly, the third transmission portion 203 may also transmit the signal light 102 to the remaining two focal length modulation regions 301. The second transmission portion 202 and the third transmission portion 203 may be fully utilized. Optionally, the first reflecting component 204 and the second reflecting component 205 are both micro electro-mechanical system (MEMS) micro-mirrors. A MEMS micro-mirror is manufactured by using optical MEMS technology, and a low-light-level mirror and a MEMS driver are integrated to form the MEMS micro-mirror. The MEMS driver is electromagnetically driven, and has advantages of low driving voltage, no need for a booster chip, and high driving frequency, so that the MEMS driver may drive the low-light-level mirror to twist a certain angle.
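The four cases above amount to a small routing table from (transmission portion, rotation angle) to focal length modulation region; the sketch below encodes that mapping with hypothetical labels standing in for the portions, angles and regions named in the text:

```python
# Hypothetical labels; the mapping itself mirrors the four cases above.
ROUTING = {
    ("second_portion", "third_angle"): "region_1",
    ("second_portion", "fourth_angle"): "region_2",
    ("third_portion", "fifth_angle"): "region_3",
    ("third_portion", "sixth_angle"): "region_4",
}

def route(portion, angle):
    """Return the focal length modulation region that a given
    portion/angle combination steers the signal light to."""
    return ROUTING[(portion, angle)]
```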
As shown in FIG. 6, a principle of rotation of the MEMS micro-mirror 6 is that, a back face (i.e., a non-reflective surface) of the low-light-level mirror 601 is provided with, for example, four coils 6011 thereon, the MEMS driver is provided with an annular magnet 602 therein, and the four coils 6011 are in one-to-one correspondence with four positions on the annular magnet, i.e., the four positions A, B, C and D shown in FIG. 6. When currents are applied to coils 6011 corresponding to the positions A and C, the two coils 6011 may generate alternating current (AC) excitation signals with a phase difference of 90°, which causes the two coils 6011 to generate magnetic fields with opposite and alternating polarities. The magnetic fields generated by the two coils 6011 separately interact with the annular magnet 602 to generate torques in opposite directions, and the low-light-level mirror 601 will twist with a connecting line between the position B and the position D as an axis. Similarly, if currents are applied to the other two coils 6011 corresponding to the positions B and D, the low-light-level mirror 601 will twist with a connecting line between the position A and the position C as an axis, and an exit direction of the signal light 102 incident on the low-light-level mirror 601 may be changed after the low-light-level mirror 601 is twisted. Using the MEMS micro-mirror 6 to change the direction of the signal light 102 may have a high control accuracy and be easy to operate. Based on the above, optionally, the first transmission portion 201 is a MEMS micro-mirror 6.
In a case where the first transmission portion 201 separately transmits the reference light 101 exiting from the beam splitting component 1 to the recording regions 401 in the recording medium layer 41 of the optical element 4, using the MEMS micro-mirror 6 as the first transmission portion 201 may facilitate adjustment of a transmission direction of the reference light 101, so that the reference light 101 transmitted by the beam splitting component 1 may be separately transmitted to the recording regions 401 of the optical element 4. Using the MEMS micro-mirror 6 to change the transmission direction of the reference light 101 may offer high control accuracy. In addition, in a case where the first reflecting component 204, the second reflecting component 205 and the first transmission portion 201 are all MEMS micro-mirrors 6, manufacture of the transmission assembly may be facilitated.

Optionally, the focal length modulation component 3 is a spatial light modulator 31. As shown in FIG. 7A, the spatial light modulator 31 includes an upper substrate 311, a lower substrate 312, a liquid crystal layer 313 located between the upper substrate 311 and the lower substrate 312, a lower polarizer 314 located on a side of the lower substrate 312 away from the upper substrate 311, and an upper polarizer 315 located on a side of the upper substrate 311 away from the lower substrate 312. The spatial light modulator 31 has a plurality of sub-units, and each sub-unit is provided with a first electrode 316 and a second electrode 317 therein. The first electrode 316 is disposed in the lower substrate 312, and the second electrode 317 is disposed in the lower substrate 312 or the upper substrate 311. It will be understood that the first electrode 316 and the second electrode 317 are insulated from each other. Optionally, as shown in FIG. 7A, the first electrode 316 is disposed in the lower substrate 312, and the second electrode 317 is disposed in the upper substrate 311.
First electrodes 316 in different sub-units are insulated from each other, and the second electrodes 317 in all the sub-units are electrically connected and integrated as a whole. Optionally, the first electrode 316 and the second electrode 317 are both disposed in the lower substrate 312, and they are disposed in different layers; first electrodes 316 in different sub-units are insulated from each other, the second electrodes 317 in all the sub-units are electrically connected and integrated as a whole, and the first electrode 316 is disposed at a side of the second electrode 317 proximate to the upper substrate 311. Optionally, the first electrode 316 and the second electrode 317 are both disposed in the lower substrate 312, and they are disposed in a same layer; first electrodes 316 in different sub-units are insulated from each other, second electrodes 317 in different sub-units are insulated from each other, and the first electrode 316 and the second electrode 317 each have a comb-tooth structure including a plurality of strip-shaped sub-electrodes.

The spatial light modulator 31 is a device that modulates a spatial distribution of light waves. The sub-units in the spatial light modulator 31 may be referred to as pixels of the spatial light modulator 31. By writing information, including information for controlling the pixels, to the spatial light modulator 31, and transferring the written information to the positions of the corresponding pixels through addressing, some parameters of the light waves, such as phases, may be modulated under control of the written information. Based on this, the pixels may be grouped, and each group is used as a focal length modulation region 301. For example, as shown in FIG. 7B, the pixels may be divided into four groups, each group is used as a focal length modulation region, and the pixels in each group are separately controlled, so that a focal length of the region where each group is located may be separately controlled.
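To make the pixel-grouping idea concrete, the sketch below (an editor's illustration, not taken from the patent) writes the standard thin-lens Fresnel phase profile φ(r) = −πr²/(λf) mod 2π into four pixel groups of a simulated modulator, one focal length per quadrant. The resolution, pixel pitch, wavelength and focal lengths are all assumed values.

```python
import numpy as np

n = 200                      # assumed pixels per side of the modulator
pitch = 8e-6                 # assumed pixel pitch (m)
wavelength = 532e-9          # assumed wavelength (m)

coords = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(coords, coords)
phase = np.zeros((n, n))

# Four pixel groups, each acting as one focal length modulation region:
# quadrant (row-half, col-half) -> assumed focal length in metres.
focal_lengths = {(0, 0): 0.10, (0, 1): 0.15, (1, 0): 0.20, (1, 1): 0.25}

for (i, j), f in focal_lengths.items():
    rows = slice(i * n // 2, (i + 1) * n // 2)
    cols = slice(j * n // 2, (j + 1) * n // 2)
    r2 = X[rows, cols] ** 2 + Y[rows, cols] ** 2
    # Thin-lens phase profile, wrapped into [0, 2*pi) as an SLM would display it.
    phase[rows, cols] = (-np.pi * r2 / (wavelength * f)) % (2 * np.pi)
```

Because each group receives its own focal length, the signal light crossing each quadrant is focused independently, which is the behaviour attributed to the focal length modulation regions 301 above.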
In the embodiments of the present disclosure, since the phases of the signal light 102 correspond to the image distances of the images formed after the signal light 102 passes through the focal length modulation regions 301, by using the spatial light modulator 31 to modulate the phases of the signal light 102, the image distances of the images formed after the signal light 102 passes through the focal length modulation regions 301 may be modulated. Using the spatial light modulator 31 to modulate the image distance of the image formed after the signal light 102 passes through the focal length modulation region 301 may have the advantages of fast response speed and high efficiency.

Optionally, as shown in FIG. 7C, the focal length modulation component 3 is a lens group 32; the lens group 32 includes a plurality of lenses with different focal lengths, and a region where each lens is located is a focal length modulation region 301. As shown in FIG. 7C, taking the lens group including four lenses with different focal lengths as an example, an image distance of an image formed by the signal light 102 exiting from each lens may be equal to a focal length of the lens. Using the lenses to directly modulate the signal light 102 may have the advantages of simplicity and convenience, and the focal length modulation component 3 composed of lenses may have a simple structure and be easy to manufacture.

As shown in FIG. 8, the signal light 102 has different image distances after being modulated by the focal length modulation component 3, and the interference fringes generated with the reference light 101 in the optical element 4 also include image distance information of the signal light 102. The restored images restored by using the interference fringes may include the image distance information, such as the image distances f1, f2, f3 and f4 shown in FIG. 8. The restored images with different image distances have different depths of field, and thus the optical element 4 has a plurality of depths of field.
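The statement that signal light exiting each lens images at that lens's focal length follows from the Gaussian thin-lens formula 1/f = 1/u + 1/v (object distance u and image distance v, both taken positive for a real object and image) when the incoming light is collimated (u → ∞). A brief sketch, with all numeric values chosen purely for illustration:

```python
def image_distance(f, u=float('inf')):
    # Gaussian thin-lens formula 1/f = 1/u + 1/v, solved for v.
    # A collimated beam (u -> infinity) images at v = f, matching the
    # lens-group description above.
    if u == float('inf'):
        return f
    return 1.0 / (1.0 / f - 1.0 / u)

# Four lenses with different (assumed) focal lengths give four image distances,
# analogous to the distances f1..f4 of FIG. 8.
for f in (0.10, 0.15, 0.20, 0.25):
    assert image_distance(f) == f
```

The same function also covers a finite object distance, where the image distance no longer equals the focal length.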
Optionally, as shown in FIGS. 5A and 5B, the holographic optical apparatus further includes a collimating beam expander 8. The collimating beam expander 8 is disposed at a light incident side of the beam splitting component 1, and is used to collimate and expand the light emitted by the light source 7. Light after collimation and expansion may be easily split into the reference light 101 and the signal light 102.

Based on the above description, optionally, the recording medium layer 41 is a photosensitive polymer layer, and the photosensitive polymer layer is disposed on the glass substrate of the optical element 4. The photosensitive polymer is highly sensitive to light, and its structure or properties will change significantly under the action of light, so it can be used to record the interference fringes. As shown in FIG. 9, taking the photosensitive polymer layer of the optical element 4 including four recording regions 401 as an example, each recording region 401 is used to record interference fringes formed by signal light 102 with a certain focal length and reference light 101 with a certain focal length. The plurality of recording regions 401 in the photosensitive polymer layer are discretely distributed, and at most only one glass substrate is required to support the photosensitive polymer layer. Therefore, a thickness of the entire optical element 4 is small, and using the optical element in the embodiments of the present disclosure to restore images may have an advantage of high display brightness.

As shown in FIGS. 1, 5A and 5B, some embodiments of the present disclosure further provide a holographic optical system, and the holographic optical system includes the light source 7 and the above holographic optical apparatus. The holographic optical system provided by the present embodiments has the same beneficial effects as the above holographic optical apparatus, and thus details will not be repeated herein.
Based on this, optionally, the light source 7 is a display device or a projection device. The display device or the projection device is used to output images, and each image is sequentially and separately output. All the light included in each image is split into reference light 101 and signal light 102 after passing through the beam splitting component 1. The reference light 101 and the signal light 102 will interfere with each other and form interference fringes in each recording region 401 of the optical element 4. Using the display device or the projection device to output the images may have the advantages of large storage capacity and easy control of the output of the images.

As shown in FIG. 3, some embodiments of the present disclosure further provide a holographic display system, and the holographic display system includes the light source 7, the beam splitting component 1, the first transmission portion 201 and the optical element 4. The beam splitting component 1 is configured to receive the light emitted by the light source 7 and output the reference light 101. The first transmission portion 201 is disposed in the light-exit path of the beam splitting component 1. For example, the first transmission portion 201 may be disposed on the light-exit side of the beam splitting component 1. The first transmission portion 201 is configured to transmit the reference light 101 to the optical element 4. The optical element 4 may be disposed on a light-exit side of the first transmission portion 201, and the optical element 4 includes the recording medium layer 41. The recording medium layer 41 includes the plurality of recording regions 401, and interference fringes are recorded in each recording region 401. The interference fringes are formed by the above holographic optical apparatus. The holographic display system provided by the present embodiments may achieve the "diffraction reproduction".
The beam splitting component 1, the first transmission portion 201 and the optical element 4 may be the same as the beam splitting component 1, the first transmission portion 201 and the optical element 4 in the above holographic optical apparatus, and details will not be repeated herein.

The foregoing descriptions are merely specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or replacements that a person skilled in the art could conceive of within the technical scope of the present disclosure shall be included in the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
11860575 | MODES OF CARRYING OUT THE INVENTION

The present invention proposes a free escapement mechanism 1 of a new kind, designed and arranged to exploit and combine, in a free escapement with direct impulses, the advantages of reliability, simplicity of adjustment and self-starting of a Swiss anchor escapement, well known to watchmakers for decades, while singularly improving its performance by significantly reducing the angle of lift at the regulating organ and allowing a large amplitude in rotation of the latter, thus offering a quality factor far superior to that of known escapements. In addition, direct impulse operation promotes high energy efficiency. The mechanism advantageously presents a very simple and compact structure, compatible with the use of regulating organs of the sprung balance type classically used in pocket watches or wristwatches, offering the unprecedented possibility of working at the level of the regulating organ with low amplitudes and high frequencies or, conversely, with large amplitudes of oscillation and lower frequencies, in all cases with increased performance, whether in terms of isochronism or chronometry, compared to the majority of existing anchor mechanisms.

These combined advantages are obtained according to the invention, and as shown in FIGS. 1 to 13, by a direct impulse free escapement mechanism 1 comprising an escapement wheel 2 having a series of peripheral teeth 3, a locking device 4 arranged to cooperate in abutment with a tooth 3 of the escapement wheel 2 in a locking position and in this position to undergo a draw, and impulse members 9 capable of being attached to a regulating organ 5, such as a sprung balance, so that the latter periodically receives an impulse from a tooth 3 of the escapement wheel 2 in order to maintain its oscillations, said teeth 3 and said impulse members 9 being configured and arranged so that the impulses occur outside the plane P1 of the escapement wheel 2.
This is made possible according to the invention by an original conformation of the teeth 3 of the escapement wheel 2, of which at least a part forms a projection perpendicular to the plane P1 of said escapement wheel, as well as by an adjusted configuration and orientation of the impulse members 9, in such a way that the respective impulse planes of said teeth 3 and of said impulse members 9 interact directly outside the plane P1 of said escapement wheel 2 at each alternation of the regulating organ, closest to the dead centre of the mechanism, therefore with a low angle of lift and a minimum of disturbance of the regulating organ at the impulse. The locking draw ensures free movement of the regulating organ during the locking phases over its entire amplitude of movement between the projecting parts of the teeth of the escapement wheel 2.

In practice, the teeth 3 of the escapement wheel form at their free end a protrusion, which has a length, measured perpendicular to the plane P1 of the escapement wheel 2, greater than the thickness of the felloe 22 of the escapement wheel 2. This protrusion defines a substantially triangular shape, which is represented schematically in the figures by a shoulder line at the end of each tooth 3. In addition, the impulse members 9, for example consisting of ruby pallets, of the escapement mechanism 1 of the invention are advantageously arranged integral in rotation with the regulating organ 5 such that their impulse plane describes a trace whose width, measured in a path of the escapement mechanism, is at most equal to half the pitch separating two teeth 3 of the escapement wheel 2. This allows the escapement 1 to be configured such that the angle of lift at the regulating organ 5 is between 10° and 35°, preferably between 15° and 30°, in other words lower than in any other escapement mechanism known to date.
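The trace-width constraint can be read as a simple geometric budget: for an escapement wheel of radius R with N teeth, the pitch measured on the circumcircle is 2πR/N, and the impulse-plane trace may occupy at most half of it. A small editor's sketch with hypothetical dimensions (not taken from the patent):

```python
import math

def max_trace_width(radius_m, n_teeth):
    # Pitch between adjacent teeth on the circumcircle C, of which the
    # impulse-plane trace may occupy at most half.
    pitch = 2 * math.pi * radius_m / n_teeth
    return pitch / 2

# Hypothetical wheel: 3 mm circumcircle radius, 15 teeth.
w = max_trace_width(3.0e-3, 15)   # trace-width budget, roughly 0.63 mm

# The angle-of-lift window at the regulating organ stated above.
lift_deg = 25                     # an assumed design value
assert 10 <= lift_deg <= 35       # the assumed value also sits in the preferred 15-30 range
```

Both numbers are illustrative only; the patent gives ranges, not dimensions.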
The locking device 4 is configured to disturb the operation of the regulating organ 5 as little as possible and thus to cooperate with the teeth 3 of the escapement wheel at least partially, and preferably completely, in the plane P1 of said escapement wheel 2. Combined with the particular configuration of the projections of the teeth 3, this gives the escapement wheel 2 the ability to cooperate alternately with the locking device 4 and the impulse members 9 in superimposed and secant parallel planes of said teeth 3. The impulse and locking phases of the escapement mechanism of the invention thus intervene not only separately from one another but in different planes, or levels, of the mechanism, which allows a very compact construction and a minimum of disturbances, while offering a wide choice of frequency and amplitude of operation of the regulating organ due to the possible entanglement of the circumcircles of the escapement wheel 2 and its teeth 3 and of the impulse members 9.

The escapement mechanism 1 of the invention is presented more particularly in the figures under two particular embodiments, shown respectively in FIGS. 1 to 7 and FIGS. 8 to 13. With reference first of all to the first embodiment of the escapement mechanism 1 of the invention, it comprises an escapement wheel 2 extending in a plane P1, provided with peripheral teeth 3 and rotatably mounted about an axis of rotation X1 perpendicular to this plane P1. Said teeth 3 define by their free ends a circular trajectory C during the rotation of the escapement wheel 2. In a conventional way, this escapement wheel 2 is associated with an escapement pinion 21 driven on a pivot of axis X1 common to the escapement wheel 2 and by which the latter can be coupled in use to the finishing gear train and the driving source of a watch movement in which the escapement mechanism 1 is integrated, to maintain the oscillations of a regulating organ 5 of the movement, rotatably mounted around an axis of rotation X2.
In accordance with the invention, the teeth 3 of the escapement wheel 2 each have a portion protruding from the surface of the escapement wheel 2 to allow transmission of impulses to the regulating organ 5 out of the plane P1 of the escapement wheel 2. These projections are formed in this embodiment by an impulse finger 31 projecting with respect to the plane P1 in order to engage, at each step of rotation of the escapement wheel 2, the impulse plane 9p of a first or second impulse pallet 9 fixed to an impulse plate 91 integral with the regulating organ 5 and extending in a plane P2 parallel to the plane P1 of the escapement wheel 2. Advantageously, said impulse pallets 9 are fixed to the plate 91 by any suitable means and they also extend, as can be seen from FIGS. 1, 3 and 5 in particular, projecting perpendicularly to the plane P2 of the plate 91 in the direction of the escapement wheel 2. The impulse fingers 31 of the teeth 3 of the escapement wheel 2 and the impulse pallets 9 integral with the regulating organ 5 are thus arranged "head to tail" to ensure their interaction outside said plane P1, and more particularly here between the planes P1 and P2 of the escapement wheel and impulse plate 91, respectively.

As shown, the impulse surfaces 9p of the impulse pallets 9, on which the fingers 31 of the teeth 3 of the escapement wheel 2 slide and act, have a flat surface. They can also advantageously have a curved, concave or convex shape, in order to provide a progressive acceleration of the impulse or to symmetrize the impulse on each of the impulse pallets 9 with respect to the axis X2 of the regulating organ 5. Thus, one can act directly on the path of the escapement, by acting on the movements of the impulse pallets 9 in relation to the escapement wheel 2, in terms of their angular values, their speeds and the transmitted torques. The escapement mechanism 1 also has a locking device 4, which is itself rotatable about an axis of rotation X3.
The axes of rotation X1, X2, X3 of the escapement wheel 2, the regulating organ 5 and the locking device 4 are preferably parallel to each other. The regulating organ 5, which is not part of the escapement mechanism 1 as such, may consist of a sprung balance well known to the watchmaker or any other oscillating regulating organ, such as for example a knife resonator as proposed by the applicant in patent application WO 2016/012281 A1.

The locking device 4 comprises a bar-shaped return lever 42 which is rotatably mounted on a pivot about the axis X3 and to which is attached an anchor 43, made of the same material as the return lever 42 and riveted or driven onto the pivot of rotation of the latter; at the ends of the arms of the anchor are arranged two locking pallets 41, each having a locking plane 41r for alternately forming an abutting locking surface for the teeth 3 of the escapement wheel 2 in two extreme positions of rotation of the return lever 42 about its axis X3, called locking positions, one of which is shown in FIGS. 5 and 6. The pivoting of the return lever 42 in rotation about the axis X3, to move the locking anchor 43 between the two locking positions, is controlled as in a conventional anchor escapement by the regulating organ 5 acting on one end of the return lever 42. This interaction takes place on each alternation, in a so-called unlocking position, by means of an unlocking pallet 6, for example formed by a pin 61 fixed to the impulse plate 91, acting on a complementary unlocking organ 44 formed at said first end of the return lever 42. This interaction induces the pivoting of the return lever 42 around the axis X3 and thus the unlocking of the locking anchor 43, more precisely of one of the locking pallets 41, from one tooth 3 of the escapement wheel 2 prior to an impulse given by another tooth 3 of the escapement wheel on one of the impulse members 9 of the regulating organ 5. In addition, the angular travel of the return lever 42 is also limited by limiting stops 7, e.g.
formed by pins, arranged on both sides of the second end of the return lever 42. These stops 7 determine the locking positions of the locking anchor 43 such that each locking pallet 41 is located in the trajectory C defined by the teeth 3 of the escapement wheel when the return lever 42 comes into contact with one of the stops 7. The return lever 42 and the locking anchor 43 are then held in place during the free course of the regulating organ 5 after the impulse by the draw effect of the escapement wheel 2 on the locking anchor 43 and the return lever 42.

As can be seen from FIGS. 2 and 3, the return lever 42 and the locking anchor 43 are arranged in relation to the regulating organ 5 and the escapement wheel 2 in such a way that, at the dead centre of the escapement shown in FIG. 1, the locking pallets 41 are located at a distance from the axis of rotation X1 of the escapement wheel 2 greater than the radius of the circumcircle C of the teeth 3 of the escapement wheel 2, while at the same time the ends of the impulse pallets 9 intersect said circumcircle C, and are thus located at a distance from the axis of rotation of the escapement wheel 2 smaller than the radius R. By this configuration, at the dead centre of the escapement 1, the impulse planes 9p of the impulse pallets 9 are located in the path of the fingers 31 of the teeth 3 of the escapement wheel, while the locking pallets 41 are located outside this path. Also, the rotation of the escapement wheel 2 necessarily involves the engagement by a finger 31 of an impulse plane 9p of an impulse pallet 9 and the driving of the regulating organ 5 in rotation around its axis. This ensures a self-starting character of the escapement mechanism 1 of the invention.
Another advantageous feature is that the impulse fingers 31 and the impulse pallets 9 are also shaped so that in each locking position the impulse pallets 9 can circulate fully between the fingers 31 of the teeth 3 of the escapement wheel 2, thus ensuring a maximum amplitude of angular deflection at the regulating organ 5, which in this case can be up to 300°. For this purpose, as shown in particular in FIGS. 6 and 7, the rear flank of the fingers 31 is chamfered, giving a triangular shape to the fingers 31, so as to free a play J at the rear end of each impulse pallet 9 when the regulating organ 5 passes freely in the locking position of the escapement wheel 2.

In the example shown in FIGS. 1 to 7, the complementary unlocking organ 44 is advantageously formed by an almond-shaped ring 443 in which the axis X2 of the regulating organ 5 extends. This ring 443 is configured so that during normal operation of the escapement mechanism 1 the control pallet 6 integral with the regulating organ 5 circulates without contact along the internal walls 444 of said ring, which thus define a control cam 8 for the unlocking of the return lever 42. An unlocking notch 45 is formed in the inner wall of said ring 443 in a position of alignment with a longitudinal axis of the return lever passing through the axis of rotation X3 thereof. This unlocking notch allows, in a conventional manner, the control pin 61 to fall at each alternation of the regulating organ so as to cause the return lever 42 to pivot and the locking anchor to be unlocked. Such a ring 443 has the advantage, by a very simple configuration, of providing not only the unlocking but also the safety of the escapement, the internal walls of the ring preventing, in case of impact, any untimely unlocking out of impulse. However, the complementary unlocking organ 44 could take a different form from that shown in FIGS. 1 to 7.
It may in particular be formed by a fork comprising two horns separated by a notch and devoid of a guard pin or the like, said horns being symmetrical with respect to the longitudinal axis of the return lever 42 passing through the axis of rotation X3 and the centre of the notch, and extending from said notch along an arc of a circle.

FIGS. 8 to 13 show a second embodiment of the escapement mechanism according to the invention. This embodiment differs from the previous one in that it simplifies the locking device 4, which has only one locking pallet 41 whose movement is controlled by a trigger-type unlocking device. In addition, only one out of two teeth 3 of the escapement wheel 2 actually participates in the impulses, by acting alternately on two parallel levels of impulses on either side of the plane P1 of the escapement wheel 2, as described below.

With reference to FIGS. 8 and 10 in particular, the escapement mechanism 1 of this second embodiment comprises an escapement wheel 2 extending in a plane P1, rotatably mounted about an axis of rotation X1 perpendicular to this plane P1. The escapement wheel 2 is provided with peripheral teeth 3 defining by their free ends a circular path C during the rotation of the escapement wheel 2. The number of teeth 3 of the escapement wheel 2 in this second embodiment is equal to twice that of the escapement wheel of the escapement mechanism according to the first embodiment, in order to compensate in particular for the removal of the second locking pallet 41 at the locking device 4, as will be described below. In a conventional way, the escapement wheel 2 is associated with an escapement pinion 21 driven on a pivot of axis X1 common to the escapement wheel 2 and by which the latter can be coupled in use to the finishing gear train and the driving source of a watch movement in which the escapement mechanism 1 is integrated, to maintain the oscillations of a regulating organ 5 of the movement, rotatably mounted around an axis of rotation X2.
In an original way, the teeth 3 of the escapement wheel 2 comprise a regular alternation of teeth 3 comprising an impulse bar 32, forming two symmetrical projections on either side of the plane P1 of the escapement wheel 2, and teeth 3 without projections, contained substantially in the plane P1, as in an ordinary escapement wheel 2. The impulse bars 32, shaped in practice like two fingers 31 of the first embodiment symmetrical with respect to the plane P1, are arranged to engage an impulse plane 9p of a first and a second impulse pallet 9 fixed respectively to an upper impulse plate 91 and a lower impulse plate 92 integral with the regulating organ 5 and extending respectively in two planes P2′, P2″ parallel to and symmetrical with respect to the plane P1 of the escapement wheel 2. The impulse pallets 9 also extend, as can be seen in FIGS. 8, 10 and 12 in particular, projecting perpendicular to the planes P2′, P2″ of the plates 91, 92 in the direction of the escapement wheel 2. Each impulse pallet 9 is thus arranged "head to tail" with one of the projecting portions of the impulse bars 32 formed on every second tooth of the escapement wheel, so as to provide impulses alternately between plane P1 and plane P2′ on the one hand and between plane P1 and plane P2″ on the other hand, on each alternation of the regulating organ 5.

As shown, the impulse surfaces 9p of the impulse pallets 9, on which the bars 32 of the teeth 3 of the escapement wheel 2 slide and act, have a flat surface. They can also advantageously have a curved, concave or convex shape, in order to provide a progressive acceleration of the impulse or to symmetrize the impulse on each of the impulse pallets 9 with respect to the axis X2 of the regulating organ 5. Thus, one can act directly on the path of the escapement, by acting on the movements of the impulse pallets 9 in relation to the escapement wheel 2, in terms of their angular values, their speeds and the transmitted torques.
The escapement mechanism 1 also has a locking device 4, rotatably mounted about an axis of rotation X3. This locking device 4 is, as previously presented, extremely simplified and comprises only a single locking pallet 41 arranged at one end of an arcuate return lever 42 rotatable about a pivot of axis X3 between a locking position, shown in FIGS. 8, 9 and 11, and an unlocking position, shown in FIG. 13. These two positions are determined by a retaining stop 7, formed by a single pin, and a control cam 8 arranged integral in rotation with said regulating organ 5. The pivoting of the return lever 42 in rotation about the axis X3 between the locking position and the unlocking position is controlled by the regulating organ 5 via an unlocking pallet 6 acting on a complementary unlocking organ 44 integral with the lever 42 on the one hand, and the control cam 8 acting directly on the return lever 42 on the other hand.

In the embodiment shown in FIGS. 8 to 10, the control pallet 6 is a pallet 61 attached to a plate 62 which is fixed to the regulating organ 5 and arranged on the upper impulse plate 91. The unlocking pallet 61 and its plate 62 are thus mobile in a plane P3 parallel to and distinct from the plane P1 of the escapement wheel 2 and the planes P2′, P2″ of the impulse plates 91, 92. The control pallet 6 is arranged to trigger the unlocking of the locking device 4 from its locking position, i.e. more specifically to release the locking pallet 41 from its engagement against a tooth 3 of the escapement wheel in order to allow the transmission by the latter of an impulse on one of the impulse pallets 9 at each alternation of the regulating organ 5. For this purpose, the unlocking pallet 6 cooperates with a complementary unlocking organ 44 which is formed by an unlocking arm 441 which is integral in rotation with the return lever 42 and which, in this example, is driven on the pivot point of the return lever 42.
The unlocking arm 441 has an unlocking tooth 442 at one free end, on the sides of which the unlocking pallet 61 bears in the unlocking position prior to the impulse, pushing the unlocking arm 441 so as to swivel the return lever 42 about its axis X3 counter-clockwise (according to the convention of the figures). This rotation of the lever 42 to unlock the locking pallet 41 from the escapement wheel is, however, secured by the control cam 8, which controls the rotation of the return lever 42 at a free end 421 of the latter, opposite the locking pallet 41 and forming a cam follower. For this purpose, the control cam 8 is formed by a shaft of axis X2 integral with and/or forming part of the axis or pivot of the regulating organ 5, said shaft being embedded between the upper 91 and lower 92 impulse plates. The cylindrical peripheral surface 81 of this shaft forms a cam surface in which an unlocking notch 82 is formed.

The return lever 42 is so arranged in relation to the escapement wheel 2 and the regulating organ 5 that, in the locking position, it rests against the stop 7 and its end 421 is positioned opposite, but not touching, the cam surface 81, the locking pallet 41 resting via its locking plane 41r against a tooth 3 of the escapement wheel. Thus, in the event of impacts, said end 421 comes into contact with said cam surface 81, preventing rotation of the return lever 42 and thus any unlocking of the locking pallet 41. Such an unlocking is only permitted in the unlocking position, in which the cam follower 421 falls into the cam notch 82 simultaneously with the pushing of the unlocking pallet 61 onto the tooth 442 of the unlocking arm 441. This drop of the cam follower 421 into the notch releases the return lever 42, which rotates under the thrust undergone by the arm 441, releasing the locking pallet 41. The escapement wheel 2 rotates and gives an impulse to the regulating organ 5 via an impulse bar 32 on an impulse pallet 9.
During the rotation of the regulating organ 5 under the impulse of the escapement wheel 2, the cam follower 421 is pushed back by the cam surface 81 while the regulating organ begins its free alternation, which causes the return lever 42 to rotate clockwise, bringing the locking pallet 41 back into the path C of the escapement wheel, thus providing the stop necessary for the locking phase, and, under the effect of the draw force, forcing the return lever 42 against the retaining stop 7. The unlocking and locking phases are thus easily controlled and secured by the control cam 8, without any risk of unintentional release in the event of shocks and out of impulses, even at high oscillation frequencies of the regulating organ 5.

FIGS. 11 to 13 show the escapement mechanism 1 in its second embodiment with an alternative unlocking and safety mechanism. In this embodiment, the control cam 8 is formed by a ring with a circular inner wall or rim forming a cam surface 81 in which a cam notch 82 is recessed. In the example shown, the ring 8 can be formed from a circle concentric to the axis X2 and attached to the regulating organ 5 by gluing or other means of attachment to the upper impulse plate 91. The ring 8 can also be made of material in one piece with the impulse plate 91. The unlocking pallet 6 is then formed by a finger or feeler 61 which is fixed to the regulating organ 5 and whose free end is aligned with and partially penetrates into the notch 82 of the ring. The cam follower 421 consists of a pin or the like driven into the free end of the unlocking arm 441 and extending into said ring 8. Thus, as shown in FIG. 13 and in accordance with the operating principles previously defined with reference to FIGS. 8 to 10, the cam follower 421 falls into said cam notch 82 under the action of the unlocking pallet 61 in said unlocking position, and is pushed out of the notch 82, opposite the inner cam surface 81 of the ring, in the locking position.
In another embodiment, not shown, the cam ring 8 could also be formed by a circular groove on the upper impulse plate 91 in which the cam follower pin 421 would be housed, said groove having such a cam notch, the unlocking pallet 6 then being formed by a pin radially aligned with the cam notch with respect to the axis of rotation X2 of the regulating organ 5. Finally, as in the embodiment of FIGS. 1 to 7, the escapement mechanism 1 in the second embodiment of FIGS. 8 to 13 is also self-starting. As can be seen from FIG. 13 in particular, the return lever 42 is arranged in relation to the regulating organ 5 and the escapement wheel 2 in such a way that, in the unlocking position, and therefore just before the dead centre of the escapement, the locking pallet 41 moves away from the circumcircle C of the teeth 3 of the escapement wheel 2 at the same time as the ends of the impulse pallets 9 intersect said circumcircle C, and are thus located at a distance from the axis of rotation of the escapement wheel 2 smaller than the radius R. With this configuration, at the dead centre of the escapement 1, the impulse planes 9p of the impulse pallets 9 are located in the path C of the teeth 3 of the escapement wheel, while the locking pallet 41 is located outside this path. Consequently, the rotation of the escapement wheel 2 necessarily involves the engagement, by a bar 32, of an impulse plane 9p of an impulse pallet 9, the driving of the regulating organ 5 in rotation about its axis, and the driving of the escapement mechanism 1 according to the invention. | 24,079
11860576 | MODES OF CARRYING OUT THE INVENTION The present invention offers a detached escapement mechanism 1 of a new type, designed and arranged to utilise and combine, in a detached escapement with direct impulses, the advantages relating to reliability, simplicity of adjustment and self-starting of a Swiss anchor escapement, well known to watchmakers for several decades. A particular embodiment of such an escapement mechanism 1 is represented in FIG. 1, corresponding to the dead centre of the escapement 1. Generally speaking, the escapement mechanism 1 is structurally similar to a Robin-type escapement, but several aspects are modified, as described below. The escapement mechanism comprises an escape wheel 2 provided with pointed teeth 3 and mounted so as to be rotatable around a first axis of rotation X1. Conventionally, this escape wheel 2 is associated with an escape pinion (not shown) through which the escape wheel 2 is coupled to the finishing gear train and the drive source of a watch movement, for example a barrel spring, which transmits a driving torque to the escape wheel 2, which the latter distributes sequentially to a regulating organ 5 mounted so as to be rotatable around a second axis of rotation X2, in cooperation with an anchor 4, itself rotatable around a third axis of rotation X3, the axes of rotation X1, X2, X3 being parallel to each other. The regulating organ 5 can consist of a sprung balance, well known to watchmakers, or of any other oscillating regulating organ, for example a resonator with knives such as the one proposed by the applicant in patent application WO 2016/012281. The anchor 4 acts as a lever and comprises a plate 41 on which an entry pallet 10 and an exit pallet 11 are arranged, each presenting a locking plane 10p, 11p designed to alternately form a locking surface in abutment for the teeth 3 of the escape wheel 2 in two end positions in rotation of the anchor 4 around its axis X3, called locking positions.
In order to enable the pivoting of the anchor 4 from one locking position to the next, said anchor comprises a fork 43 arranged at the end of an arm 42 extending from the plate 41 along a straight line linking the axis of rotation X3 of the anchor 4 with the axis of rotation X2 of the regulating organ 5. The fork 43 comprises two horns 44, 45 separated by a notch 46 and devoid of guard pin, finger or analogous anti-overbanking safety element. These horns 44, 45 are symmetrical, at the dead centre, in relation to the straight line linking the axis of rotation X3 and the axis X2 of the regulator, which also passes through the centre of the notch 46. Thanks to this fork 43, the anchor 4 cooperates, more particularly at the level of its notch 46, with a pin 6 mounted together with the regulating organ 5, for example on a plate coaxial with said regulating organ 5, with which a direct impulse pallet 9 is further joined, which is propelled, at each alternation of the regulator 5, by a tooth 3 of the escape wheel 2. The travel of the anchor is limited by two pins 7, 8, which limit the travel angle λ of the anchor 4 to between 5° and 6° about the axis X3 of the anchor 4. According to the invention, and as shown in detail in FIGS. 2 and 3, the entry pallet 10 and the exit pallet 11, or in any case at least the exit pallet 11, comprise, in the extension of their respective locking plane 10p, 11p, an inclined indirect-impulse plane 10i, 11i arranged to transmit an indirect micro-impulse to the regulating organ 5 when said pallets 10, 11 cross the path C defined by the ends of the teeth 3 of the rotating escape wheel 2, during the rotation of the anchor 4 on its axis X3 between the two locking positions. These inclined planes 10i, 11i are advantageously formed on the locking pallets 10, 11 so that a so-called micro-impulse angle β, of a value of between 0.5° and 5°, more particularly between 1° and 2° in the example represented, preferably around 1.5°, is formed between their ends and the axis of rotation X3 of the anchor 4.
In addition, the indirect impulse planes 10i, 11i are arranged in continuity with the locking plane 10p, 11p, with an inclination at an acute angle i, in practice in the range of 30° to 70°, preferably between 40° and 60°, more preferably around 50°, in relation to said locking plane 10p, 11p. Thus, the indirect micro-impulse planes 10i, 11i provide a local break or recess d, in the extension and at the end of the locking planes 10p, 11p, whose length is determined according to the micro-impulse angle β, calculated and adapted to ensure the automatic start of a regulating organ associated with the escapement 1 under the driving force of the gear train after the stoppage of said regulating organ. Advantageously, the locking planes 10p, 11p of the pallets 10, 11 and of the teeth 3 are such that they rest plane on plane in a locking position of the escapement. In addition, the centre distance between the anchor pivot X3 and the pivot X1 of the escape wheel 2 is adjusted so that, at the dead centre of the escapement, the free end of the teeth 3 of the escape wheel bears by its locking plane substantially on the recess point d, in order to guarantee that the mere driving torque induces a sufficient draw to tip said end of the tooth 3 onto the micro-impulse plane 10i, 11i after the stoppage of the regulating organ 5. In addition, the entry and exit pallets 10, 11 of the anchor 4 are advantageously adjusted on the plate 41 in such a way that the locking and unlocking functions of the anchor in relation to the escape wheel 2, as well as the travel security of the latter in relation to said pallets, are optimised.
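For orientation only, the recess length d at the end of the locking plane grows with the micro-impulse angle β. A minimal geometric sketch in Python, assuming (hypothetically, since the description quotes no radius) that the pallet end lies at a distance r from the anchor axis X3, and using the small-angle arc approximation d ≈ r·β:

```python
import math

def recess_length(beta_deg: float, radius_mm: float) -> float:
    """Approximate length of the recess d swept at radius r (mm) by the
    micro-impulse angle beta (deg): small-angle arc, d ~ r * beta(rad)."""
    return radius_mm * math.radians(beta_deg)

# With beta = 1.5 deg and an assumed (hypothetical) pallet radius of 2.0 mm:
d = recess_length(1.5, 2.0)
print(f"{d:.4f} mm")  # ~0.0524 mm
```

A larger β therefore lengthens the recess, which is what lets the driving torque alone tip the tooth end onto the micro-impulse plane after a stoppage.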
With reference to FIG. 2, the pallets 10, 11 are therefore arranged so that, in the locking positions, a first pallet is in contact with a tooth 3 of the escape wheel on its locking plane over a distance defining a locking angle α of around 2° at the axis of rotation X3 of the anchor, while the other pallet is moved away from the circle C of the escape wheel by a distance forming a safety angle ε of around 1.5° at the axis of rotation X3 for the passage of the teeth 3. In addition, the pallets 10, 11 ensure by their configuration and arrangement a slight draw, their inclined planes 10i, 11i offering a self-starting capability to the mechanism of the invention and a micro-impulse at each alternation, a distinctive feature of a Swiss anchor escapement. In concrete terms, for the mechanism 1 to start by itself under the torque of the escape wheel 2, it is required that, from the locking position, the travel up to the dead centre (thus the half-angle of the total anchor movement) positions an inclined plane, and not a locking area, in the path C. In addition, the passage from one pallet to the other also requires that the pallet-escapement safety angle be smaller than the half-angle of the travel of the anchor 4. Thus, an essential condition of the mechanism of the invention is defined in the following manner: λ/2 − ε ≤ β. Thus, a combined or hybrid escapement, between a Swiss anchor escapement and a Robin-type direct escapement, is actually obtained.
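The self-start condition λ/2 − ε ≤ β can be checked numerically with the values quoted in the description (anchor travel λ of 5° to 6°, safety angle ε around 1.5°, micro-impulse angle β around 1.5°); this is only an illustrative sketch, not part of the patent:

```python
# Illustrative check of the self-start condition lambda/2 - eps <= beta,
# all angles in degrees, using values quoted in the description.

def self_starting(travel_lambda: float, safety_eps: float, micro_beta: float) -> bool:
    """True if, from a locking position, the anchor's half-travel places a
    micro-impulse inclined plane (and not a locking area) in the path C."""
    return travel_lambda / 2 - safety_eps <= micro_beta

# Anchor travel limited to 5-6 deg, safety angle ~1.5 deg, beta ~1.5 deg:
for lam in (5.0, 6.0):
    print(lam, self_starting(lam, safety_eps=1.5, micro_beta=1.5))
# 5.0 -> 2.5 - 1.5 = 1.0 <= 1.5 : True
# 6.0 -> 3.0 - 1.5 = 1.5 <= 1.5 : True
```

Both ends of the quoted travel range satisfy the condition, consistent with the claimed self-starting behaviour.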
In another embodiment of the invention, represented in FIG. 22, the indirect impulse inclined plane can be provided not on one of the entry pallets 10 or exit pallets 11 of the anchor 4 of the escapement mechanism 1 of the Swiss anchor escapement type, but directly at the free radial end of the teeth 3, in the form of an inclined plane 3i similar to the inclined planes 10i, 11i of the pallets 10, 11 of FIGS. 2 and 3, in the manner of a pin escapement, and therefore forming a micro-impulse angle β, of a value of between 0.5° and 5°, more particularly between 1° and 2° in the example represented, in relation to the axis X3 of the anchor. The micro-impulse inclined plane 3i is provided at the free end of the tooth 3, in continuity with its locking plane 3p, but is inclined at an acute angle i, in practice in the range of 30° to 70° in relation to said locking plane 3p, preferably between 40° and 60°, more preferably around 50°. Thus, the indirect micro-impulse planes 3i provide a local break or recess d, in the extension and at the end of the locking plane 3p, whose length is determined according to the micro-impulse angle β, calculated and adapted to ensure the automatic start of a regulating organ associated with the escapement 1 under the driving force of the gear train after the stoppage of said regulating organ. Advantageously, the locking planes 10p, 11p of the pallets 10, 11 and the locking planes 3p of the teeth are such that they rest plane on plane in a locking position of the escapement. In addition, the centre distance between the anchor pivot X3 and the pivot X1 of the escape wheel 2 is adjusted so that, at the dead centre of the escapement, the free end of the teeth 3 of the escape wheel rests by its locking plane substantially on the recess point d, in order to guarantee that the mere driving torque induces a sufficient draw of the pallets 10, 11 to tip said end of the tooth 3 onto the micro-impulse plane 3i after the stoppage of the regulating organ 5.
The pallet 9 may also have an end shape adapted to receive and cooperate with the inclined plane 3i of the tooth 3 during the direct impulse, in order to transmit a substantially constant torque. To avoid the overbanking or butting risks observed with guard-pin-type safeties such as in the Robin escapement, the escapement mechanism 1 of the invention offers an anti-overbanking safety device formed jointly on the fork 43 of the anchor 4 and on the regulating organ 5, more particularly by a flange made integral with the regulating organ 5. As previously indicated, the fork 43 of the anchor 4 is devoid of a guard pin between its horns 44, 45. Indeed, the small angular path λ travelled by the anchor 4 does not allow, as initially provided in the Robin escapement, the implementation of such a guard pin cooperating with a safety roller in the usual manner of a Swiss anchor escapement, for example; the difficulty of adjusting the escapement to avoid risks of butting would be too great to make the escapement viable at an industrial level. Instead, the fork 43 of the anchor 4 of the mechanism 1 according to the invention comprises two horns 44, 45, symmetrical in relation to a straight line passing through the axis of rotation X3 of the anchor and the centre of the notch 46, which extend from said notch 46 along an arc of a circle having a radius of curvature R1, advantageously between 0.5 and 1.5 cm or, in a more empirical manner, slightly greater than the turning radius of the pin 6 integral with the regulating organ 5 in relation to its axis of rotation X2. Said horns 44, 45 further comprise, at their free ends, a cylindrical stud or finger 44e, 45e protruding perpendicularly to the median plane of the anchor 4, in which the plate 41, the arm 42 and the fork 43 extend. The studs could, however, also have another geometry.
The horns 44, 45 thus form a guiding body for said pin 6 during the locking phases, over the total angular amplitude of rotation of the regulating organ 5 if this amplitude is reduced and the oscillation frequency is high, or over almost the entire additional arc of rotation of said regulating organ 5 if it is of the classical sprung-balance type. In a manner complementary to the horns 44, 45, the anti-overbanking device also comprises a circular flange 12 extending on both sides of the pin 6 on the regulating organ 5. Said flange 12 is centred on the axis of rotation X2 of the regulating organ. The pin 6 is thus integrated in said flange so that it moves, normally without contact, along an internal face of said horns during the locking phases of the escape wheel 2. The total length of the flange 12 is preferably substantially equal to the total distance between the studs 44e, 45e of said horns 44, 45. Thus, in the case of an impact on the mechanism 1 generating a movement of the anchor 4 prone to overbanking during the additional arc of the regulating organ, a stud 44e or 45e passes onto the lower surface of the flange 12, but this does not entail a stoppage, because the regulating organ 5 continues to rotate beyond this internal face of the flange 12 against the stud 44e, 45e, as represented in FIGS. 16, 17 and in FIGS. 19, 20 on two distinct alternations. In addition, the returns (FIGS. 18, 21) of the flange 12 and of the pin 6 between the horns, in a normal operational configuration, are ensured by the free ends of the flange 12, which are advantageously bevelled in a vertical plane in order to allow said studs 44e, 45e to come back between the horns 44, 45 after having possibly tipped to the outside of the horns during the previous alternation. This arrangement and conformation of the horns 44, 45 and of the flange 12 therefore offer optimal security against possible overbanking, without adversely affecting the operation of the escapement mechanism 1.
FIGS. 4 to 15 show the different operational phases of the escapement mechanism of the invention over two successive alternations of the regulating organ 5, the respective rotation directions of the escape wheel 2, of the anchor 4 and of said regulating organ being represented by the arrows F1, F2, F3 on a first alternation and the arrows F4, F5, F6 on a second alternation, respectively. FIG. 4 shows the mechanism 1 in an unlocking position. The pin 6 is in the notch 46 of the fork 43 of the anchor 4 and, under the rotating action of the regulating organ 5 in the direction F3, it leaves its rest against the pin 8 in the direction F2, passing through the dead centre halfway (FIG. 1) to arrive (FIG. 5) in an indirect micro-impulse position via the entry pallet 10. Then, the regulating organ 5 receives (FIG. 6) its direct impulse at the pallet 9 from the escape wheel, after which (FIG. 7) the fork drops (FIG. 8) against the pin 7 and the anchor 4 is in a locking position, a tooth 3 of the escape wheel 2 resting on the locking plane of the exit pallet 11. The regulating organ 5 then travels along its additional arc (FIG. 9) and the anchor 4 undergoes a draw, via its exit pallet 11, against the pin 7. The regulating organ 5 then comes back in the opposite direction F6 towards the unlocking position (FIG. 10) on the second alternation, thus unlocking the anchor 4 from the pin 7, the anchor pivoting in the direction F5. Then, passing through the dead centre (FIG. 11), it undergoes a second indirect micro-impulse at the exit pallet 11 (FIG. 12), whereas the escape wheel is freed in rotation in the direction F4, after which the anchor 4 drops onto the pin 8 (FIG. 13), then reaching the second locking position (FIG. 14) where a tooth 3 of the escape wheel comes into abutment on the locking plane of the entry pallet 10. The regulating organ 5 then continues its travel in the direction F6 by carrying out a "coup perdu" (a vibration with no impulse) until the end of its second alternation, to then come back in the direction F1 towards the unlocking position of FIG. 4.
The escapement mechanism 1 of the invention therefore offers a direct escapement with a hybrid operation, having indirect micro-impulses and a slight draw as in a Swiss anchor escapement, in a Robin-type direct escapement configuration, without essentially affecting its performance parameters and advantages, but simplifying it and increasing its reliability, while offering optimal operational security. | 15,043
11860577 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The invention relates to a display indicator for a timepiece or for a scientific apparatus. Patent documents EP2863274 and EP3159751, filed by MONTRES BREGUET SA, disclose a timepiece display using a resilient hand, and the features thereof can be directly used to produce a display mechanism according to the present invention. The invention is described here in the particular, but non-limiting, case of a rotating indicator, and particularly a resilient hand. However, the principle is applicable to an indicator having a non-circular trajectory of mobility, for example with a linear cursor, or suchlike, particularly in space. The invention is more precisely described for this application of a flexible indicator to a hand, but it is applicable to other planar or three-dimensional indicator shapes. Similarly, drive means comprising gear trains are described hereinbelow, but the invention is equally applicable to analogue display means for an electronic or electrical apparatus, a quartz watch or other device. The principle of the invention is to produce a display mechanism wherein at least one indicator, particularly a hand, for example the minute hand for a watch, has a variable length, or a variable radial extension, or a variable shape. The invention further relates to a variable-geometry timepiece display mechanism 10 that comprises at least one resilient hand 1. This resilient hand 1 comprises a first drive pipe 2 integral with at least one one-piece flexible strip 3, and with a single flexible strip 3 in the specific, non-limiting case shown in the figures. The display mechanism 10 comprises an input wheel set 71, which is arranged so as to be driven such that it pivots about an input axis by a movement 20, and which defines an input angle relative to a reference direction.
The resilient hand 1 comprises a first drive pipe 2 integral with a first end of a flexible strip 3, and a second drive pipe 4 integral with another end of this flexible strip 3, and the resilient hand 1 comprises a display index or tip which, in an unstressed free state of this resilient hand 1, wherein both the first pipe 2 and the second pipe 4 are not subjected to any stress and are remote from one another, is remote from the first pipe 2 and from the second pipe 4. The operating position of this resilient hand 1 is a stressed position where the first pipe 2 and the second pipe 4 are coaxial to one another about an output axis D. The display mechanism 10 comprises first means 11 for driving the first pipe 2 about the output axis D, and second means 13 for driving the second pipe 4 about this output axis D. These first drive means 11 and second drive means 13 are arranged so as to deform the flexible strip 3, by varying the angular position of the second pipe 4 relative to the angular position of the first pipe 2 about the output axis D, and so as to vary the radial position of the display index or tip relative to the output axis D. In one specific embodiment, the resilient hand 1, and more particularly the flexible strip 3 thereof, comprises a plurality of flexible segments 5, 5A, 5B that are joined end-to-end at at least one tip 6 arranged so as to form such an index, and preferably two successive flexible segments are joined by such a tip. In the case shown in the figures, a first flexible segment 5A of the flexible strip 3 extends between the first pipe 2 and a first tip 6. More particularly, the invention is shown in the most common case whereby the hand comprises two flexible segments 5 joined by a single tip 6, which is used for the display.
The display mechanism 10 comprises first means 11 for driving the first pipe 2 about an output axis D, and comprises second means 12 for stressing at least the first flexible segment 5: these second means 12 are arranged so as to vary the position of at least the first tip 6 relative to the output axis D. The first tip 6 is thus at a variable distance from the first pipe 2, as a function of the forces applied to the flexible strip 3 by the second stressing means 12. FIGS. 1 to 5 show a specific case of such a mechanism, with a resilient hand 1 comprising a single tip 6, which follows, over an upper part of the travel thereof, a circle that is off-centred relative to the output axis, and over a lower part of the travel thereof, another circle centred about the output axis. It goes without saying that this is a specific case, and the mechanism 10, for the same oval-shaped watch, can also be dimensioned such that it follows the case contour or any other contour showcasing the product. More particularly, the first drive means 11 and/or the second stressing means 12, and in particular the second drive means 13 comprised therein, comprise a first shaped gear train 111 and/or respectively a second shaped gear train 131, which is arranged, or which are arranged, so as to accelerate, stabilise the speed of, or slow at least the first pipe 2 and/or the second pipe 4 over a part of the angular travel thereof. More particularly, the first drive means 11 and the second stressing means 12, and in particular the second drive means 13 comprised therein, comprise at least one first shaped gear train 111 and, respectively, at least one second shaped gear train 131, which are arranged so as to accelerate, stabilise the speed of, or slow the first pipe 2, and respectively the second pipe 4, over at least part of the angular travel of the first pipe 2, and respectively of the second pipe 4.
In one specific embodiment, shown in FIGS. 1 to 5, the first drive means 11 and the second stressing means 12 are arranged so as to drive the resilient hand 1 over the entirety of the angular travel thereof about the output axis D, and provide it, in projection on a display plane P or on a dial, and at different angular positions of the resilient hand 1, with at least one first shape in which the flexible segments 5: 5A, 5B comprised in the flexible strip 3 do not cross paths outside of the first pipe 2, and at least one second shape in which the flexible segments 5: 5A, 5B comprised in the flexible strip 3 cross paths outside of the first pipe 2. In the specific yet non-limiting case shown in the figures, this first shape is an almond shape, and this second shape is a heart shape. In another alternative embodiment, wherein the resilient hand 1 travels the surface area defined by an ellipse, this hand can successively take, over the revolution thereof, an alternation of first shapes and second shapes, for example an almond shape at each of the two ends of the major axis of the ellipse, and a heart shape at each of the two ends of the minor axis of the ellipse. More particularly, and as disclosed in the patent documents EP2863274 and EP3159751, the resilient hand 1 comprises a second drive pipe 4 also integral with the flexible strip 3. The second stressing means 12 thus comprise second drive means 13 of the second pipe 4 in an assembled and stressed state of the resilient hand 1. In this assembled state of the resilient hand, both the first pipe 2 is, advantageously but not necessarily in a prestressed operating state, driven by the first drive means 11, and the second pipe 4 is, advantageously but not necessarily in a prestressed operating state, driven by the second drive means 13.
Additionally, at least one of the tips 6 is, in a non-stressed free state of the resilient hand 1 in which both the first pipe 2 and the second pipe 4 are not subjected to any stress, remote from the first pipe 2 and from the second pipe 4, which first pipe 2 and second pipe 4 are spaced apart from one another in this free state of the resilient hand 1. In the specific case shown in the figures, wherein the flexible strip 3 only comprises one first flexible segment 5A and one second flexible segment 5B, only one such tip 6 joining them is present. Thus, more particularly, this first flexible segment 5A bears the first pipe 2 at a first end 52, and this second flexible segment 5B, joined to the first flexible segment 5A, bears the second pipe 4 at a second end 54. Moreover, in the free state of the resilient hand 1, the first end 52 and the second end 54 are remote from one another, or form a non-zero angle with one another from the tip 6 at which the first flexible segment 5A and the second flexible segment 5B are joined. More particularly, the output of the second drive means 13 of the second pipe 4 is coaxial to the output of the first drive means 11 of the first pipe 2 in the assembled state of the resilient hand 1. However, this arrangement is not compulsory, in particular in the case of a retrograde display, where the axes of the first pipe 2 and of the second pipe 4 can be different. According to the invention, the first drive means 11 and the second drive means 13 comprise an accelerator or decelerator device, which is arranged such that it accelerates, stabilises the speed of, or slows down at least the first pipe 2 and/or said second pipe 4 over at least part of the angular travel thereof.
In one alternative embodiment, the first pipe 2 is advanced or delayed relative to the value of the input angle, symmetrically to the delay or advance of the second pipe 4 relative to the input angle, such that the first tip 6 always displays, relative to the output axis D and the reference, an angle that is equal to the input angle. In another alternative embodiment, the first pipe 2 is advanced or delayed relative to the value of the input angle by an amount which is, as an absolute value, different from the delay or advance of the second pipe 4 relative to the input angle, such that the first tip 6 displays, relative to the output axis D and the reference, an angle that is variable relative to the input angle throughout the length of the travel thereof. This particular advance and/or delay arrangement relative to the input pipe allows the hand to point to the time (or another display) only on the dial, and in particular for a non-regular display, for example a square trajectory where the time is divided into twelve equally-spaced segments over the square trajectory, which cannot be managed in the same manner as twelve indexes separated by 30°. In yet another alternative embodiment, the hand 1 is arranged such that it travels a total non-retrograde path and, over the total path, the average speed of the first pipe 2 is equal to the average speed of said second pipe 4. Numerous configurations can be considered:
—if the arms of the hand are symmetric, a symmetric advance and delay are required, such that the hand points to the right time;
—if the arms of the hand are asymmetric, an asymmetric advance and delay are required, such that the hand points to the right time;
—this works on the assumption that the hand points to the right time. A graduation can also be obtained which is not separated every 30°, as explained hereinabove.
In one specific embodiment, which will be described in detail hereinbelow, the accelerator or decelerator device comprises a first shaped gear train 111 and/or respectively a second shaped gear train 131. According to the invention, and as shown in FIG. 29, the accelerator or decelerator device comprises a device with a first differential gear 912 on the drive gear train of the first pipe 2 and/or a second differential gear 914 on the drive gear train of the second pipe, and at least one cam 902, 904 forming an input of such a differential gear 912, 914. In yet another embodiment, the accelerator or decelerator device comprises single gear trains suitably arranged so as to perform the required accelerations or decelerations. More particularly, and as shown in FIGS. 11 to 20, the first drive means 11 and the second drive means 13 comprise, respectively, at least one first shaped gear train 111 and at least one second shaped gear train 131, which are each arranged, or which are arranged, so as to accelerate, stabilise the speed of, or slow the first pipe 2, and respectively the second pipe 4, over a part of the angular travel thereof. The term "shaped gear train" is understood herein to mean that at least one wheel of the gear train is not axisymmetric; more particularly, at least two counteracting wheels of this gear train are not axisymmetric, and are arranged so as to permanently gear with one another with minimal clearance and a constant centre-to-centre distance. More particularly, the first shaped gear train 111 and the second shaped gear train 131 are arranged so as to accelerate, or respectively brake, the first pipe 2, and to brake, or respectively accelerate, the second pipe 4, over at least part of the angular travel of the resilient hand 1, or over only part of the angular travel of the resilient hand 1. In other words, one of the pipes procures an angular advance relative to the input angle, whereas the other pipe procures an angular delay relative to the input angle.
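The differential-gear variant can be pictured as adding a cam-defined correction to the input rotation: each differential sums the movement's input angle and the contribution read from its cam. A schematic sketch, where the cam profile is chosen arbitrarily for illustration (the real cams 902, 904 would encode the desired advance/delay law):

```python
import math

def cam_profile(input_deg: float) -> float:
    """Hypothetical cam contribution (deg) fed to the differential;
    illustrative sinusoidal law only, not taken from the patent."""
    return 60.0 * math.sin(math.radians(input_deg))

def pipe_angle(input_deg: float, sign: int) -> float:
    """Output of differential 912 (sign=+1, first pipe 2) or 914
    (sign=-1, second pipe 4): input rotation plus cam correction."""
    return input_deg + sign * cam_profile(input_deg)

# One pipe advances while the other lags by the same amount:
print(pipe_angle(120.0, +1), pipe_angle(120.0, -1))
```

With opposite signs on the two differentials, the mean of the two pipe angles stays equal to the input angle, which is the symmetric configuration discussed above.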
Thus, in one specific embodiment of the invention, the first pipe 2 is advanced or delayed relative to the value of said input angle, symmetrically to the delay or advance of the second pipe 4 relative to the same input angle, such that the first tip 6 always displays, relative to the output axis D and the reference direction, an angle that is equal to the input angle. Thus, considering the embodiment according to FIGS. 3 to 5, with a total angular travel CAT of 360°: from the position shown in FIG. 3, where the tip 6 of the hand 1 is in the twelve o'clock position, moving into the four o'clock position shown in FIG. 4 by rotation in the clockwise direction, the second pipe 4 of the second flexible segment 5B has slowed by 60°, and the first pipe 2 of the first flexible segment 5A has accelerated by 60°. More specifically, neither of the flexible segments 5 of the hand 1 indicates the time alone; it is only the resultant of the rotation of the two pipes that determines the time information indicated by the tip 6 of the hand 1. Between the position in FIG. 4 and the eight o'clock position in FIG. 5, the pipes remain synchronous with the offset therebetween. In order to move from the eight o'clock position in FIG. 5 to the twelve o'clock position in FIG. 3, the opposite takes place: the second pipe 4 of the second flexible segment 5B has accelerated by 60°, and the first pipe 2 of the first flexible segment 5A has slowed by 60°. The invention is shown in the figures for the specific case of a continuous horological display showing a full revolution; it is understood that the invention can be applied to any display, in particular a retrograde display.
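The symmetric advance/delay described above can be sketched numerically: if one pipe leads the input angle by an offset Δ and the other lags by the same Δ, the tip of a symmetric hand still indicates the input angle. A minimal illustration, assuming (as a simplification of the real elastic geometry) that the tip direction is simply the mean of the two pipe angles:

```python
def tip_angle(input_deg: float, offset_deg: float) -> float:
    """Displayed angle of the tip for a symmetric resilient hand whose
    first pipe is advanced by +offset and second pipe delayed by -offset
    relative to the input angle (simplified bisector model)."""
    first_pipe = input_deg + offset_deg   # accelerated pipe 2
    second_pipe = input_deg - offset_deg  # slowed pipe 4
    return (first_pipe + second_pipe) / 2.0

# Moving from twelve o'clock (0 deg) to four o'clock (120 deg) with the
# 60 deg symmetric offset quoted in the text, the tip still reads 120 deg:
print(tip_angle(120.0, 60.0))  # 120.0
```

The offset Δ changes only the shape (almond, heart, etc.) of the hand, while the mean of the two pipe angles carries the displayed time.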
More particularly, the first shaped gear train 111 and the second shaped gear train 131 are arranged such that they symmetrically control the first pipe 2 and the second pipe 4, such that the first flexible segment 5A and the second flexible segment 5B are symmetrical relative to a radial originating from the output axis D and passing through the tip 6 at which the first flexible segment 5A and the second flexible segment 5B are joined, over at least part of the angular travel of the resilient hand 1. This configuration is not limiting; however, it has the advantage of subjecting the first flexible segment 5A and the second flexible segment 5B to symmetric stresses. More particularly, the first shaped gear train 111 and the second shaped gear train 131 each comprise at least one pair of wheels arranged such that they engage by gearing with one another and whose geometric supports, i.e. the primitive curves of the toothings, are not axisymmetric. Also more particularly, the first drive means 11 and/or the second drive means 13 comprise at least one first gear train stage 115, 135, and one second gear train stage 116, 136, which are arranged such that each controls a part of the shape transformation of the resilient hand 1 over at least part of the angular travel thereof, with distribution per stage. This distribution allows a part of the deformation to be distributed over each of the stages, which conserves, in each shaped gear train, wheels whose geometry is close to a circular geometry, so as to allow for suitable gearing of the toothings and prevent the wear thereof. More specifically, the shaped gear trains are not circular; however, they must not be excessively deformed, i.e. their shape must allow for gearing without arcing, and without too high a sensitivity to variations in the centre-to-centre distance and manufacturing tolerances.
This can thus prevent interference defects that cut teeth would create if the primitive curves of the toothings deviated too far from the circular shape. A compromise must thus be found between a shape that is sufficiently non-circular so as to actuate the hand, and a shape that is resistant to wear. Distribution over a plurality of stages allows these conditions to be met: each stage takes part in the deformation of the hand, however the primitive curves thereof remain close to a circular shape; this is referred to as distribution per stage, whereby the overall cumulation of these staged gear trains procures the desired deformation of the hand. The figures show a non-limiting alternative embodiment having two gear train stages, however this number of two is not limiting, and the number of stages is only limited by the overall thickness of the movement and the efficiency loss due to friction. More particularly, both the first gear train stage115,135and the second gear train stage116,136respectively comprise a first shaped gear train111and a second shaped gear train131. FIGS.11to20show certain specific arrangements of such shaped gear trains. FIG.11is a schematic representation of the functioning of such a mechanism10, wherein the arrows symbolise the transmission of the movement to the pipes from a power take-off21at the level of a horological movement20, which can be either mechanical or electronic, symbolised in the bottom part of the figure, and which is arranged so as to drive, via the same input wheel set71, two gear trains: —a first gear train comprises idler wheels79and80about a first axis DA and wheels73,78and81about the major pivot axis D for driving the first pipe2, —and a second gear train comprises idler wheels74,75about a second axis DB and a wheel76about the major pivot axis D for driving the second pipe4. 
It should be noted that the entire gear train is tensioned as a result of the play compensation of the resilient hand due to the prestressing thereof. FIG.11also shows a conventional hand101coaxial to the resilient hand1for displaying other information, in particular time information. FIGS.12to14show more specifically a display mechanism10according to an alternative embodiment of the invention for displaying minutes with the resilient hand1. In this alternative embodiment, an input wheel set71is arranged so as to engage with an output wheel set21of the horological movement20, according to an input axis D0, and is guided on a fixed tube70. This input wheel set71, which is a cannon-pinion, is arranged so as to drive, directly or via indenting, by friction allowing the time to be set, a driving cannon-pinion72which is coaxial thereto. This driving cannon-pinion72is axisymmetric, and drives a first shaped wheel78, which gears with a second complementary shaped wheel79mounted such that it idles (with play compensation) about the first axis DA, and which is pivotably integral with a third shaped wheel80, which gears with a fourth complementary shaped idler wheel81, which in this case pivots about the output axis D of the pipes, and which comprises a cannon-pinion82for attaching the first pipe2. The same driving cannon-pinion72drives a fifth shaped wheel73, which gears with a sixth complementary shaped wheel74mounted such that it idles about the second axis DB, and which is pivotably integral with a seventh shaped wheel75, which gears with an eighth complementary shaped idler wheel76, which in this case pivots about the output axis D of the pipes, and is integral with a shaft77on which the second pipe4is attached. 
Each shaped wheel comprises an angular marking so as to correctly ensure indexing of the shaped gear train, as shown inFIG.26which illustrates a shaped gear train comprising two non-axisymmetric wheels75and76, and comprising markings275and276for the indexing thereof relative to one another, in addition to oblongs175and176easing the installation thereof, which in particular allows them to be made integral with one another and indexed by means of a pin or similar element. The driving cannon-pinion72further drives a ninth wheel91comprised in a wheel set pivoting about an hour axis DH, which comprises a pinion92driving the wheel93of an hour cannon-pinion94receiving the hour hand100. FIGS.15and16show an alternative embodiment, wherein an input wheel set, arranged such that it engages with an output wheel set of the horological movement, pivots about an input axis D0which in this case is separate from the output axis D, and where each gear train comprises two stages so as to distribute the angular travel of each stage, and where each stage comprises a shaped gear train:a first gear train comprises a first stage with a first shaped wheel101pivoting about the input axis D0, which gears with a second complementary shaped wheel102, mounted such that it idles about a first minor axis D1. This second complementary shaped wheel102is pivotably integral with a third shaped wheel103, which gears with a fourth complementary shaped wheel104, mounted such that it pivots about the output axis D, and designed for attaching one of the two pipes;a second gear train, illustrated separately inFIG.16, comprises a first stage with a first shaped wheel201pivoting about the input axis D0, which gears with a second complementary shaped wheel202, mounted such that it idles about a second minor axis D2. 
This second complementary shaped wheel202is pivotably integral with a third shaped wheel203, which gears with a fourth complementary shaped wheel204, mounted such that it pivots about the output axis D, and designed for attaching the other pipe. The case of a construction with radial symmetry of movement between the two flexible segments of the same hand1uses two similar sets of identical shaped gear trains, one mounted the right way up, the other upside down. FIGS.17to20show the construction of the shaped gear trains, which begins with the choice of a space rule for varying the radial length of the hand as a function of the angle of deviation between the two pipes thereof,FIG.17showing the output angle as a function of the input angle for one of the two pipes of the hand. This space rule allows the primitive profiles of the toothings to be calculated, according to the chosen centre-to-centre distance for the production thereof, as shown inFIG.18. The calculation of the driving toothing is then carried out as a function of the defined number of teeth, and the chosen profile type, in particular involute- or sinusoidal-type toothing, according toFIG.19, then the calculation of the driven toothing, according toFIG.20, allows the two wheels to be cut to the profile thus defined; FIGS.17and20show the space rule, with three successive areas of radial extension of the hand, of stabilisation of the length of the hand where the ratio between the output angle of one of the two pipes and the input angle in the mechanism is substantially constant, and of shortening of the hand. FIG.22shows the superimposition of the three states shown inFIGS.3to5, wherein the arrows illustrate a contraction phase CO of the hand1between the twelve o'clock and four o'clock positions of the tip6thereof, a stability phase ST at constant elongation between the four o'clock and eight o'clock positions, and a relaxation phase DE between the eight o'clock and twelve o'clock positions. 
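The step from the space rule to the primitive (pitch) curves can be sketched with the standard non-circular gearing condition: the two curves roll on one another without slipping, so the instantaneous radii satisfy r_in/r_out = dφ_out/dφ_in while their sum equals the chosen centre-to-centre distance. A Python sketch under these assumptions (the sinusoidal space rule below is invented for illustration and is not the rule of FIG. 17):

```python
import math

def pitch_radii(transmission_ratio, centre_distance):
    """Split the centre-to-centre distance into two instantaneous pitch
    radii: rolling without slipping requires r_in / r_out to equal the
    instantaneous ratio i = dphi_out / dphi_in."""
    i = transmission_ratio
    r_in = centre_distance * i / (1.0 + i)
    r_out = centre_distance / (1.0 + i)
    return r_in, r_out

def pitch_curves(space_rule_derivative, centre_distance, n=360):
    """Sample both primitive curves (polar radii) over one input turn,
    given the derivative of the space rule at each input angle."""
    samples = []
    for k in range(n):
        phi_in = 2.0 * math.pi * k / n
        i = space_rule_derivative(phi_in)
        samples.append((phi_in, *pitch_radii(i, centre_distance)))
    return samples

# Invented space rule derivative: the output leads and lags sinusoidally
# around a 1:1 mean ratio, as for a gently non-circular wheel pair.
curves = pitch_curves(lambda phi: 1.0 + 0.3 * math.sin(phi), centre_distance=2.0)
```

For a constant ratio of 1 the two radii are equal halves of the centre distance, i.e. the circular limiting case the text says the staged distribution tries to stay close to.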
This distribution is made possible by the use of shaped gear trains, and in particular of mechanisms with multiple shaped gear train stages, which allow sufficient angular displacements to be imposed on the pipes, in order to allow for significant changes in shape, and in particular to allow the flexible segments to cross paths as they do in the shape of a heart. These shaped gear trains allow one pipe to be safely slowed down relative to the other. The wheel inFIG.27is one example of the optimisation of the case inFIG.22; in order to correct the trajectories about angles of 140°-80°-140° as shown, instead of 120°-120°-120° which would be more balanced, the wheels would, without any specific distribution per stage, i.e. if the top stage and the bottom stage each carried out half of the angular transformation, have to be designed with very high deformations and very inclined teeth, which are fragile and difficult to machine. Another distribution, for example 20% of the deformation at the bottom stage and 80% at the top stage, allows wheels that are closer to a round shape to be obtained, which are easier to machine, and with near-standard teeth, thus with improved kinematic and tribological parameters, and less wear. FIG.23shows the change in torque between the flexible segments of the resilient hand as a function of the angle travelled, with a first area wherein the length of the resilient hand is reduced with torque consumption, a second phase of maintaining the length of the hand at a substantially constant torque, and a third phase of extending the hand with torque restitution;FIG.24shows the change in torque on a pipe as a function of the angle of rotation of a pipe, andFIG.25shows the radial extension of the hand as a function of the angle of rotation of a pipe. The resilient hand1can be produced in a variety of different ways.
In an alternative embodiment, in the free state, the resilient hand1extends over a single planar level comprising the first pipe2and the second pipe4, and the resilient hand1is thus arranged such that it is mounted in a twisted manner in a stressed operating position wherein the first pipe2and the second pipe4are superimposed on one another. In an alternative embodiment, in the free state, the resilient hand1extends over a first planar level comprising the first pipe2and over a second planar level comprising the second pipe4, and comprises a connecting area between the first planar level and the second planar level at a tip6between a first flexible segment5A bearing the first pipe2and a second flexible segment5B joined to the first flexible segment5A and bearing the second pipe4, and the resilient hand1is arranged such that it is mounted in a non-twisted manner in a stressed operating position wherein the first pipe2and the second pipe4are superimposed on one another. In another specific alternative embodiment, when the resilient hand1comprises more than two flexible segments5, in the free state, the resilient hand1extends over, at most, as many parallel levels as there are flexible segments5, and is arranged such that it is mounted in a non-twisted manner in a stressed operating position wherein the first pipe2and the second pipe4are superimposed on one another. In a specific alternative embodiment intended to facilitate assembly, as shown inFIG.10, in the free state, the resilient hand1comprises a divisible element24joining the first pipe2and the second pipe4, in order to facilitate the assembly of the resilient hand1on a drive wheel set of the first pipe2or of the second pipe4, this divisible element24being arranged such that it can be broken and allow for the passage of the resilient hand1into a stressed operating position wherein the first pipe2and the second pipe4are superimposed on one another. 
FIG.7shows one specific alternative embodiment wherein the resilient hand1comprises at least one eye60, which is arranged such that it forms an aperture for reading information appearing on a dial61comprised in the mechanism10, and in front of which the resilient hand1extends, or comprised in a horological movement20, on which the mechanism10is arranged for attachment thereto. For example, this eye allows a town or city to be viewed in a GMT application, or the a.m. time from 0 to 12 to be differentiated from the p.m. time from 13 to 24, in a specific application wherein the display mechanism is driven over two revolutions, the first with a certain extension of the resilient hand1in order to display the a.m. time, and the second with a different extension in order to display the p.m. time; it goes without saying that this alternative embodiment can also be compatible with a display by the tip6of the hand1, the presence of such an eye60improving reading comfort for the user.FIG.7also shows two inner and outer indexes on either side of this eye60, which also allow for specific readings, depending on the adopted dial configuration. One of the main advantages of the invention is that it allows for high design freedom as regards the dial, and for the placement of certain display areas outside of areas that are unavailable, for example as a result of the presence of a tourbillon or other complication. Advantageously, the hand1is made from a material that can be micro-machined according to a “LIGA” method, and is in particular made of nickel-phosphorus NiP12or similar material. Such a hand can be gold-plated, or can receive any other colouring, the adherence whereof is satisfactory on such a material. The hand1can be coloured using different methods: PVD, CVD, ALD, electrodepositing, painting, lacquering, or other coating or ionisation. 
The hand1can comprise jewel setting or similar, and/or decoration by engine-turning, engraving, angling or enamelling, the latter being reserved to areas of low deformation such as the circumference of the pipes, an eye circumference, the tip or similar areas. More particularly, the mechanism10forms an additional module, which is arranged so as to be connected to a horological movement20, and the first drive means11and the second stressing means12comprise a common input71, which is arranged so as to be driven by a single output21comprised in the movement20, such as the cannon-pinion that rotates in one hour, or the minutes wheel set. In an alternative embodiment, in addition to or in place of the shaped gear trains, the mechanism10comprises, between on the one hand the input wheel set71arranged so as to be driven by the movement20, and on the other hand the first pipe2and/or the second pipe4, at at least one stage, a cam902,904. This cam is arranged such that it controls a differential gear912,914, a first input whereof is formed by the input wheel set71, a second input whereof is a wheel set, in particular a rack, controlled by this cam902,904, and the output whereof gears with the gear train for transmitting the movement to the first pipe2or respectively to the second pipe4. 
In a first application of this alternative embodiment, the mechanism10comprises, between the input wheel set71and the first pipe2, at the level of at least one stage, a single cam902arranged such that it controls a first differential gear912, a first input whereof is formed by the input wheel set71, a second input whereof is a first wheel set or a first rack controlled by the cam902, and the output whereof gears with the gear train for transmitting the movement to the first pipe2, and between the input wheel set71and the second pipe4, the same single cam902arranged such that it controls a second differential gear914, a first input whereof is formed by the input wheel set71, a second input whereof is a second wheel set or a second rack controlled by the cam902, and the output whereof gears with the gear train for transmitting the movement to the second pipe4. In a second application of this alternative embodiment, the mechanism10comprises, between the input wheel set71and the first pipe2, at the level of at least one stage, a first cam902arranged such that it controls a first differential gear912, a first input whereof is formed by the input wheel set71, a second input whereof is a first wheel set or first rack controlled by the first cam902, and the output whereof gears with the gear train for transmitting the movement to the first pipe2; and, between the input wheel set71and the second pipe4, a second cam904driven by the input wheel set and arranged such that it controls a second differential gear914, a first input whereof is formed by the input wheel set71, a second input whereof is a second wheel set or a second rack controlled by the second cam904, and the output whereof gears with the gear train for transmitting the movement to the second pipe4. The use of a cam allows for highly non-circular trajectories, additionally with jumps of the hand. 
The use of a single cam for both differential gears allows a simultaneous jump of the two pipes to be performed, for example at midnight; the first differential gear adds the information of the cam for the first pipe, and the second differential gear subtracts the information for the second pipe. In another specific alternative embodiment, at least one wheel comprised in the gear train mechanism arranged between, on the one hand, the input wheel set71arranged such that it is driven by a movement20and, on the other hand, the first pipe2and/or the second pipe4, at the level of at least one stage, comprises an incomplete toothing, each missing tooth allowing the resilient hand1to relax, by rotation of only one of the pipes2,4, during the passage of the space corresponding to a missing tooth, or to the missing teeth, so as to control a recoil of the tip6of the resilient hand1. In particular, a gear train can be used, comprising one or more, or even all circular wheels, at least one circular wheel whereof is devoid of one or more teeth in order to allow the hand to relax, and to perform a jump at the end of the so-called spiral display travel carried out by the tip6of the hand1. If, for example, the first pipe rotates faster than the second pipe, and if the driving cannon-pinion72is locally devoid of teeth, the hand tends to contract, for example over two revolutions and, when the missing teeth release the first pipe, the hand becomes taut but the second pipe does not move, and the tip of the hand recoils. The advantage of such an alternative embodiment is to allow for the conventional machining of the wheels. FIG.24shows the very low level of torque consumed for the deformation of such a LIGA hand1, during the shortening thereof, which only has a very slight influence on the running of the movement.
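The add/subtract behaviour of the two differential gears driven by a single cam can be modelled in a few lines (a sketch only; the step cam profile and the 30° jump value are invented for illustration, a real cam would be a smooth machined curve):

```python
def cam_correction(input_angle_deg):
    """Hypothetical cam reading: a 30-degree contribution near the end of
    the revolution (e.g. just before midnight), zero elsewhere."""
    return 30.0 if input_angle_deg % 360 >= 350 else 0.0

def differential_outputs(input_angle_deg):
    """The first differential gear adds the cam reading for the first
    pipe; the second subtracts it for the second pipe, so both pipes
    jump simultaneously in opposite directions."""
    c = cam_correction(input_angle_deg)
    first_pipe = (input_angle_deg + c) % 360
    second_pipe = (input_angle_deg - c) % 360
    return first_pipe, second_pipe

p1, p2 = differential_outputs(355)  # both pipes deviate by 30 degrees here
```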
As a result of the proximity of the escapement, it nonetheless remains advantageous to reduce this perturbation as much as possible, which can be obtained using very thin flexible segments5, typically less than 100 micrometres in width and 200 micrometres in height for a LIGA construction. It is understood inFIG.24, which shows that the torque curve as a function of the angle is U-shaped with a very flat bottom, that it is advantageous, during design, to choose an angular deformation range corresponding to the lowest level of the torque curve so as to minimise the induced spurious torque and thus minimise the perturbation to watch operation. A staged design with a specific distribution helps to select optimal angular ranges. The correct choice of this angular range also allows the thickness of the segments5of the hand1to be increased in order to make it more visible, without significantly increasing the perturbation torque thereof. By way of comparison, the induced perturbation to operation is less than that caused by a change in date at midnight for a date mechanism. The invention is shown in the figures with a simple shape, however it can be adapted to very different hand shapes. For example, it can be an asymmetrical hand composed of two V shapes interlocking with one another and having the same direction, each arm of each V shape being integral with one of the pipes, and the extremal end of the other arm being linked to the similar end of the other V shape. Alternatively, it can be a two-armed hand with two segments joining a first tip and attached to the two pipes, and two other segments joining a second tip remote from the first, and attached to the same pipes. Alternatively, it can be a hand comprising thickened areas over a median area of the flexible segments, for improved viewing of the hand. The invention further relates to a horological movement20comprising at least one such display mechanism10.
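Choosing the angular deformation range at the flat bottom of the U-shaped torque curve can be framed as a small sliding-window search. A sketch (the synthetic quartic curve and the 120° window width are assumptions for illustration, not data from FIG. 24):

```python
def best_window(torque, width):
    """Return the start index of the contiguous window of `width` samples
    whose summed spurious torque is lowest (sliding-window scan)."""
    best_start = 0
    best_sum = current = sum(torque[:width])
    for start in range(1, len(torque) - width + 1):
        current += torque[start + width - 1] - torque[start - 1]
        if current < best_sum:
            best_start, best_sum = start, current
    return best_start

# Synthetic U-shaped torque curve sampled every degree: high at the
# extremes of travel, very flat near the middle of the range.
curve = [((a - 180) / 180.0) ** 4 for a in range(360)]
start = best_window(curve, 120)  # pick a 120-degree working range
```

On this symmetric curve the search settles on the window centred on the flat bottom, which is the design choice the text recommends.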
The invention further relates to a timepiece30comprising at least one horological movement20and/or comprising at least one such display mechanism10. More particularly, this timepiece30is a watch. The invention further relates to a scientific apparatus comprising at least one horological movement20and/or comprising at least one such display mechanism10.
11860578

DETAILED DESCRIPTION OF THE INVENTION In the following description, reference is made to a horological movement provided with a date mechanism, typically an annual calendar mechanism. The usual components of the horological movement, which are well known to a person skilled in the art in this technical field, are only described in a simplified manner or not described at all. The person skilled in the art will indeed be able to adapt these various components and make them cooperate for the operation of the horological movement. In particular, everything relating to the date mechanism will not be described below. FIGS.1to4show a portion of a timepiece1, which comprises a horological movement2. The timepiece1is typically a mechanical watch. The horological movement2includes an indicator6anti-correction system4. The indicator6is for example a date indicator, a month indicator or else a day or year indicator or a week indicator, without this being limiting in the context of the present invention. In the particular embodiment shown inFIGS.1to4, the system4is a month indicator6anti-correction system. In this case, the horological movement2also includes a date indicator5and an annual calendar mechanism (the latter not being shown in the figures). The month indicator6is shown inFIG.1. In this figure (wherein the upper washer provided with the month digits has been omitted), the month indicator6is rotatably mounted about an axis6a. During thirty-day months, the month indicator6is driven by another system (not shown) belonging to the annual calendar mechanism, in order to modify the value of the month at the end of the thirtieth day. In a variant not shown, the system4is a date indicator anti-correction system. The system4comprises a correction mechanism8of the month indicator6, a correction gear10, a drive gear12, and a movable component14(not shown inFIG.1for clarity). Preferably, the system4also comprises at least one stopper (not visible in the figures).
According to a particular variant embodiment not shown in the figures, the system4also comprises a spring mounted so as to urge the movable component14towards an elastic return position. The correction mechanism8is connected to a manual correction actuation system (not shown in the figures), such as for example an actuating rod connected to a winding crown. The correction mechanism8is movably mounted between a first correction position illustrated inFIGS.1and2, wherein the correction mechanism8meshes with the correction gear10; and a second non-correction position (not shown in the figures) wherein the mechanism8does not mesh with the correction gear10. Preferably, the correction mechanism8is configured so that its second non-correction position of the month indicator6corresponds to a correction position of the date indicator5. More preferably, the correction mechanism8comprises at least one drive finger18. In the particular exemplary embodiment illustrated inFIGS.1to4, the correction mechanism8comprises three drive fingers18regularly distributed over 360 degrees around an axis18awhich is mounted on the mechanism8. The drive fingers18are integral and rotatably mounted about the axis18a. The drive fingers18are configured to mesh with the correction gear10in the first correction position of the mechanism8. The correction gear10is integral with the drive gear12. As illustrated inFIG.1, the correction gear10is configured to mesh with the month indicator6. The drive gear12is mounted on the correction gear10, and is for example configured to cooperate with at least one lug19provided on an upper portion20of the date indicator5. In the particular exemplary embodiment illustrated inFIG.1, the upper portion20of the date indicator5is provided with a single lug19. The lug19drives the month indicator6, via the drive gear12, during a long month (a thirty-one-day month).
Conversely, the lug19is driven by the month indicator6, via the drive gear12, for a short month (less than thirty-one days). Preferably, as illustrated inFIG.1, the lug19extends in the plane defined by the upper portion20of the date indicator5, and extends radially inwardly of the date indicator5. Alternatively, the upper portion20of the date indicator5is not provided with any lug. This is for example the case when the horological movement2is provided with a simple non-annual date mechanism. The component14, which is visible inFIGS.2to4, is mounted movable in rotation about an axis22. In the preferred embodiment shown inFIGS.1to4, the component14is mounted free in rotation about the axis22. The component14is maintained in height on the axis22, typically by means of a tenon or a pin. The component14has a first end14aand a second end14b. In the preferred exemplary embodiment illustrated inFIGS.2to4, the component14is in the shape of a lever. In this case, the first end14acorresponds to a corner of a first enlarged portion21of the lever, and the second end14bcorresponds to a free end of a second arm-shaped portion23of the lever. The first end14ais configured to cooperate with at least one protuberance24provided on a lower portion25of the date indicator5. In the particular exemplary embodiment illustrated inFIGS.2to4, the lower portion25of the date indicator5is provided with a single protuberance24. Preferably, as illustrated inFIGS.2to4, the protuberance24extends in the plane defined by the lower portion25of the date indicator5, and extends radially towards the inside of the date indicator5. In a variant not shown, the lower portion25of the date indicator5is provided with at least two protuberances distributed at intervals which are regular or not on the periphery of the date indicator5. The component14is for example made of a plastic material or of a metallic material, in particular steel. 
The component14is advantageously manufactured by stamping, or else by plastic or metal injection, or else from strip stock or by bar-turning. As illustrated inFIGS.3and4, the second end14bof the component14is configured to cooperate with the correction mechanism8so as to block the mechanism8in an anti-correction position when the first end14aof the component14cooperates with the protuberance24. By this movement, the component14then rotates about its axis22and blocks the mechanism8in the anti-correction position. The correction mechanism8is then stopped before the end of its travel for the correction of the month, and rotates freely. The anti-correction position of the mechanism8, visible inFIGS.3and4, is a prohibited correction position of the month indicator6. In the particular embodiment shown inFIGS.1to4, the anti-correction position of the mechanism8is an intermediate position between the first correction position and the second non-correction position of the indicator6. In a variant not shown, the correction mechanism8is configured so that its anti-correction position corresponds to its second non-correction position of the month indicator6. The protuberance24is for example positioned so that the anti-correction position of the mechanism8is activated on the thirtieth day of each month. When the first end14aof the component14does not cooperate with the protuberance24, the movable component14is in an authorised correction position which is shown inFIG.2. In this position of the movable component14, the correction mechanism8is free to switch from its non-correction position to its correction position, and vice versa. This situation arises, for example, on a day other than the thirtieth day of the month. The month indicator6, the correction mechanism8, the correction gear10, the movable component14and the stopper are for example mounted on a plate26of the timepiece1.
At least one of the stoppers is configured to limit the angular displacement of the correction mechanism8between its first correction position and its second non-correction position. At least one other stopper can be configured to limit the angular displacement of the movable component14. The present invention has been described with reference to a particular embodiment according to which the system4is an anti-correction system of a month indicator6, and wherein the lower portion25of the date indicator5is provided with a single protuberance24. According to another embodiment, not shown in the figures, the lower portion25of the date indicator5is provided with at least two protuberances. The position of each protuberance on the date indicator5then corresponds to a predetermined date for which the protuberance cooperates with the first end14aof the component14so as to cause the blocking of the mechanism8in its anti-correction position. According to this particular embodiment, the manual correction of the month is prohibited on at least two predetermined days of each month. According to yet another embodiment, not shown in the figures, the system4is a date indicator anti-correction system. A lower portion of the hour wheel has at least one protuberance, preferably at least two protuberances. The position of each protuberance on the hour wheel then corresponds to a predetermined time during which the protuberance cooperates with the first end14aof the component14so as to cause the blocking of the mechanism8in its anti-correction position. According to this particular embodiment, the manual correction of the date is prohibited during at least one predetermined time of each day, preferably during at least two predetermined times of each day.
11860579

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The invention proposes to achieve a water resistant and secure assembly of external components with angular indexing which is easy to adjust, and in a guaranteed position, with a minimum number of components, and reduced manufacturing costs. FIGS.1to5illustrate the non-limiting example of the mounting and angular indexing of a back relative to a middle of a watch. The invention relates to a device100for fastening a back2on a middle1of a watch, the back2comprising a cover20arranged to form a bearing surface with the middle1and a body21arranged to rest in the middle, said device comprising a ring4disposed between the back2and the middle1, said ring4being arranged to ensure the water resistance and indexing of the back. The ring4comprises an upper part40ensuring the water resistance and a median part41comprising positioning means3to ensure the indexing of the back2relative to the middle1. As can be seen inFIG.1b, the middle1has a shoulder10on which the upper part40of the ring4rests, the ring4being flush with the middle once in place. Advantageously, the positioning means3comprise at least one lug43formed on the median part41of the ring4and arranged to rest in a housing11of the middle1, the lug43being formed on the external wall and extending over all or part of the height of the median part41of the ring4. Thus, the ring4cannot be moved in rotation when a torque is imparted to the back, for example. The ring4also comprises a lug46arranged to cooperate with a machining22formed in the back2.
Such an arrangement allows the correct positioning, or the correct indexing, of the back2relative to the middle1, which is particularly useful when the back2has a decoration and it must be perfectly aligned along the 12 h-6 h axis of the watch. According to a variant of the invention illustrated inFIGS.4and5, the ring4has a lower part42, the lower part42having retaining means arranged to cooperate with the back2, the retaining means being in the shape of a projecting clip420arranged to be housed in a slot or a housing formed in the body21of the back2. With this in mind, the ring4is made of a polymer material such as ASUTANE® or HYTREL®. Obviously, any other material capable of elastic deformation, and sufficiently strong, can be used to produce the ring4. The external44and internal45ribs thus make it possible to obtain good resistance to driving out once the back2has been mounted, force-fitted or driven, on the middle1. According to a preferred embodiment of the invention, the ring4comprises at least two internal ribs45and at least two external ribs44, the ribs being angularly spaced from each other by 60° to 90°. The internal ribs45and the external ribs44are intended to be offset relative to each other, that is to say that an external rib will not be positioned facing an internal rib, and thus provide better retention. The back2and the ring4are dimensioned such that, when they are forced into the middle1, the back2radially presses the ring4against said middle1, which has the effect of compressing said elastic element4between the wall of the middle and the back. The ring4includes in particular ribs which extend over all or part of its height and which, in reaction to the compressive force, will deform and guarantee good retention of the assembly. A compression ratio of the ring4of the order of 20% is preferred to provide good retention.
Such a ratio causes an increase in the frictional forces between the back2and the ring4on the one hand, and between the ring4and the middle1on the other hand, and therefore determines the torques involved for placing the back2on the watch. FIG.3illustrates a sectional view of the back2and of the ring4assembled on the middle1, the ring4being compressed between the internal side face of the middle1and the back2. Forcing the back into the middle keeps the ring4deformed, and more particularly its external ribs. This deformation obviously remains within the elastic deformation limit of the material of the ring.

The invention also relates to a timepiece or watch1000including such a fastening device100. In short, the invention provides a fastening device whose design is compact, maintains the water resistance of the watch, and protects against accidental dismounting. The invention also makes it possible to ensure the perfect orientation of a component kept blocked in its service position.
11860580 | DESCRIPTION OF EXEMPLARY EMBODIMENTS First Embodiment A watch1according to a first embodiment of the present disclosure will be described below with reference to the drawings. FIG.1is a front view illustrating the watch1. In this embodiment, the watch1is configured as a wristwatch that is worn on a user's wrist. The side contacting the wrist when the watch1is worn on the wrist is referred to as the back side of the watch1, and the side opposite to the back side is referred to as the front side of the watch1.

As illustrated inFIG.1, the watch1is provided with a metal outer case2. Further, the outer case2is provided with a disk-shaped dial10, a second hand3, a minute hand4, an hour hand5, a crown7, an A button8, and a B button9. Note that the outer case2is an example of a case of the present disclosure. The dial10is provided with hour marks6for indicating the time. Further, a solar cell50, a movement (not illustrated), and the like are provided on the back side of the dial10. That is, the watch1according to the present embodiment is configured as a solar watch.

Dial FIG.2is an enlarged cross-sectional view illustrating main portions of the dial10. As illustrated inFIG.2, the dial10includes a substrate11, which is a base material, and a metal film12. Further, convex portions13, which will be described later, are formed on the dial10. Note that the dial10is an example of a watch component of the present disclosure.

Substrate The substrate11is formed from a resin material, such as polycarbonate, for example, and is light-transmissive. Note that in the present disclosure, “light-transmissive” refers to having a property of transmitting at least some light in a wavelength region in which a solar panel of the solar cell50can generate power. The substrate11is formed as a circular disk, and has a first surface111disposed on the front side of the watch1and a second surface112disposed on the back side of the watch1.
Further, a plurality of recessed portions113, to be described below, are provided in the substrate11. Note that, as will be described below, the metal film12is layered on the first surface111of the substrate11. Then, the solar cell50is disposed on the second surface112side of the substrate11. In other words, the solar cell50is disposed on the second surface112side, which is the surface of the substrate11opposite to the first surface111on which the metal film12is disposed. In the present embodiment, an average thickness of the substrate11is not particularly limited, but is preferably from 300 μm to 1000 μm. Note that the substrate11is not limited to the configuration described above, and may be formed from various types of glass material, a monocrystalline alumina such as sapphire, or the like, or from any other material that is light-transmissive.

Metal Film The metal film12is formed from various types of metal material and is layered on the first surface111of the substrate11. Further, the metal film12includes a front surface121disposed on the front side of the watch1, and a back surface122disposed on the substrate11side. In other words, the back surface122is disposed facing or in contact with the first surface111of the substrate11. Examples of the metal material configuring the metal film12include Ag, Pt, Pd, Au, Cu, Al, Cr, Sn, Fe, Ti, and the like, or alloys thereof. Further, the metal film12may be configured by layering a plurality of metal films made of these materials. Furthermore, the metal film12may be configured by layering a metal film made of the metals described above with a metal oxide film, a metal nitride film, a metal carbide film, an inorganic oxide film, or the like, or may be formed from a metal oxide film, a metal nitride film, a metal carbide film, or the like. In the present embodiment, the metal film12is configured by layering an Ag layer having a thickness of 150 nm and an SiO2layer having a thickness of 100 nm.
A plurality of circular through holes123are formed in the metal film12. The through holes123penetrate from the front surface121to the back surface122of the metal film12, and are provided to achieve a desired light transmittance in the dial10. In other words, in the dial10, light incident from the front side of the watch1is transmitted to the back surface122side of the metal film12via the plurality of through holes123. Note that an average diameter of the through holes123is not particularly limited, but is preferably from 1 μm to 50 μm. Configuring the through holes123as described above can inhibit the solar cell50disposed on the back side of the dial10from being seen when the watch1is viewed from the front side, while maintaining the desired light transmittance, and it is possible to prevent a deterioration in appearance. Further, the through holes123are not limited to being formed in a circular shape, and, for example, may be formed in a lattice shape in the metal film12. In other words, the shape of the through hole123in plan view, when viewed from the film thickness direction of the metal film12, is not limited, as long as, in a cross-section in the thickness direction of the dial10, the through hole123, that is, an opening penetrating the metal film12as illustrated inFIG.2, and the recessed portions113formed in the substrate11are provided.

Recessed Portion The plurality of recessed portions113of the substrate11are provided in positions corresponding to the plurality of through holes123of the metal film12. Each recessed portion113includes side surfaces114and a bottom surface115formed continuously from the side surfaces114. In the present embodiment, the recessed portions113are formed so that the side surfaces of the through holes123and the side surfaces114of the recessed portions113are flush with each other.
In the present embodiment, the depth of the recessed portion113is not particularly limited, but is preferably from 5% to 50% of the thickness of the substrate11. Further, in the present embodiment, the bottom surface115of the recessed portion113is formed in a curved surface shape. Furthermore, the bottom surface115is formed to be a rough surface. Specifically, the bottom surface115is formed to be a rough surface for which an arithmetic average roughness Ra is greater than 0.01 μm and less than 0.3 μm. Note that in the present embodiment, the arithmetic average roughness Ra conforms to “JIS B 0601”. In this way, the bottom surface115is formed in the curved surface shape and is formed to be a rough surface, and as a result, much of the light that is incident from the front side of the watch1and passes through the through holes123is scattered by the bottom surface115. In other words, the bottom surface115functions as a scattering portion.

Convex Portion The convex portion13is provided along an opening end portion124of the through hole123of the metal film12. The convex portion13is provided by causing the metal film12and the substrate11to protrude in a direction from the back side to the front side of the watch1, that is, in the film thickness direction of the metal film12. A protrusion height of the convex portion13is not particularly limited, but is preferably from 30 μm to 40 μm. As a result, at the opening end portion124of the through hole123, that is, at a boundary portion of the through hole123, the light incident from the front side of the watch1is scattered by the convex portion13. As a result, the convex portion13functions as a scattering portion.

Manufacturing Method for Dial Next, a manufacturing method for the dial10according to the present embodiment will be described with reference to a flowchart inFIG.3. Note that, in the present embodiment, a method for manufacturing a plurality of the dials10will be described.
As illustrated inFIG.3, first, at step S1, the substrate11is formed by injection molding a resin material. Note that the substrate11is not limited to being formed by injection molding. For example, the substrate11may be formed by compression molding, extrusion molding, or the like. Next, at step S2, the metal film12is layered on the first surface111of the substrate11by sputtering. Note that the metal film12is not limited to being layered by sputtering, and may be layered by vacuum deposition, ion plating, ion-assisted deposition, or the like, for example.

Next, at step S3, laser machining is performed. Specifically, an arrangement of the through holes123necessary to achieve the desired light transmittance is determined in advance, and the laser is irradiated from the front surface121side of the metal film12in accordance with the required arrangement of the through holes123. As a result, the metal film12is drilled by the laser at positions corresponding to the through holes123, and thus the through holes123are formed at the desired positions. At this time, a power output of the laser is adjusted so that, in addition to the metal film12, the substrate11can also be drilled to a desired depth. As a result, the recessed portions113having a predetermined depth are formed at positions corresponding to the through holes123of the substrate11. At this time, as described above, the bottom surfaces115of the recessed portions113are formed into the curved surface shape. Further, when drilling the metal film12and the substrate11using the laser, the opening end portions124of the through holes123thermally expand due to the heat of the laser, and protrude in the film thickness direction of the metal film12. As a result, the convex portions13are formed.

Next, die cutting is performed at step S4to form the plurality of dials10. Then, at step S5, a model number or the like is printed on the surface of the metal film12or the like. Finally, at step S6, the hour marks6and the like are imprinted.
Advantageous Effects of First Embodiment According to the present embodiment as described above, the following advantageous effects can be obtained.

In the present embodiment, the dial10is provided with the light-transmissive substrate11and the metal film12layered on the first surface111of the substrate11. Then, the plurality of through holes123penetrating the metal film12are formed in the metal film12, and the recessed portions113are formed in the substrate11at positions corresponding to the through holes123. In this way, the light incident from the front side of the watch1reaches the recessed portion113of the substrate11via the through hole123of the metal film12, and is scattered by the recessed portion113. As a result, the interference of the reflected light can be suppressed compared to a case in which the recessed portion113is not provided in the substrate11and the incident light is reflected by the first surface111of the substrate11. Therefore, the appearance of a stripe pattern caused by the interference of the reflected light can be suppressed, that is, it is possible to prevent glare. Thus, it is possible to prevent the deterioration in the appearance of the watch1.

In the present embodiment, the bottom surface115of the recessed portion113is formed as the curved surface. As a result, the bottom surface115functions as the scattering portion that scatters the incident light, and thus the interference of the reflected light can be suppressed.

In the present embodiment, the bottom surface115of the recessed portion113is formed to be the rough surface. Specifically, the bottom surface115is formed to be a rough surface for which an arithmetic average roughness Ra is greater than 0.01 μm and less than 0.3 μm. This makes the incident light more scattered, so the interference of the reflected light can be further suppressed.
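The arithmetic average roughness Ra used throughout this description can be stated concretely: per JIS B 0601, Ra is the mean absolute deviation of the surface profile heights from their mean line. The sketch below illustrates the definition with made-up profile samples (not measured data from the embodiment).

```python
# Arithmetic average roughness Ra (per JIS B 0601): the mean absolute
# deviation of the surface profile heights from their mean line.
def arithmetic_average_roughness(heights_um):
    mean = sum(heights_um) / len(heights_um)
    return sum(abs(z - mean) for z in heights_um) / len(heights_um)

# Hypothetical profile samples in micrometres, for illustration only.
profile = [0.05, -0.10, 0.20, -0.15, 0.10, -0.10]
ra = arithmetic_average_roughness(profile)
print(f"Ra = {ra:.3f} um")  # prints "Ra = 0.117 um"
```

A profile with this Ra would fall inside the 0.01 μm < Ra < 0.3 μm range that the first embodiment targets for the bottom surface115.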
In the present embodiment, the convex portions13protruding in the film thickness direction of the metal film12are provided on the dial10along the opening end portions124of the plurality of through holes123. As a result, the convex portion13functions as the scattering portion, and the interference of the reflected light of the light incident from the front side of the watch1can be suppressed at the boundary portion of the through hole123.

In the present embodiment, the through holes123and the recessed portions113are formed by the laser machining in the manufacturing process of the dial10. As a result, manufacturing costs of the dial10can be reduced because manufacturing processes can be reduced in comparison to a case in which the through holes123and the recessed portions113are formed in a typical etching process.

Second Embodiment Next, a second embodiment of the present disclosure will be described below with reference toFIG.4andFIG.5. The second embodiment differs from the first embodiment described above in that recessed portions113A are formed by ion milling. Note that, in the second embodiment, components that are the same as or similar to those of the first embodiment will be assigned the same reference signs and a description thereof will be omitted or simplified.

FIG.4is an enlarged cross-sectional view illustrating main portions of a dial10A according to the second embodiment. As illustrated inFIG.4, the dial10A of the present embodiment is provided with a substrate11A and a metal film12A layered on a first surface111A of the substrate11A. Note that convex portions such as those of the first embodiment described above are not formed on the dial10A of the present embodiment. The substrate11A is configured in a similar manner to the substrate11of the first embodiment described above, and is provided with the first surface111A and a second surface112A. The recessed portions113A are provided at positions corresponding to through holes123A of the metal film12A.
The recessed portion113A includes a side surface114A and a bottom surface115A, and the bottom surface115A is formed in a curved surface shape. The metal film12A is configured in a similar manner to the metal film12of the first embodiment described above, includes a front surface121A and a rear surface122A, and the plurality of through holes123A are formed therein. In the present embodiment, the convex portions are not provided as described above, so opening end portions124A of the through holes123A do not protrude.

Manufacturing Method for Dial Next, a manufacturing method for the dial10A according to the present embodiment will be described using a flowchart inFIG.5. Note that in the present embodiment, steps S1A, S2A, and S4A to S6A are the same as the steps S1, S2, and S4to S6of the first embodiment described above, and descriptions thereof will thus be omitted here.

As illustrated inFIG.5, at step S7A, a resist is applied to the surface121A of the metal film12A. Specifically, a photoresist is applied by spin coating. Next, at step S8A, the resist is irradiated with ultraviolet light and subjected to UV exposure. At this time, by using a photomask, exposure is performed so that a resist pattern is formed apart from the positions at which the through holes123A are formed. Thereafter, at step S9A, heat treatment is performed using an atmospheric oven or the like, for example, and the resist pattern is developed at step S10A. The resist pattern is formed in this way.

Next, the ion milling is performed at step S11A. Specifically, the surface121A of the metal film12A is irradiated with an ion beam using the resist pattern as a mask. As a result, the through holes123A are formed by irradiating the metal film12A with the ion beam at positions not masked by the resist pattern. Further, the substrate11A is also irradiated with the ion beam via the through holes123A.
As a result, the recessed portions113A having a predetermined depth are formed in positions corresponding to the through holes123A of the substrate11A. At this time, in a similar manner to the first embodiment described above, the bottom surface115A of the recessed portion113A is formed in the curved surface shape. After that, at step S12A, the resist pattern is removed. Specifically, the resist pattern is peeled off by performing alkali treatment using caustic soda water or the like at a concentration of 2 to 5%, and then rinsing is performed using pure water or the like.

Advantageous Effects of Second Embodiment According to the present embodiment as described above, the following advantageous effects can be obtained. In the present embodiment, in a similar manner to the first embodiment described above, the recessed portions113A are formed at the positions corresponding to the through holes123A of the substrate11A. The bottom surface115A of the recessed portion113A is formed in the curved surface shape. As a result, the interference of the reflected light can be suppressed in the same manner as in the first embodiment described above. Therefore, the appearance of an iridescent stripe pattern caused by the interference of the reflected light can be suppressed, that is, it is possible to prevent glare. It is thus possible to prevent the deterioration in the appearance of the watch1.

Third Embodiment Next, a third embodiment of the present disclosure will be described with reference toFIG.6andFIG.7. The third embodiment differs from the first and second embodiments described above in that recessed portions113B are formed by blasting. Note that, in the third embodiment, components that are the same as or similar to those of the first and second embodiments will be assigned the same reference signs and a description thereof will be omitted or simplified.

FIG.6is an enlarged cross-sectional view illustrating main portions of a dial10B according to the third embodiment.
As illustrated inFIG.6, the dial10B of the present embodiment is provided with a substrate11B, a metal film12B layered on a first surface111B of the substrate11B, and convex portions13B.

The substrate11B is configured in a similar manner to the substrate11of the first embodiment described above, and includes the first surface111B and a second surface112B, and the recessed portions113B are provided at positions corresponding to through holes123B of the metal film12B. Then, the recessed portion113B includes a side surface114B and a bottom surface115B, and the bottom surface115B is formed in a curved surface shape. In the present embodiment, although not illustrated, the recessed portion113B is formed so that the arithmetic average roughness Ra of the bottom surface115B is greater than that of the first embodiment described above. Specifically, the bottom surface115B is formed to be a rough surface for which the arithmetic average roughness Ra is greater than 0.3 μm and less than 0.5 μm. As a result, the bottom surface115B functions as the scattering portion, in a similar manner to the first embodiment described above. Further, because the arithmetic average roughness Ra of the bottom surface115B is large, the incident light is less likely to be reflected. In other words, since reflection loss can be suppressed, the transmittance of light incident through the through holes123B increases.

The metal film12B is configured in a similar manner to the metal film12of the first embodiment described above, includes a front surface121B and a rear surface122B, and the plurality of through holes123B are formed therein. As in the first embodiment described above, the convex portions13B are provided along opening end portions124B of the through holes123B of the metal film12B. In the present embodiment, the protrusion height of the convex portion13B is not particularly limited, but is preferably from 5 μm to 10 μm.
Manufacturing Method for Dial Next, a manufacturing method for the dial10B according to the present embodiment will be described with reference to a flowchart inFIG.7. Note that in the present embodiment, steps S1B, S2B, and S4B to S6B are the same as steps S1, S2, and S4to S6of the first embodiment described above, and descriptions thereof will thus be omitted here.

As illustrated inFIG.7, at step S13B, a film for a mask is adhered to the surface121B of the metal film12B. For example, a dry film resist for sand blasting is used as the film. Next, at step S14B, the adhered film is irradiated with ultraviolet light and subjected to UV exposure. Then, the dry film resist is developed at step S15B. The resist pattern is formed in this way.

Next, at step S16B, blasting is performed. Specifically, fine sand is projected onto the surface121B of the metal film12B, with the resist pattern formed by the film as a mask. In this way, the through holes123B are formed as a result of the fine sand being projected onto positions where there is no masking by the resist pattern of the metal film12B. At this time, the fine sand is also projected onto the substrate11B via the through holes123B. In this way, the recessed portions113B having a predetermined depth are formed in positions corresponding to the through holes123B of the substrate11B. Here, in a similar manner to the first embodiment described above, the bottom surface115B of the recessed portion113B is formed in the curved surface shape. Further, in the present embodiment, since the recessed portion113B is formed by blasting, the bottom surface115B is scraped by the fine sand, and the arithmetic average roughness Ra of the bottom surface115B is increased. Furthermore, the opening end portion124B of the through hole123B is deformed due to the impact of collision with the fine sand, and protrudes in the film thickness direction of the metal film12B. The convex portion13B is formed in this way.
Note that, at this time, the film resist is also somewhat shaved by collision with the fine sand. However, because the film resist is sufficiently thicker than the metal film12B and its grinding rate is lower than that of the metal film12B, the metal film12B is not shaved at the locations masked with the resist pattern. After that, the resist is removed at step S17B.

Advantageous Effects of Third Embodiment According to the present embodiment as described above, the following advantageous effects can be obtained.

In the present embodiment, as in the first and second embodiments described above, the recessed portions113B of the substrate11B are formed in the positions corresponding to the through holes123B penetrating the metal film12B. Then, the bottom surface115B of the recessed portion113B is formed in the curved surface shape. In this way, the interference of the reflected light can be suppressed in a similar manner as in the first and second embodiments described above. Therefore, the appearance of the stripe pattern caused by the interference of the reflected light can be suppressed, that is, it is possible to prevent glare. It is thus possible to prevent the deterioration in the appearance of the watch1.

In the present embodiment, the bottom surface115B is formed to be the rough surface for which the arithmetic average roughness Ra is greater than 0.3 μm and less than 0.5 μm. As a result, the incident light can be further scattered, and the interference of the reflected light can be further suppressed. Further, since the reflection loss of the incident light can be suppressed, a transmitted amount of light incident through the through holes123B can be increased.

In the present embodiment, the convex portion13B protruding in the film thickness direction of the metal film12B is provided along the opening end portion124B of the through hole123B.
In this way, in a similar manner to the first embodiment described above, the interference of the reflected light of the incident light at a boundary portion of the through hole123B can be suppressed. In the present embodiment, the through holes123B and the recessed portions113B are formed by blasting in the manufacturing process of the dial10B. Therefore, manufacturing costs of the dial10B can be reduced because manufacturing processes can be reduced in comparison to a case in which the through holes123B and the recessed portions113B are formed in a typical etching process, for example.

Next, specific examples will be described.

Example 1 A dial was formed in accordance with the first embodiment described above. Specifically, the dial was formed by layering a metal film, through sputtering, on a polycarbonate substrate having a thickness of 500 μm and a diameter of 30 mm. The metal film was formed by layering an Ag layer having a thickness of 120 nm and an SiO2layer having a thickness of 100 nm. Then, a plurality of through holes were formed in the metal film by laser machining. At this time, the number of through holes required for the transmittance of light to become 30% was determined through pre-testing, and the determined number of through holes was formed. In addition, recessed portions having a depth of 250 μm were formed by laser machining in positions corresponding to the through holes of the substrate. Further, convex portions having a protrusion height of 35 μm were formed on opening end portions of the through holes.

Example 2 The dial was formed in accordance with the second embodiment described above. Specifically, a substrate and a metal film similar to Example 1 described above were prepared, and a plurality of through holes were formed in the metal film by ion milling.
At this time, the number of through holes required for the transmittance of light to become 30% was determined through pre-testing, and the determined number of through holes was formed. Further, recessed portions having a depth of 250 μm were formed by ion milling in positions corresponding to the through holes of the substrate.

Example 3 The dial was formed in accordance with the third embodiment described above. Specifically, a substrate and a metal film similar to Example 1 and Example 2 described above were prepared, and a plurality of through holes were formed in the metal film by blasting. At this time, the number of through holes required for the transmittance of light to become 30% was determined through pre-testing, and the determined number of through holes was formed. Further, recessed portions having a depth of 250 μm were formed by blasting in positions corresponding to the through holes of the substrate. Furthermore, convex portions having a protrusion height of 7.5 μm were formed on opening end portions of the through holes.

COMPARATIVE EXAMPLE FIG.8is an enlarged cross-sectional view illustrating main portions of a dial20of a Comparative Example. As illustrated inFIG.8, the dial20of the Comparative Example is provided with a substrate21and a metal film22. The substrate21includes a first surface211and a second surface212, and was formed from polycarbonate having a thickness of 500 μm and a diameter of 30 mm. Then, the metal film22was layered on the first surface211of the substrate21. The metal film22was formed by layering an Ag layer having a thickness of 120 nm and an SiO2layer having a thickness of 100 nm. A plurality of through holes223penetrating from the front surface221of the metal film22to the rear surface222were formed by a known etching process.
At this time, the number of through holes223required for the transmittance of light to become 30% was determined through pre-testing, and the determined number of through holes223was formed. Note that recessed portions, such as those in Example 1 to Example 3 described above, are not formed in the substrate21of the dial20of the Comparative Example.

Evaluation Tests The following evaluation tests were performed on the dials of Example 1 to Example 3 and on the dial20of the Comparative Example.

Confirmation Test for Interference Fringe Reduction Effect A visual test stipulated in “JIS Z 8720”, for example, was performed with respect to the dials of Example 1 to Example 3 and the dial20of the Comparative Example, and an interference fringe reduction effect was evaluated. Evaluation criteria were “A” for significant improvement, “B” for improvement, and “C” for no improvement in the interference fringe reduction effect with respect to the dial20of the Comparative Example.

Confirmation Test for Panel Transmittance Reduction Effect The visual test stipulated in “JIS Z 8720”, for example, was performed with respect to the dials of Example 1 to Example 3 and the dial20of the Comparative Example, and an evaluation as to how difficult it was to see the solar cell50when viewed from the front side of the dial was used as a panel transmittance reduction effect. Evaluation criteria were “A” for significant improvement, “B” for improvement, and “C” for no improvement in the panel transmittance reduction effect with respect to the dial20of the Comparative Example.

Opening Ratio Evaluation Opening ratios of the dials of Example 1 to Example 3 and the dial20of the Comparative Example were calculated. Specifically, the ratio of the total through-hole area with respect to the area of the surface of the dial was calculated as a percentage.
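The opening ratio defined above (total through-hole area as a percentage of the dial surface area) can be sketched numerically. The 30 mm dial diameter comes from the examples; the individual hole diameter and the resulting hole count below are assumptions for illustration only, not figures given in the text.

```python
import math

dial_diameter_mm = 30.0   # dial diameter used in the examples
hole_diameter_mm = 0.05   # assumed: 50 um, the upper end of the stated 1-50 um range

dial_area = math.pi * (dial_diameter_mm / 2) ** 2
hole_area = math.pi * (hole_diameter_mm / 2) ** 2

def opening_ratio_percent(n_holes):
    """Total through-hole area relative to the dial surface, as a percentage."""
    return 100.0 * n_holes * hole_area / dial_area

# Hypothetical count of 50 um holes that would give the 24.0% opening
# ratio reported for Example 1, Example 2, and the Comparative Example:
n_holes = round(0.24 * dial_area / hole_area)
print(n_holes, f"{opening_ratio_percent(n_holes):.1f}%")  # prints "86400 24.0%"
```

Smaller holes would require proportionally more of them for the same opening ratio, which is why the hole arrangement is determined in advance from the target transmittance.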
Note that, as described above, in the dials of Example 1 to Example 3 and the dial20of the Comparative Example, the through holes are formed so that the transmittance of light is 30%.

Results of Confirmation Test for Interference Fringe Reduction Effect FIG.9is a diagram showing results of the evaluation tests. As shown inFIG.9, the result of the confirmation test for the interference fringe reduction effect is “A” for the dials of Example 1 and Example 3, indicating that the interference fringe reduction effect is significantly improved with respect to the dial20of the Comparative Example. Further, the result is “B” for the dial of Example 2, indicating that the interference fringe reduction effect is improved with respect to the dial20of the Comparative Example. This suggests that by providing the recessed portions in the positions corresponding to the through holes, the interference fringe can be reduced. Furthermore, the results suggest that the interference fringe can be further reduced by providing the convex portions at the opening end portions of the through holes and increasing the arithmetic average roughness Ra of the bottom surfaces of the recessed portions.

Results of Confirmation Test for Panel Transmittance Reduction Effect The result of the confirmation test for the panel transmittance reduction effect is “A” for the dial of Example 3, indicating that the panel transmittance reduction effect is significantly improved with respect to the dial20of the Comparative Example. Further, the result is “B” for the dials of Example 1 and Example 2, indicating that the panel transmittance reduction effect is improved with respect to the dial20of the Comparative Example. This suggests that the panel transmittance can be reduced by providing the recessed portions in the positions corresponding to the through holes.
In particular, the results suggest that increasing the arithmetic average roughness Ra of the bottom surfaces of the recessed portions, as in Example 3, is effective in reducing panel transmittance.

Opening Ratio Evaluation The result of the opening ratio evaluation is 24.0% for the dials of Example 1 and Example 2 and for the dial20of the Comparative Example, and 23.4% for the dial of Example 3. In other words, the results suggest that the dial of Example 3 can achieve the predetermined transmittance with a smaller opening ratio compared to the other Examples and the Comparative Example. This suggests that by increasing the arithmetic average roughness Ra of the bottom surfaces, the area of the through holes can be reduced.

Modified Example Note that the present disclosure is not limited to each of the embodiments described above, and variations, modifications, and the like within the scope in which the object of the present disclosure can be achieved are included in the present disclosure. In each of the embodiments described above, the watch component of the present disclosure is configured as the dials10,10A, and10B, but is not limited thereto. For example, the watch component of the present disclosure may be configured as a partition plate.

In the first embodiment described above, the die cutting is performed after the laser processing, but no such limitation is intended, and, for example, the die cutting may be performed after performing coating following the laser processing. Similarly, in the second embodiment described above, the die cutting is performed after the ion milling, but no such limitation is intended, and, for example, the die cutting may be performed after performing the coating following the ion milling.
Furthermore, in a similar manner, in the third embodiment described above, the die cutting is performed after the blasting, but no such limitation is intended, and, for example, the die cutting may be performed after performing the coating following the blasting. | 32,758 |
11860581 | DESCRIPTION OF EXEMPLARY EMBODIMENTS First Embodiment As shown in FIGS. 1-7, a wristwatch 1 is a diver's watch worn on a user's wrist. A diver's watch is a type of diving watch. The watch 1 includes an outer case 2, a windshield glass 3, and a case back 4. A dial 5, a watch movement, an hour hand 6, a minute hand 7, and a seconds hand 8 are disposed in the outer case 2. The hour hand 6, the minute hand 7 and the seconds hand 8 are driven by the watch movement. The outer case 2 includes an opening portion 2a. The windshield glass 3 is disposed in the opening portion 2a. The hour hand 6, minute hand 7 and seconds hand 8 indicate the time of day. The dial 5 includes an indicator 9. The indicator 9 provides the graduations for the hour hand 6, the minute hand 7, and the seconds hand 8. The direction in which the indicator 9c for 3 o'clock lies relative to the axis 6a of the hour hand 6 is the 3 o'clock direction 11. The direction in which the indicator 9f for 6 o'clock lies relative to the axis 6a of the hour hand 6 is the 6 o'clock direction 12. The direction in which the indicator 9i for 9 o'clock lies relative to the axis 6a of the hour hand 6, in other words, the direction opposite to the 3 o'clock direction 11, is the 9 o'clock direction 13. The direction in which the indicator 9m for 12 o'clock lies relative to the axis 6a of the hour hand 6 is the 12 o'clock direction 14. The direction from the case back 4 toward the windshield glass 3 is the top direction 15. The direction opposite to the top direction 15 is a back direction 16. A bezel 17 that rotates around the axis 6a of the hour hand 6 is disposed at the outer case 2. The bezel 17 is disposed around the windshield glass 3. The bezel 17 includes a mark 17a on a surface facing the top direction 15. The mark 17a is disposed on the windshield glass 3 side of the bezel 17. The shape of the mark 17a is not particularly limited. With the present embodiment, for example, the mark 17a includes numbers at every 10 units, from 10 to 50.
In addition, the mark 17a includes rectangles at positions dividing the intervals between the numbers into 10 equal parts. The bezel 17 includes a triangular mark 17a in the 12 o'clock direction 14 from the axis 6a. The mark 17a functions as a scale indicating the amount of movement of the hour hand 6, the minute hand 7, and the seconds hand 8. The crown 18 is disposed at a side surface 2c of the outer case 2 in the 3 o'clock direction 11. The crown 18 is rotatable and can be pushed and pulled. When the user of the wristwatch 1 rotates the crown 18, the crown 18 can be pulled out. When the crown 18 is pulled out, the seconds hand 8 stops. When the crown 18 is pulled out and rotated, the hour hand 6 and the minute hand 7 rotate about the axis 6a. The user of the wristwatch 1 operates the crown 18 to correct the time indicated by the hour hand 6 and the minute hand 7. After correcting the time, the user of the wristwatch 1 actuates the seconds hand 8 by pushing the crown 18 inward. After the crown 18 has been pushed inward and rotated, the crown 18 can no longer be pulled out, so the operation of the crown 18 is locked. A protective member 19 is fastened to the side surface 2c of the outer case 2 on which the crown 18 is disposed, in the 3 o'clock direction 11 of the outer case 2. The protective member 19 is fastened to the outer case 2 by two bolts 20. The protective member 19 covers a portion of the side surface 2c of the outer case 2 and a portion of the side surface 17b of the bezel 17. According to this configuration, the wristwatch 1 is equipped with the protective member 19 that covers a portion of the side surface 2c of the outer case 2 and a portion of the side surface 17b of the bezel 17. Therefore, even when an impact occurs, the protective member 19 protects the outer case 2 and the bezel 17. The protective member 19 has a notch portion 19a that exposes the crown 18. Furthermore, because the protective member 19 includes the notch portion 19a that exposes the crown 18, the crown 18 can be operated.
Therefore, according to this wristwatch 1, the outer case 2 and the bezel 17 around the crown 18 can be protected by the protective member 19 without impairing operability. By loosening the bolts 20, the protective member 19 can be separated from the outer case 2. Accordingly, a plurality of protective members 19 having different shapes can be prepared, and the protective member 19 attached to the outer case 2 can be replaced. By changing the appearance of the protective member 19 without changing the appearance of the outer case 2, the appearance of the wristwatch 1 on which the protective member 19 is mounted can be changed. At this time, by designing only the protective member 19 without redesigning the outer case 2, it is possible to reduce the time required to design the shape change. Even for the molds for manufacturing the wristwatch 1, the mold of the outer case 2 can be used without modification, and for that reason a new model can be offered with high productivity. When the wristwatch 1 is attached to an arm, the 3 o'clock direction 11 side of the wristwatch 1 is exposed from a wet suit. Because the bezel 17 is protected by the protective member 19, it is possible to prevent the bezel 17 from rotating when it is inadvertently hit against an object such as a tank. A user of the wristwatch 1 can loosen the bolts 20 to separate the protective member 19 from the outer case 2. The user of the wristwatch 1 can perform maintenance by removing dirt, sand, and the like from between the protective member 19 and the outer case 2. The portions of the protective member 19 on the 6 o'clock direction 12 side and the 12 o'clock direction 14 side of the crown 18 are coupled, making the protective member 19 one body. Therefore, the protective member 19 can be fastened to the outer case 2 more easily than when the portions on the 6 o'clock direction 12 side and the 12 o'clock direction 14 side of the crown 18 are separate bodies. As illustrated in FIG. 1, a portion of the bezel 17 overlaps with the protective member 19 in a plan view of the windshield glass 3. The protective member 19 does not overlap with the mark 17a.
According to this configuration, the bezel 17 and the protective member 19 overlap in a plan view of the windshield glass 3. The protective member 19 covers a portion of the outer peripheral side of the bezel 17. Therefore, contamination caused by foreign matter between the bezel 17 and the outer case 2 can be suppressed. The protective member 19 does not overlap the mark 17a, so the mark 17a is exposed. For that reason, the user of the wristwatch 1 can read the mark 17a. A band 21 is disposed from the 12 o'clock direction 14 side to the 6 o'clock direction 12 side of the outer case 2. The outer case 2 is equipped with a plurality of bows 22 coupled to the band 21. At the 12 o'clock direction 14 side of the outer case 2, the first bow 22a is disposed at the 3 o'clock direction 11 side of the band 21, and the second bow 22b is disposed at the 9 o'clock direction 13 side of the band 21. The end of the band 21 at the 12 o'clock direction 14 side is coupled to the outer case 2 by the first and second bows 22a and 22b. A third bow 22c is disposed on the 3 o'clock direction 11 side of the band 21 at the 6 o'clock direction 12 side of the outer case 2, and a fourth bow 22d is disposed at the 9 o'clock direction 13 side of the band 21. The end of the band 21 at the 6 o'clock direction 12 side is coupled to the outer case 2 by the third and fourth bows 22c and 22d. In this way, the plurality of bows 22 includes the first bow 22a and the second bow 22b for coupling of the band 21 that are disposed at one side of the outer case 2, and the third bow 22c and the fourth bow 22d for coupling of the band 21 that are disposed at the other side of the outer case 2. The first and third bows 22a and 22c are arranged on the crown 18 side. The protective member 19 is arranged between the two bows 22, namely the first and third bows 22a and 22c, arranged on the crown 18 side. Specifically, the protective member 19 covers the side surface 2c of the outer case 2 and the side surface 17b of the bezel 17 between the first and third bows 22a and 22c.
According to this configuration, damage to the side surface 2c of the outer case 2 and the side surface 17b of the bezel 17 between the two bows 22 arranged on the crown 18 side can be inhibited. As illustrated in FIGS. 1 and 2, the shape of the notch portion 19a viewed from the top direction 15 and the back direction 16 is trapezoidal. The notch portion 19a is equipped with an oblique face 19b that faces the side surface 18b of the crown 18. The oblique angle 23, which is the angle formed between the rotating shaft 18a of the crown 18 and the oblique face 19b, is from 45 degrees to 55 degrees. When the angle formed between the rotating shaft 18a of the crown 18 and the oblique face 19b is 45 degrees or more, there is a space between the crown 18 and the oblique face 19b, so the user of the wristwatch 1 can easily grip and rotate the crown 18. When the angle formed between the rotating shaft 18a of the crown 18 and the oblique face 19b is 55 degrees or less, the crown 18 and the oblique face 19b are close, so the protective member 19 inhibits an object near the crown 18 from colliding with the crown 18. Therefore, the oblique angle 23 is preferably from 45 degrees to 55 degrees. As shown in FIGS. 3 and 4, the protective member 19 extends to the top direction 15 side of the bezel 17 and to the back direction 16 side of the case back 4. Specifically, the protective member 19 has a shape that overlaps a portion of the bezel 17 in a plan view, and overlaps a portion of the case back 4. Said another way, the cross-sectional shape of the protective member 19 is U-shaped and covers a portion of the bezel 17 and a portion of the case back 4. Therefore, it is difficult for foreign matter such as sand or the like to get between the protective member 19 and the outer case 2. Furthermore, when the protective member 19 is screwed and fastened to the outer case 2, positioning in the top direction 15 and the back direction 16 with respect to the outer case 2 can be easily achieved. Therefore, the user of the wristwatch 1 can easily position the protective member 19 relative to the outer case 2.
As shown in FIG. 7, by giving the protective member 19 a distinctive external appearance, the design of the wristwatch 1 can be improved. Also, when the user of the wristwatch 1 rotates the bezel 17, the user operates it by gripping the bezel 17 at the 6 o'clock direction 12 and the 12 o'clock direction 14. Therefore, the protective member 19 protects the outer case 2 and the bezel 17 without interfering with the rotation of the bezel 17. Second Embodiment The present embodiment differs from the first embodiment in that the shape of the protective member is different. Moreover, the same constituent elements as those in the first embodiment are denoted by the same reference symbols, and descriptions thereof will be omitted. As shown in FIGS. 8 through 14, a wristwatch 26 is equipped with an outer case 2. A protective member 27 is fastened to a side surface 2c of the outer case 2 in the 3 o'clock direction 11. The protective member 27 is fastened to the outer case 2 by two bolts 20. The protective member 27 covers a portion of the side surface 2c of the outer case 2 and a portion of the side surface 17b of the bezel 17. According to this configuration, because the wristwatch 26 is equipped with the protective member 27 that covers a portion of the side surface 2c of the outer case 2 and a portion of the side surface 17b of the bezel 17, the protective member 27 protects the outer case 2 and the bezel 17. Third Embodiment The present embodiment differs from the first embodiment in that the shape of the protective member is different. Moreover, the same constituent elements as those in the first embodiment are denoted by the same reference symbols, and descriptions thereof will be omitted. As shown in FIG. 15, a wristwatch 30 is equipped with an outer case 2. The protective member 31 is fastened to the side surface 2c of the outer case 2 on the 3 o'clock direction 11 side. The protective member 31 is fastened to the outer case 2 by a bolt 20 or the like. The protective member 31 covers a portion of the side surface 2c of the outer case 2 and a portion of the side surface 17b of the bezel 17.
According to this configuration, because the wristwatch 30 is equipped with the protective member 31 that covers a portion of the side surface 2c of the outer case 2 and a portion of the side surface 17b of the bezel 17, the protective member 31 protects the outer case 2 and the bezel 17. Fourth Embodiment In the first embodiment, the protective member 19 covers a portion of the side surface 2c of the outer case 2. The protective member 19 may also cover the entire side surface of the outer case 2. In that case, the protective member 19 can protect the entire side surface of the outer case 2. In the first embodiment, the protective member 19 covers a portion of the side surface 17b of the bezel 17. The protective member 19 may also cover the entire side surface 17b of the bezel 17. In that case, the protective member 19 protects the entire side surface of the bezel 17. In the first embodiment, the wristwatch 1 is described as an example of a diver's watch. However, the present disclosure can also be applied to quartz or mechanical wristwatches, or to wristwatches equipped with a solar battery. In addition, it can also be applied to wristwatches equipped with a pressure gauge or an inertial sensor. Furthermore, although the first embodiment describes a bezel that is rotatable, the present disclosure can also be applied to a fixed bezel that does not rotate. | 12,944 |
11860582 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The invention proposes to supplement the acoustic measurement of the amplitude with a measurement of the influence that the winding state of the watch has on the winder. This is because the winding angle of the oscillating mass (also called the dead angle) increases with the winding rate, since the barrel spring opposes the torque of the mass. The centre of gravity CG of the oscillating mass 10 is eccentric, and is located, relative to its axis of rotation, on a radial line which is called here the radial of the centre of mass RCM. If, in a simplified approach, the friction is neglected, the system of forces applied to the oscillating mass 10 boils down to the opposition between the return torque exerted by the barrel geartrain on the one hand, and the torque exerted by gravitation on the oscillating mass 10 on the other hand. When an automatic watch is disposed with the plane of the oscillating mass parallel to the field of gravity, the angle AM that this radial of the centre of mass RCM makes with the local vertical V is called the "dead angle". When the watch is unwound, and a clockwise torque R is applied thereto in this same plane to rewind it, this dead angle AM is very small: the straight edge 11 that an oscillating mass 10 generally includes remains almost horizontal, as shown in FIG. 1. On the other hand, when the watch is fully wound, and under the same conditions, and as visible in FIG. 2, the dead angle AM is considerably higher (for example 24° more for a standard movement ETA 2824, well known to the person skilled in the art and very widespread), since the torque of the barrel spring is maximum and opposes the torque of the oscillating mass 10: equilibrium is only possible at a large angle, where the gravity torque balances that coming from the barrel.
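The torque balance described above can be sketched numerically: neglecting friction, the dead angle AM satisfies m·g·r·sin(AM) = T, where T is the return torque reflected onto the oscillating mass 10 by the barrel geartrain. A minimal sketch follows; all numerical values (mass, centre-of-gravity radius, torques) are illustrative assumptions, not figures from this description:

```python
import math

def dead_angle_deg(t_return_nm: float, mass_kg: float, r_cg_m: float,
                   g: float = 9.81) -> float:
    """Equilibrium ('dead') angle AM of the oscillating mass, friction neglected:
    the gravity torque m*g*r*sin(AM) balances the return torque t_return_nm."""
    ratio = t_return_nm / (mass_kg * g * r_cg_m)
    if ratio >= 1.0:
        # Return torque exceeds the maximum gravity torque: no equilibrium below 90 deg.
        return 90.0
    return math.degrees(math.asin(ratio))

# Hypothetical values: 15 g oscillating mass, centre of gravity at 8 mm.
unwound = dead_angle_deg(t_return_nm=1e-5, mass_kg=0.015, r_cg_m=0.008)
wound = dead_angle_deg(t_return_nm=5e-4, mass_kg=0.015, r_cg_m=0.008)
print(f"dead angle unwound: {unwound:.1f} deg, fully wound: {wound:.1f} deg")
```

With these assumed torques the dead angle grows from under 1° to roughly 25°, on the order of the 24° increase cited for the ETA 2824 movement.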
In short, this change of angle has the effect of shifting the centre of mass of the entire watch, which has a measurable unbalance effect on the winder 100 on which the watch is placed, through a watch holder 1. Three non-limiting methods, which can be combined, are proposed for measuring this effect. Speed measurement is advantageous because it is an efficient and inexpensive method. The winder 100 is equipped with a direct current motor 21 which is not speed-controlled; only the supply voltage is constant (imposed by the algorithm). When the watch is unwound, the torque opposed by the watch is at the minimum, and the winder speed is at the maximum. When the watch is fully wound, the torque opposed by the watch is at the maximum, and the winder speed is at the minimum. FIG. 3 shows the evolution of the winder speed (in revolutions per second) as a function of the number of revolutions of the winder 100, during the complete winding of a watch. It can be seen that this speed of rotation decreases by approximately 0.8% when the watch is fully wound (after approximately 2000 winding revolutions). In this embodiment, the measurement of the winder 100 speed is simply carried out using a fixed optical sensor 31 and a mobile locator 32 integral with the rotating watch holder 1. Torque measurement is an efficient method but more expensive than the previous one. The torque opposing the oscillating mass 10 increases as the winding increases, until reaching a plateau when the watch is fully wound. The torque can be measured with a torque tester or torquemeter mounted on the watch holder. The advantage of a torquemeter is its high sensitivity. The measurement of the current injected into the motor of the winder is a cheap but delicate method, unless the averaging is carried out over a long enough time. The current of a direct current motor as used in the winder is proportional to its load, and therefore to the torque opposed by the rotor composed of the watch holder and the watch with its oscillating mass 10.
Measurements show that a winder equipped with an unwound watch consumes around 2 mA (at 1 V), with periodic variations that can reach +/−0.5 mA (or +/−25%) during one revolution of the watch holder. It can be shown that the average current should theoretically increase by only 40 μA when the watch is fully wound, that is to say an average increase of only 2% compared to the reference 2 mA. If the current measurement is averaged long enough (typically several revolutions, that is to say a few tens of seconds, which corresponds to a low-pass filter that cancels the periodic variations), this 2% increase in the average current becomes detectable above the noise. Thus, more particularly, the invention relates to a winding device 100 for an automatic watch with a mobile oscillating mass. This device 100 includes at least one watch holder 1, which is arranged to carry at least one automatic watch. The device 100 includes motorisation means 2 for driving, in particular at least in rotation, the at least one watch holder 1, and more particularly each watch holder 1 that it includes. According to the invention, the device 100 includes measuring means 3, which are arranged to measure the variation in the resistive torque which is opposed to the motorisation means 2 by a mobile equipment consisting of, on the one hand, all the watch holders 1 driven by the motorisation means 2, and on the other hand all the watches that all these same watch holders 1 carry, depending on the degree of winding of the watches. And these measuring means 3 include speed measuring means 4 to determine the speed and/or the variation in the speed of the motorisation means 2, and/or include torque measuring means 5 to determine the value of the torque and/or the variation in the torque at least at one watch holder 1, and/or include current measuring means 6 to determine the value of the current and/or the variation in the current at least at one electric motor 21 that the motorisation means 2 include.
More particularly, the measuring means 3 include such speed measuring means 4, which include fixed optical means 31 arranged to follow a mobile locator 32 that a watch holder 1 includes, and which are coupled with a time base 9 that the winding device 100 includes or with which the winding device 100 is interfaced. In an alternative, these optical means 31 are arranged to follow an oscillating mass 10 of at least one watch including a transparent back allowing the observation of the oscillating mass 10, or, more particularly, of each watch equipped with such a transparent back. Thus, more particularly, at least one watch holder 1 is arranged to make visible the oscillating mass 10 of each watch carrying a transparent back that it carries, and viewing means 33 are arranged to follow and/or determine the angular position of an oscillating mass 10 of a given watch between a dead angle corresponding to the unwound state of the watch and a limit winding angle corresponding to the fully wound state of the watch. And the measuring means 3 are then advantageously arranged to send a stop signal to the motorisation means 2 when the limit winding angle is reached, to avoid any unnecessary winding, and therefore any wear of the watch. More particularly, the motorisation means 2 include a direct current electric motor 21, which is not speed-controlled. More particularly, the measuring means 3 then include speed measuring means 4, which are arranged to send a stop signal to the motorisation means 2 when the speed of the motorisation means 2 is less, by a predetermined value, than the speed of the motorisation means 2 at the start of the cycle when at least one watch carried by at least one watch holder 1 is in an unwound state. More particularly, this predetermined value is comprised between 0.2% and 1.4%. Still more particularly, the predetermined value is comprised between 0.6% and 1.0%.
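The speed-based stop criterion described above (stop once the winder speed has fallen below the start-of-cycle speed by a predetermined fraction) can be sketched as follows. The 0.8% threshold is chosen from the 0.6%–1.0% band given above; the one-pulse-per-revolution timing model for the optical sensor 31 and locator 32 is an assumption:

```python
def speed_from_locator(period_s: float) -> float:
    """One optical pulse per watch-holder revolution: speed = 1 / pulse period."""
    return 1.0 / period_s

def should_stop(speed_now_rps: float, speed_start_rps: float,
                drop_fraction: float = 0.008) -> bool:
    """Stop signal for the motorisation means: True once the winder speed has
    dropped below the start-of-cycle (unwound) speed by drop_fraction.
    0.008 (0.8%) sits in the 0.6%-1.0% band given in the description."""
    return speed_now_rps <= speed_start_rps * (1.0 - drop_fraction)

start = speed_from_locator(0.500)  # 2.000 rev/s at the start of the cycle
now = speed_from_locator(0.505)    # ~1.980 rev/s, i.e. about 1% slower
print(should_stop(now, start))     # True: the drop exceeds 0.8%
```

A shorter pulse period (e.g. 0.502 s, a drop of about 0.4%) would not yet trigger the stop, which matches the plateau behaviour after roughly 2000 winding revolutions.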
More particularly, the measuring means 3 include torque measuring means 5, which are arranged to send a stop signal to the motorisation means 2 when the value of the measured torque is stabilised with a variation less than a predetermined threshold, such as 1.0% in a particular non-limiting variant. More particularly, the measuring means 3 include torque measuring means 5, which are arranged to determine the real angular position of the centre of mass of the mobile equipment mentioned above, to compare it with a theoretical angular position corresponding to the fully wound state of each watch, and are arranged to send a stop signal to the motorisation means 2 when these real and theoretical positions are equal. More particularly, the measuring means 3 include current measuring means 6 to determine the value of the current and/or the variation in the current at the motor that the motorisation means 2 include, in particular an electric motor 21, and which constitute torque measuring means 5. More particularly, the measuring means 3 include such current measuring means 6 to determine the value of the current and/or the variation in the current at the electric motor 21, and which are arranged to send a stop signal to the motorisation means 2 when the current consumption is, for a duration greater than 80 seconds, more than 4.0% higher than the consumption at the start of the cycle when at least one watch carried by at least one watch holder 1 is in an unwound state. More particularly, these measuring means 3 are arranged to send this signal when the current consumption is, for a duration greater than 40 seconds, more than 2.0% higher than the consumption at the start of the cycle when at least one watch carried by at least one watch holder 1 is in an unwound state.
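The current criterion above (average over several revolutions as a low-pass filter, then stop once the averaged consumption stays more than 2.0% above the start-of-cycle value for over 40 seconds) can be sketched as follows. The 1 Hz sampling rate and 50-sample window are illustrative assumptions, as is the simplified ripple model:

```python
from collections import deque

class CurrentWindingDetector:
    """Sketch of the current measuring means 6: a moving-average low-pass
    filter on the motor current, followed by a persistence check. A stop is
    raised once the averaged current exceeds the start-of-cycle average by
    `rise` for more than `hold_s` seconds (2%/40 s per the description)."""
    def __init__(self, i_start_ma: float, rise: float = 0.02, hold_s: float = 40.0,
                 window: int = 50, dt_s: float = 1.0):
        self.i_start = i_start_ma
        self.rise = rise
        self.hold_s = hold_s
        self.dt = dt_s
        self.samples = deque(maxlen=window)  # cancels the per-revolution ripple
        self.above_s = 0.0

    def update(self, i_ma: float) -> bool:
        self.samples.append(i_ma)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.i_start * (1.0 + self.rise):
            self.above_s += self.dt
        else:
            self.above_s = 0.0
        return self.above_s > self.hold_s  # True -> stop the motorisation means

det = CurrentWindingDetector(i_start_ma=2.0)
# Unwound: ~2 mA with +/-0.5 mA ripple; the average stays at 2 mA, no stop.
stops = [det.update(2.0 + (0.5 if k % 2 else -0.5)) for k in range(100)]
print(any(stops))  # False
# Fully wound: the average drifts up ~2.5% with the same ripple; once the
# averaged rise has persisted for more than 40 s, a stop is raised.
stops = [det.update(2.05 + (0.5 if k % 2 else -0.5)) for k in range(100)]
print(stops[-1])  # True
```

The deque-based moving average plays the role of the low-pass filter: the ±25% periodic ripple cancels over a whole number of revolutions, leaving the small 2% drift detectable.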
More particularly, the measuring means 3 are arranged to determine a difference in resistance according to the direction of rotation of a watch holder 1, and to impose a rotation of the watch holder 1 in the direction wherein it has the greatest resistance. This makes it possible to detect automatic watches which are designed for winding in one direction only, and for freewheeling movement in the other direction; thus each movement imparted to the watch holder 1 is effective since it is used for rewinding. More particularly, at least one watch holder 1 carries a single watch. Still more particularly, each watch holder 1 carries a single watch. More particularly, the winding device 100 includes a single watch holder 1. The invention has several major advantages, regardless of its embodiment:
- no need to install an air or contact microphone;
- independence from ambient noise, which generally constitutes a major obstacle to precise and reliable measurements;
- no need to install a second wireless-powered on-board electronic circuit at the watch holder;
- ease of speed measurement, with a very simple algorithm compared to that required for acoustic amplitude measurement;
- high resolution of speed or torque measurements, however with a potentially high noise;
- a relative measurement of the effect on the winder works with any automatic watch;
- these measurements allow the correct rewinding direction to be quickly determined. | 10,820 |
11860583 | EMBODIMENT(S) OF THE INVENTION The aim of the following detailed description is to describe a chronograph mechanism with flyback function comprising an "all or nothing" actuating mechanism according to a preferred embodiment of the present invention, provided as an illustrative and non-limiting example. More specifically, according to the embodiment shown and described, the chronograph mechanism may be intended to be integrated into a timepiece movement or, alternatively, may be combined with an existing timepiece calibre in the form of an additional module. It should be noted that a person skilled in the art may implement the actuating mechanism according to the invention in relation with other types of timepiece mechanisms without departing from the context of the invention as defined in the set of claims. Therefore, it may be possible, for example, to associate the actuating mechanism according to the present invention with a striking mechanism. In this case, the actuating mechanism would be used to load a spring driving the striking mechanism and actuate the striking mechanism only when the spring is sufficiently loaded. FIG. 1 shows a partial front view of a chronograph mechanism 1 having a flyback function, according to a preferred embodiment of the present invention. Generally, the chronograph mechanism 1 may have different known structures, such as, for example, of the shuttle or column-wheel type, without particularly affecting the operation of the actuating mechanism according to the invention. Therefore, the chronograph mechanism 1 will not be fully described in detail. The chronograph mechanism 1 in this case comprises, as an illustrative and non-limiting example, two counters arranged coaxially: a minute counter 2 and a seconds counter 4.
Each of the counters 2 and 4 is constituted by a wheel comprising a shaft 6, 8 (the minutes shaft 6 being hollow in order to define a passage for the seconds shaft 8), as well as a disc 10, 12 and a zero-reset cam 14, 16 secured to the corresponding shaft. Each disc 10, 12 is toothed in order to drive the corresponding counter. Each shaft 6, 8 is intended to carry a display member, for displaying the minutes and the seconds respectively. Conventionally, the chronograph mechanism 1 is provided with a first START/STOP control lever 100 for starting and stopping the driving of the counters 2 and 4 in order to measure or stop measuring time, via at least one coupling 102. The chronograph mechanism 1 also comprises a brake 104 arranged on a frame element so as to be able to pivot typically between two positions, a first inactive START position in which it is situated at a distance from the counter 4, and a second STOP position in which it acts on the disc 12 in order to lock the counter 4 and allow a measured time to be read. Typically, a jumper 108 can lock the minute counter. The chronograph mechanism 1 therefore comprises a conventional device arranged to act on the brake 104 and control its position depending on the current state of the chronograph mechanism (START/STOP), in this case a column-wheel 110 that also cooperates with the coupling 102. Moreover, the chronograph mechanism 1 according to the present invention is also provided with a second zero-reset control lever 22. The second control lever 22 is secured to an actuating surface 23, at a first of its ends, the actuating surface being intended to receive pressure applied by a user to a zero-reset push button of the corresponding timepiece, so as to pivot in the anti-clockwise direction of rotation in the view shown in FIG. 1, about the axis of rotation 24.
At its other end, the second control lever 22 carries a retractable pawl 26 intended to cooperate with a toothing 28 of a control member 30, in this case in the form of an additional column-wheel comprising as many columns 32 as the toothing 28 comprises teeth, in this case N=6, as an illustrative and non-limiting example. The additional column-wheel is pivoted on a frame element (not shown) of the chronograph mechanism 1, or of the corresponding timepiece movement, and allows the state of different components involved in resetting the counters 2 and 4 to zero to be controlled, as described below. The second control lever 22 comprises a central and rigid main portion 34, from which two elastic portions 36, 38 extend, to either side, one of which 36 defines an elastic return spring for returning the control lever 22 towards its inactive position, in relation with a fixed stud 40, and the other 38 is capable of acting on a pin 42 secured to zero-reset hammers 44, in order to make the latter pivot in a clockwise direction of rotation in the view shown in the figure, as disclosed below. It should be noted that the zero-reset hammers 44 are in this instance placed one on top of the other, being secured to each other, which is not visible in the front view, each of them being associated with one of the zero-reset cams 14, 16. The travel of the second control lever 22 is set in such a way that its pawl 26 acts on the additional column-wheel in order to rotate it in the anti-clockwise direction of rotation by a little less than one full pitch. A jumper 46 is arranged on a frame element to cooperate with the toothing 28 of the additional column-wheel and complete the current pitch, after the end of the action of the pawl 26, and to keep the additional column-wheel in its stable inactive position, as shown in FIG. 1.
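The indexing behaviour described above (the pawl 26 rotating the additional column-wheel by a little less than one full pitch, with the jumper 46 completing the pitch to the next stable position) can be modelled as a simple detent mechanism with N=6 stable positions per turn. The 0.9-pitch pawl travel is an illustrative assumption, not a figure from the description:

```python
import math

N_COLUMNS = 6
PITCH_DEG = 360.0 / N_COLUMNS  # 60 degrees between stable positions

def press(angle_deg: float, pawl_travel_deg: float = 0.9 * PITCH_DEG) -> float:
    """One actuation of the control lever 22: the pawl 26 advances the
    additional column-wheel by a little less than one full pitch, and the
    jumper 46 snaps it forward to the next stable position."""
    advanced = angle_deg + pawl_travel_deg
    # The jumper completes the current pitch (next multiple of PITCH_DEG).
    return math.ceil(advanced / PITCH_DEG) * PITCH_DEG % 360.0

a = 0.0
for _ in range(N_COLUMNS):
    a = press(a)
print(a)  # 0.0: six presses return the column-wheel to its initial position
```

This captures why the mechanism is "all or nothing" at the indexing level: regardless of the exact pawl travel short of a full pitch, each press ends on exactly one of the six stable positions held by the jumper 46.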
FIG. 1 also shows that the mechanism according to the invention advantageously comprises a lever 18, pivoted about an axis of rotation 20, and carrying a beak 48 arranged to cooperate with the columns 32 of the additional column-wheel and allow the latter to control the angular orientation of the lever 18, between an inactive position in which it leaves the chronograph brake 104 free to move depending on the START or STOP state of the mechanism, and a zero-reset position in which it acts on the chronograph brake 104 to move it away from the counter 4, if necessary. The cooperation between the beak 48 and the additional column-wheel helps ensure that the brake 104 is set apart from the counter 4 during zero-resetting operations, whether the chronograph mechanism 1 is in the START mode (brake already set apart from the counter) or the STOP mode (brake in contact with the counter). An elastic return member 112 is provided to act on the lever 18, to tend to make it pivot in the anti-clockwise direction of rotation in the view shown in the figures, i.e., so that it moves towards the additional column-wheel. The elastic return member 112 is made here in one piece with an additional lever 106 which is designed to play a role when the minute counter 2 is reset to zero. Indeed, an appropriate rotation of the levers 18 and 106 leads, in particular, to the lifting of an actuating beak engaged with the minute counter 2, during zero-reset operations, this actuating beak being otherwise responsible for driving the minute counter 2 as a function of the movements of the seconds counter 4 during time measuring operations. Moreover, the operating mode of the lever 18 can be used to act on the kinematic link connecting the chronograph counter 4 to a drive wheel of the timepiece movement, during zero-resetting operations.
Indeed, generally, the coupling comprises a transmission member carried by a coupling lever that is able to move between two positions, a coupled position and an uncoupled position, allowing such a kinematic link to be established or broken depending on the START or STOP operating mode of the chronograph mechanism. However, it is necessary to break this kinematic link, in the START mode, in order to be able, in particular, to reset the seconds counter 4 to zero. The lever 18 may therefore also be arranged to move the transmission member away from the chronograph counter 4 when the chronograph brake 104 is set apart from the counter 4 in order to reset the latter to zero. To this end, the transmission member 114 could, in particular, be connected in a novel way to the coupling lever 116 via an elastically deformable connection member 118, which would thus be deformed under the effect of the action of the lever 18. Thanks to this construction, the transmission member 114 can be moved away from the disc 12 of the seconds counter 4, by way of a deformation of the connection member 118, even when the chronograph mechanism is in its START functioning mode. Hence, a change in the orientation of the coupling lever 116 is not necessary during zero-resetting operations. It can also be seen in FIG. 1 that the zero-reset hammers 44 are secured to a beak 50 arranged to cooperate with the columns 32 of the additional column-wheel and control the angular orientation of the zero-reset hammers 44, between their neutral or inactive position, as shown in FIG. 1, and their active position (as shown in FIG. 2e). The operation of the zero-reset mechanism will now be set out in detail in relation with FIGS. 2a to 2f, which show different successive steps thereof. FIG. 2a shows the configuration of the actuating mechanism according to the invention when the chronograph mechanism 1 is in either of its functioning modes, START or STOP.
In this situation, the zero-reset control lever22is not in contact with the additional column-wheel, the angular orientation of which is kept fixed by its jumper46. Generally, the lever18is arranged so as to allow an uncoupling or isolation of the chronograph counters. It always cooperates with the additional lever106which is prestressed under the effect of the action of the elastic return member112. When the lever18is actuated, it cooperates with the additional lever106to isolate the minutes counter2and allow the zero-resetting of the latter, insofar as it is then subject only to the action of its jumper108. In the case where the chronograph mechanism is in its START functioning mode, the column-wheel110moves the brake104away from the seconds counter4and the lever18thus does not cooperate with the brake104when it is actuated. However, in this case, the lever18acts on the transmission member114to move it away from the seconds counter4. In the case where the chronograph mechanism is in its STOP functioning mode, the column-wheel110acts on the coupling102so as to move the transmission member114away from the seconds counter4. In this case, when the lever18is actuated, it does not cooperate with the transmission member114but does cooperate with the brake104to move it away from the seconds counter4. Referring back toFIG.2a, it appears that the beak48of the lever18is situated in a recess between two columns32of the additional column-wheel. The beak50of the zero-reset hammers44is kept outside of the external perimeter defined by the columns32of the additional column-wheel by the control lever22, thus keeping the zero-reset hammers at a distance from the zero-reset cams14,16. When a user begins to apply pressure to the zero-reset control lever22(via the push button, which is not shown), the latter begins to pivot in the anti-clockwise direction of rotation in the view shown in the figures, as shown inFIG.2b.
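The mode-dependent behaviour just described, where lever18 lifts whichever component the column-wheel110 has not already cleared, can be summarised in a short sketch. This is illustrative only; the function and key names are hypothetical and not taken from the patent.

```python
def isolate_seconds_counter(mode: str) -> dict:
    """Return which component lever 18 must lift so that the seconds
    counter 4 is freed for zero-resetting, given the chronograph mode.

    In START mode the column-wheel 110 already holds the brake 104 away
    from the counter, so lever 18 acts on the transmission member 114;
    in STOP mode the coupling has already moved the transmission member
    114 away, so lever 18 acts on the brake 104.
    """
    if mode == "START":
        return {"lever_18_lifts": "transmission member 114",
                "already_clear": "brake 104"}
    if mode == "STOP":
        return {"lever_18_lifts": "brake 104",
                "already_clear": "transmission member 114"}
    raise ValueError(f"unknown chronograph mode: {mode}")
```

Either way, both the brake104 and the transmission member114 end up clear of the seconds counter4, which is the condition for zero-resetting it.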
When the control lever22pivots, the pawl26comes into contact with the toothing28of the additional column-wheel. At the same time, the elastic portions36and38of the control lever22start to tension and to store mechanical energy, the elastic portion36bearing against the fixed stud40, and the elastic portion38bearing against the pin42secured to the zero-reset hammers44, the latter being kept fixed in their inactive position as a consequence of the abutment of the beak50against a column32of the additional column-wheel. After the control lever22has completed a certain amount of travel, the additional column-wheel also begins to rotate, as shown inFIG.2c. At this point, the column32against which the beak50of the zero-reset hammers44is still bearing has simply moved while keeping the zero-reset hammers44locked in their inactive position, while another column32has moved towards the beak48of the lever18. When the control lever22pivots further, as shown inFIG.2d, the lever18is lifted under the effect of the action of the column32that has moved towards its beak48. The beak50of the zero-reset hammers44is still bearing against the same column32but is about to be positioned facing a recess between this column and the next. Meanwhile, the elastic portions36and38of the control lever22have substantially accumulated a maximum amount of mechanical energy by being deformed, the control lever22being virtually at the end of its travel. A slight additional pivoting of the control lever22releases the beak50of the zero-reset hammers44, as shown inFIG.2e, because the column32that was locking them has been completely moved away from the beak50. At the same time, the pawl26has been released from the toothing28of the additional column-wheel. 
The elastic portion38loaded to its maximum can then release its mechanical energy, pivoting the zero-reset hammers44in the clockwise direction of rotation in the view shown inFIG.2e, until they abut against the zero-reset cams14,16, which in turn rotate to a predefined angular orientation, associated with the display members displaying the seconds and minutes of the measured time being positioned at0, in a conventional manner. At the same time, the column32that acts on the lever18has moved opposite the beak48, while continuing to act on it. In this configuration, it can be seen that the additional column-wheel is in an unstable orientation, its jumper46being under tension, bearing on a single tooth of the toothing28. At the same time, no additional rotation of the additional column-wheel is possible in the anti-clockwise direction of rotation in the view shown inFIG.2e, due to the positioning of the zero-reset hammers44, the beak50of which defines an abutment for the next column32. As long as the user keeps applying pressure to the control lever22, the actuating mechanism remains in the state shown inFIG.2e, time measurement being suspended with the display members displaying the measured times positioned at0. When the user releases the control lever22, it is returned to its inactive position, as shown inFIG.2f, under the effect of the action of its elastic portion36on the stud40. By rotating in the clockwise direction of rotation, the control lever22comes into contact with the pin42of the zero-reset hammers44with its rigid central portion34, at the same time allowing the elastic portion38to release its tension. The elastic portion36has elastic properties such that its action on the control lever22, intended to rotate it in the clockwise direction of rotation in the view shown inFIG.2f, also lifts the zero-reset hammers44in order to return them to their inactive position.
When it rotates further in the clockwise direction in the configuration shown inFIG.2f, the control lever22acts on the zero-reset hammers44to such an extent that their beak50is released from the recess between two columns32in which it was situated, and thus releases the additional column-wheel which can complete the step started under the effect of the action of the pawl26, and assume a new stable position, as shown inFIG.2a, under the effect of the driving action of its jumper46. By rotating further, the additional column-wheel also releases the beak48of the lever18, which falls back into a recess between two columns32. Hence, the chronograph mechanism switches back to its configuration as dictated by the orientation of the column-wheel110, meaning that the brake104falls back against the disc12of the seconds counter4if the chronograph mechanism is in its STOP functioning mode, or meaning that the transmission member114falls back against the disc12if the chronograph mechanism is in its START functioning mode. Moreover, the pawl26of the control lever22retracts in order to switch from the configuration shown inFIG.2fto that ofFIG.2a. Thus, when the user releases the control lever22, the actuating mechanism according to the invention switches back from the configuration shown inFIG.2fto that shown inFIG.2a, and time measurement restarts from 0 if the zero-resetting operation was a flyback zero-resetting operation, in other words, if the chronograph mechanism was in its START functioning mode. Otherwise, if the chronograph mechanism was in its STOP functioning mode, all the chronograph counters remain in their zero position.
It should be noted that, if the user releases the control lever22before the zero-reset hammers44have fallen against the zero-reset cams14,16, the tension in the two elastic portions36,38of the control lever22can be released and the additional column-wheel returns to its stable inactive position under the effect of the action of its jumper46, by rotating in the clockwise direction of rotation. FIGS.3ato3hshow pairs of front views of a first side and of the opposing side respectively of an actuating mechanism according to an alternative embodiment of the present invention, in four successive operating phases. More specifically, this actuating mechanism is similar to that which has been disclosed in relation withFIGS.2ato2f, its components being modified only to a minor extent in reference to the description above. Therefore, the same numerical references are used inFIGS.3ato3hto identify the components described above, in order to facilitate the understanding thereof. FIGS.3aand3bshow the configuration of the actuating mechanism when the chronograph mechanism1is in either of its functioning modes, START or STOP. In this situation, the zero-reset control lever22is not in contact with the additional column-wheel, the angular orientation of which is kept fixed by its jumper46. The beak48of the lever18is situated in a recess between two columns32of the additional column-wheel. The beak50of the zero-reset hammers44is kept outside of the external perimeter defined by the columns32of the additional column-wheel by the control lever22, thus keeping the zero-reset hammers at a distance from the zero-reset cams14,16. When a user begins to apply pressure to the zero-reset control lever22, the latter begins to pivot in the anti-clockwise direction of rotation in the view shown inFIG.3a, as shown inFIGS.3cand3d. When the control lever22pivots, the pawl26comes into contact with the toothing28of the additional column-wheel. 
At the same time, the elastic portions36and38of the control lever22start to tension and to store mechanical energy, the elastic portion36bearing against the fixed stud40, and the elastic portion38bearing against the pin42secured to the zero-reset hammers44, the latter being kept fixed in their inactive position as a consequence of the abutment of the beak50against a column32of the additional column-wheel. By pivoting, the control lever22rotates the additional column-wheel about its axis, which can be seen in particular inFIG.3c, which shows the movement performed by the column32on which the beak50of the zero-reset hammers44rests from the inactive configuration shown inFIG.3a. At the same time, a new column32of the additional column-wheel is positioned behind the beak48of the lever18. It can be seen more particularly in the view shown inFIG.3dthat a tooth of the toothing28of the additional column-wheel has pushed the jumper46back to its maximum load position, the configuration shown representing an instant preceding the tipping point, beyond which the jumper46will once more be able to move towards the additional column-wheel, releasing the mechanical energy stored during the initial phase. It is therefore understood that, from the configuration shown inFIGS.3cand3d, any additional rotation of the additional column-wheel will allow the jumper46to apply a driving force to its toothing28, as shown inFIGS.3eand3f. Indeed, these figures show a configuration in which the control lever22has continued to rotate slightly, passing the tipping point, and the additional column-wheel has then been driven by the driving action of the jumper46to such an extent that it is out of reach of the pawl26of the control lever22.
It can be seen in the view shown inFIG.3ethat the force applied by the jumper46to the toothing28makes it possible to lift the beak48of the lever18, while the zero-reset hammers44are still kept in the raised or neutral position by their beak50resting on a column32. It can be seen in the view shown inFIG.3fthat, in this configuration, the jumper46still acts on the toothing28to continue to rotate the additional column-wheel until it reaches the orientation shown inFIGS.3gand3h. The last rotational movement of the additional column-wheel, under the effect of the driving action of its jumper46, has allowed the lever18to be positioned in such a way as to isolate all the chronograph counters, by lifting either the brake104or the transmission member114depending on the START or STOP functioning mode of the chronograph mechanism, and has allowed the beak50of the zero-reset hammers44to fall between two columns32, under the effect of the energy accumulated by the elastic portion38of the control lever22being released. In this configuration, the jumper46has not yet released all the stored energy, but the additional column-wheel cannot rotate any further because the beak50is located on the path of one of its columns32. When the control lever22is released, it can pivot towards its inactive position, in the clockwise direction of rotation in the view shown inFIG.3g, under the effect of the energy stored by its elastic portion36being released, and act on the zero-reset hammers44in order to bring them towards their raised position, as already described in relation with the first embodiment. In doing so, the beak50is removed from its lowered position and releases the additional column-wheel, which can then return to its first state under the effect of the residual driving action of its jumper46, as shown inFIG.3h. The actuating mechanism then returns to its initial configuration, as shown inFIGS.3aand3b. 
Therefore, unlike the operation of the embodiment described in relation withFIGS.2ato2f, the driving force now allowing the lever18to be raised is no longer that provided directly by the user via the control lever22, but originates from the jumper46(previously loaded by the user via the control lever22). This means that, in the case of the first embodiment, the user acts simultaneously on four springs, namely the two elastic portions36,38of the control lever22, the jumper46and the elastic return member112of the lever18whereas, in the context of the second embodiment, shown inFIGS.3ato3h, the user only acts on three springs, some of the energy provided to the jumper46being used indirectly, at a later time, to neutralize the action of the elastic return member112of the lever18. Moreover, it should also be noted that the edges of the beak50are slightly rounded in the context of the second embodiment shown inFIGS.3ato3h. Indeed, advantageously, the corresponding rounded edges and the slope of the beak48of the lever18are designed so as to accompany the movement of the lever18, and consequently of the brake104or the transmission member114, when the user releases the zero-resetting push button, giving that movement a "dragging" rather than instantaneous character, unlike the first embodiment ofFIGS.2ato2f. The risk of the seconds hand of the chronograph jumping when restarting it after a flyback zero-reset can therefore be reduced or indeed eliminated. As a result of the features that have just been disclosed, an "all or nothing" actuating mechanism is obtained that has a flexible structure and offers precise and reliable operation. As already indicated above, the sensation felt by the user when actuating this mechanism is comfortable because the different springs involved (the two elastic portions36,38and the jumper46, and indeed the elastic return member112) can be produced in such a way as to present substantially linear resistance over the entire travel of the control lever22.
Moreover, the user very clearly feels the instant when the zero-reset hammers44are released, also ensuring excellent feedback. As indicated above, although the “all or nothing” actuating mechanism according to the preferred embodiment of the invention, as shown and described, comprises a control lever acting on a column-wheel and on zero-reset hammers of a chronograph mechanism, other embodiments may be considered without departing from the context of the invention as defined by the set of claims, and a person skilled in the art may design the control lever such that it acts on a control member of a different nature to a column-wheel and on an actuating lever other than one or more zero-reset hammers. As a non-limiting example, the actuating mechanism according to the invention may be integrated into a striking mechanism of a timepiece movement, the control lever then being arranged to load a striking mechanism spring and to activate the striking mechanism in response to pressure applied by a user allowing the control lever to move up to the end of its travel. Generally, the implementation of the present invention is not limited to the precise geometry of the different components of the mechanism as shown and described. Indeed, a person skilled in the art will encounter no particular difficulty in adapting the present teaching to the implementation of an actuating mechanism that has the features of the present invention, in which the components may have different shapes and layouts to those which are described and shown. Therefore, for example, a single chronograph counter or two non-axial chronograph counters could be provided, the elastic portions36,38could be replaced with springs separate from the control lever22. 
One could also provide that the transmission member114and the brake104be driven by the falling of the hammers, as in a conventional chronograph mechanism, but this would require an increased angular travel of the hammers to make it possible for the mechanism to be uncoupled before the zero-reset cams are actuated.
11860584

DETAILED DESCRIPTION System1000for use in supporting a smartwatch is shown inFIG.1. System1000can include a manager system110having an associated data repository108, smartwatches120A-120Z, other user equipment (UE devices130A-130Z), and social media system140. Manager system110, smartwatches120A-120Z, other UE devices130A-130Z, and social media system140can be in communication with one another via network190. System1000can include numerous devices which can be computing node based devices connected by network190. Network190can be a physical network and/or a virtual network. A physical network can be, for example, a physical telecommunications network connecting numerous computing nodes or systems such as computer servers and computer clients. A virtual network can, for example, combine numerous physical networks or parts thereof into a logical virtual network. In another example, numerous virtual networks can be defined over a single physical network. In one embodiment, manager system110can be external to smartwatches120A-120Z, other UE devices130A-130Z, and social media system140. According to one embodiment, manager system110can be collocated with one or more of smartwatches120A-120Z, other UE devices130A-130Z, and/or social media system140. Smartwatches120A-120Z can be particularly configured smartwatches as set forth herein that can be configured to permit regular air flow to the wrist of a user wearing a smartwatch in the wrist area where the watch is worn. Each of the different smartwatches120A-120Z can be associated to a different user. Each of the different UE devices130A-130Z can be associated to a different user. Regarding UE devices130A-130Z, a UE device of one or more client UE device130A-130Z, in one embodiment, can be a computing node device provided by a client computer, e.g., a mobile device, e.g. a smartphone or tablet, a laptop, smartwatch or PC that runs one or more program, e.g.
including a web browser for opening and viewing web pages. Data repository108of manager system110can store various data. In users area2121, data repository108can store data on users of system1000. System1000can be configured so that when a user registers a first smartwatch of the user, system1000assigns a universally unique identifier (UUID) to the user. A user can specify permissions to system1000when registering as a registered user of system1000. Permissions can include, e.g., permissions to permit manager system110to collect data from the new smartwatch being registered, but also other UE devices of the user such as, e.g., smartphones, televisions, tablets, personal computers, and the like. In users area2121, data repository108, for each user of system1000, can store historical data. Historical data can include, e.g., historical sensor data collected from sensors disposed in smartwatches of a user and other UE devices of a user. Historical data of a user can also include e.g. applications usage data respecting applications that are used by a user that are running on a smartwatch or another UE device of a user. Historical data of users area2121can also include historical social media data of a user for respective users of system1000. In models area2122, data repository108can store predictive models of system1000that have been trained using historical data of users area2121as training data. Predictive models stored in models area2122can be iteratively trained with training datasets defined by historical data stored in users area2121. Manager system110can query trained predictive models stored in models area2122for return of action decisions. Action decisions returned by manager system110can include, e.g., action decisions to control a display screen or to control the smartwatch's impact on health level of a user. In decision data structures area2123, data repository108can store one or more decision data structure for return of action decisions.
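The registration flow described above, in which system1000 assigns a UUID to a newly registered user and records the permissions the user grants, can be sketched as follows. This is illustrative only; the function name, field names, and permission labels are hypothetical, not taken from the patent.

```python
import uuid

def register_user(permissions, users_area):
    """Assign a UUID to a new user and store the granted permissions
    (e.g., collection of smartwatch sensor data and other UE device
    data) together with an empty historical-data record."""
    user_id = str(uuid.uuid4())
    users_area[user_id] = {
        "permissions": set(permissions),
        "historical_data": [],  # sensor data, app usage, social media data
    }
    return user_id

# Usage: register a user who permits smartwatch and other UE device collection.
users_area = {}
uid = register_user({"smartwatch_sensors", "ue_devices"}, users_area)
```

Later data collection would then be gated on membership in the stored permission set.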
Decision data structures area2123can include, e.g., decision tables and decision trees. Manager system110can run various processes. Manager system110running data collection process111can include manager system110collecting data from one or more data source. The one or more data source can include, e.g., smartwatches such as smartwatches120A-120Z, other UE devices130A-130Z, and/or social media system140. When collecting data from a UE device such as a smartwatch or other UE device130A-130Z, manager system110can collect, e.g., sensor output data and/or application usage data. Application usage data can refer to data provided by one or more running application of a UE device such as a smartwatch or other UE device. Manager system110running screen management process112can control a display screen of smartwatches120A-120Z. According to one embodiment, a display screen of a smartwatch herein can have a plurality of operating modes. According to one embodiment, a display screen herein can include a rollable display screen that is retractable into a chamber. According to one embodiment, a smartwatch herein can contain a rollable display screen. According to a hidden display screen configuration herein, a rollable display screen herein can include an entirety of a rollable display screen contained within one or more display screen chamber (cylinder) with no portion of the display screen viewable by a user. According to a standard display screen configuration, a rollable display herein can extend to define an exposed viewing area dimension according to an average sized smartphone. According to a standard display screen configuration, a rollable display screen herein can emulate the appearance of a conventional smartwatch. 
According to an extended display screen configuration herein, a rollable display screen can be controlled to exhibit an exposed viewing area dimension greater than an exposed viewing area dimension exhibited in a standard display screen operating configuration. According to a limited display screen configuration herein, a rollable display screen can be controlled to exhibit a viewable area having an exposed viewing area dimension less than an exposed viewing area dimension exhibited in a standard display screen operating configuration. Display screen121can be provided using rollable display technology. For configuring display121as a rollable display, display screen121can be provided using a flexible substrate. A flexible substrate can be formed of e.g., polyimide (PI), polycarbonate (PC), polyethylene naphthalate (PEN), cyclic olefin polymer (COP), polyethylene terephthalate (PET), polynorbornene (PNB), or polyethersulfone (PES). Chamber122A can include a first roller attaching to a first display screen segment and a first driver motor configured to drive the first display screen segment. Chamber122B can include a second roller attaching to a second display screen segment and a second driver motor configured to drive the second display screen segment. Manager system110, for providing a hidden display screen configuration, can command the first drive motor to wind the first display screen segment entirely into chamber122A, and can command the second drive motor to wind the second display screen segment entirely into chamber122B. Manager system110running screen management process112can control the viewing area dimension of an exposed display screen herein. Manager system110running screen management process112can predict a user's usage of a display screen and can control a display screen so that an exposed viewing area dimension of a display screen matches a predicted viewing usage of a user.
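The four display screen configurations and the paired chamber motors described above can be sketched as a small model. This is illustrative only: the enum names, the exposed-area fractions, and the motor command format are assumptions for the sketch, not values from the patent.

```python
from enum import Enum

class ScreenConfig(Enum):
    HIDDEN = 0    # display fully wound into chambers 122A/122B
    LIMITED = 1   # exposed area smaller than standard
    STANDARD = 2  # baseline exposed viewing area
    EXTENDED = 3  # exposed area greater than standard

# Hypothetical exposed-area fractions relative to the standard
# configuration (0.0 = fully hidden, 1.0 = standard).
EXPOSED_FRACTION = {
    ScreenConfig.HIDDEN: 0.0,
    ScreenConfig.LIMITED: 0.5,
    ScreenConfig.STANDARD: 1.0,
    ScreenConfig.EXTENDED: 1.5,
}

def motor_commands(config):
    """Commands for the two drive motors: each unwinds its display
    screen segment by half the requested total exposed extent, so the
    combined exposed area matches the requested configuration."""
    extent = EXPOSED_FRACTION[config] / 2
    return [("motor_122A", extent), ("motor_122B", extent)]
```

For the hidden configuration, both motors are commanded to a zero extent, i.e., each segment is wound entirely into its chamber, matching the behaviour described above.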
By matching a display screen configuration to predicted viewing usage, a user need not implement user interface controls for controlling the configuration. By matching a display screen to predicted viewing usage, manager system110can regularly control a display screen to be in a hidden or limited display screen configuration and increase airflow to a wrist area of a user wearing a smartwatch. For predicting a user's usage of a display screen, manager system110can query one or more predictive model that predicts the user's usage of a display screen. The one or more predictive model can be trained with training data defined by historical data of the user and in some scenarios the training data can comprise data of other users of system1000. Where a predictive model is trained with training data associated to a user other than a current user, the predictive model can be regarded to be trained with crowdsourced training data stored in data repository108. Manager system110running health management process113can include manager system110controlling an impact of a smartwatch on a health level of a user. Manager system110running health management process113can include manager system110controlling microbumps of smartwatch120A. Smartwatch120A, as set forth more fully herein, can include a plurality of microbumps disposed on a wristband of smartwatch120A. Smartwatches120A-120Z herein can be configured so that a wristband of a smartwatch, e.g., smartwatch120A, makes limited contact with skin in a wrist area of a user that wears a smartwatch. A limited contacting relationship of the wristband to the wrist of a user can permit regular flow of air into the wrist area of a user. The airflow into a wrist area of a user can be further facilitated by the design of a display screen of a smartwatch as set forth herein, including a rollable display which can be rolled into an undisplayed state to maximize airflow into the wrist area of a user.
Manager system110running health management process113, according to one embodiment, can control the activation states of microbumps of a wristband of a smartwatch of smartwatches120A-120Z. Manager system110running health management process113, according to one embodiment, can control activation states of microbumps of a wristband so that at any given time, only a subset of microbumps is in an active state, resulting in limited contact of the wristband to a user's skin. Manager system110running health management process113can include manager system110controlling a wristband so that the subset of microbumps that are in an active state at any given time changes through time. For example, at a first time period, a first set of microbumps of smartwatch120A can be active, and at a next successive time period, the second subset of microbumps can be active. Thus, there can be a reduced or no area of a user's wrist that is persistently contacted by a wristband of a smartwatch during the course of time that the user wears the smartwatch. The contact between a wristband and the skin of a user's wrist can improve sensing operations. For example, various humidity, temperature, and ultrasound imaging techniques can be improved by contact sensing. Manager system110running health management process113can include manager system110using various types of historical data such as data collected from, e.g., humidity sensors, ultrasound sensors, temperature sensors, accelerometer sensors and/or camera image sensors. Manager system110running health management process113, according to one embodiment, can include manager system110applying a multifactor formula. Manager system110running health management process113, according to one embodiment, can include manager system110querying one or more predictive model that is trained with use of training datasets defined by historical data stored in user's area2121of data repository108. 
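The time-varying subset activation described above, with a first set of microbumps active during a first period and a different subset active during the next, can be sketched as a simple rotation schedule. This is illustrative only; the bump count, subset size, and period indexing are assumptions, not values from the patent.

```python
def active_microbumps(period_index, n_bumps=24, subset_size=6):
    """Return the indices of the microbumps active during the given
    time period. The active subset shifts by subset_size each period
    (wrapping around the band), so no skin area is contacted
    persistently while the watch is worn."""
    start = (period_index * subset_size) % n_bumps
    return [(start + i) % n_bumps for i in range(subset_size)]
```

With these example values, consecutive periods activate disjoint subsets, and the schedule cycles through the whole band before repeating.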
Manager system110running natural language processing (NLP) process114can include manager system110running NLP process114to process data for preparation of records that are stored in data repository108. Manager system110can run NLP process114for determining one or more NLP output parameter of a message. NLP process114can include one or more of a topic classification process that determines topics of messages and output one or more topic NLP output parameter, a sentiment analysis process which determines the sentiment parameter for a message, e.g. polar sentiment NLP output parameters, "negative," "positive," and/or non-polar NLP output sentiment parameters, e.g. "anger," "disgust," "fear," "joy," and/or "sadness" or other classification process for output of one or more other NLP output parameters e.g. one or more "social tendency" NLP output parameter or one or more "writing style" NLP output parameter. By running of NLP process114, manager system110can perform a number of processes including one or more of (a) topic classification and output of one or more topic NLP output parameter for a received message, (b) sentiment classification and output of one or more sentiment NLP output parameter for a received message, or (c) other NLP classifications and output of one or more other NLP output parameter for the received message. Topic analysis for topic classification and output of NLP output parameters can include topic segmentation to identify several topics within a message. Topic analysis can apply a variety of technologies, e.g., one or more of Hidden Markov model (HMM), lexical chains, passage similarities using word co-occurrence, topic modeling, or clustering. Sentiment analysis for sentiment classification and output of one or more sentiment NLP parameter can determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a document.
The attitude may be the author's judgment or evaluation, affective state (the emotional state of the author when writing), or the intended emotional communication (emotional effect the author wishes to have on the reader). In one embodiment, sentiment analysis can classify the polarity of a given text as to whether an expressed opinion is positive, negative, or neutral. Advanced sentiment classification can classify beyond a polarity of a given text. Advanced sentiment classification can classify emotional states as sentiment classifications. Sentiment classifications can include the classification of "anger," "disgust," "fear," "joy," and "sadness." Manager system110running NLP process114can include manager system110returning NLP output parameters in addition to those specifying topics and sentiments, e.g., can provide sentence segmentation tags, and part of speech tags. Manager system110can use sentence segmentation parameters to determine, e.g., that an action topic and an entity topic are referenced in a common sentence, for example. Manager system110running machine learning process115can include manager system110iteratively training predictive models stored in models area2122of data repository108. Manager system110can be configured so that if new data is collected for storage into data repository108, the newly stored data is concurrently utilized as training data for training various predictive models stored in models area2122. Accordingly, models of models area2122can be iteratively trained with new training data so that the predictive models remain current and capable of providing predictions based on the latest behaviors exhibited by users. Social media system140can include a collection of files, including for example, HTML files, CSS files, image files, and JavaScript files.
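A minimal sketch of the positive/negative/neutral polarity output described for the sentiment analysis of NLP process114 might look as follows. This is a toy lexicon-based scorer for illustration only; the actual process would rely on trained classifiers, and the word lists here are invented.

```python
# Hypothetical mini-lexicons; a real system would use a trained model.
POSITIVE = {"joy", "great", "love", "happy", "good"}
NEGATIVE = {"anger", "disgust", "fear", "sad", "bad"}

def polarity(message: str) -> str:
    """Classify a message as "positive", "negative", or "neutral" by
    counting lexicon hits, mirroring the polar sentiment NLP output
    parameters described above."""
    words = message.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Non-polar outputs such as "anger" or "joy" would be produced analogously, by scoring against per-emotion lexicons or a multi-class model rather than a single polarity axis.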
Social media system140can be a social website such as FACEBOOK® (Facebook is a registered trademark of Facebook, Inc.), TWITTER® (Twitter is a registered trademark of Twitter, Inc.), LINKEDIN® (LinkedIn is a registered trademark of LinkedIn Corporation), or INSTAGRAM® (Instagram is a registered trademark of Instagram, LLC). Computer implemented social networks incorporate messaging systems that are capable of receiving and transmitting messages to client computers of participant users of the messaging systems. Messaging systems can also be incorporated in systems that have minimal or no social network attributes. A messaging system can be provided by a short message system (SMS) text message delivery service of a mobile phone cellular network provider, or an email delivery system. Manager system110can include a messaging system, in one embodiment. During a process of registration wherein a user of system1000registers as a registered user of system1000, a user sending registration data can send, with permission data defining the registration data, a permission that grants access by manager system110to data of the user within social media system140. On being registered, manager system110can examine data of social media system140, e.g., to determine whether first and second users are in communication with one another via a messaging system of social media system140. A user can enter registration data using a user interface displayed on a client computer device of client computer devices130-130Z. Entered registration data can include, e.g., name, address, social media account information, other contact information, biographical information, background information, preferences information, and/or permissions data, e.g., permissions data allowing manager system110to query data of a social media account of a user provided by social media system140, including messaging system data and any other data of the user.
When a user opts-in to register into system1000and grants system1000permission to access data of social media system140, system1000can inform the user as to what data is collected and why, that any collected personal data may be encrypted, that the user can opt out at any time, and that if the user opts out, any personal data of the user is deleted. Features of smartwatches120A-120Z are described further in reference to the perspective view ofFIG.2Ashowing smartwatch120A which can be representative of the construction of smartwatches120A-120Z. Smartwatch120A can include display screen121which can be configured as a rollable display screen. Smartwatch120A can include one or more chambers for storing display screen segments defining a display screen. Smartwatch120A can include chamber122A for storing and containing a first display screen segment and chamber122B for storing and containing a second display screen segment. Smartwatch120A can include wristband124which can be adapted to be wrapped around a wrist of a user for retention of smartwatch120A on the wrist of a user. Display screen121can be configured as a rollable display screen having a displayed configuration, in which the display screen is exposed to an area outside of a chamber, and a second configuration, in which the display screen is contained within a chamber. Referring to further aspects of smartwatch120A, smartwatch120A can include microbumps124B formed on an interior (wrist facing) surface of wristband124so that when worn by a user, wristband124can contact a user's wrist by the action of microbumps124B. The interior surface can include microbumps124B and a baseline surface124F which can face a wrist129. Baseline surface124F can be wrist facing and can be defined by the area of the interior surface of wristband124not defined by microbumps124B. Respective ones of microbumps124B can include an active state and an inactive state.
In an inactive state, a microbump124B does not extend inwardly from baseline (wrist facing) surface124F of wristband124. In an active state, a microbump124B can protrude inwardly from baseline surface124F to contact a wrist129. FIG.2Adepicts smartwatch120A in a standard operating mode in which display screen121features a viewing dimension extending from a first terminal end of wristband124defined by chamber122A to a second terminal end of wristband124defined by chamber122B. In the standard operating mode depicted inFIG.2A, display screen121can extend lengthwise and widthwise according to the dimension of an average-sized smartwatch to emulate the operation of a standard size smartwatch. FIG.2Billustrates smartwatch120A in another display screen configuration.FIG.2Billustrates a hidden display screen configuration. In the display screen configuration depicted inFIG.2B, display screen121can be retracted completely into one or more of chamber122A and chamber122B. In one embodiment, display screen121can be defined by a single display screen segment. The single display screen segment can be retractable to be fully contained within display screen chamber122A or chamber122B. According to one embodiment, display screen121can be defined by the combination of a first display screen segment and a second display screen segment. The first display screen segment can be retractable to be contained within chamber122A, and the second display screen segment can be retractable to be contained within chamber122B. An extended screen display configuration is depicted inFIG.2C. In the extended display screen configuration depicted inFIG.2C, display screen121can be rolled out more extensively to define an extended viewing dimension that is extended lengthwise beyond the length as depicted in the standard display screen configuration ofFIG.2A.
In the extended display screen configuration depicted inFIG.2C, display screen121can be rolled out at a distance so that an imaginary vertical plane121P extending lengthwise with an arm and perpendicularly through a display screen viewing surface will not intersect the wrist of a user. Such a configuration is depicted inFIG.2C. Display screen121, as depicted in the extended configuration ofFIG.2C, can be defined by a single display screen segment or multiple display screen segments. In the embodiment in which display screen121is defined by a single display screen segment, chamber122A can roll out the display screen segment into the configuration to define display screen121as shown inFIG.2C. In the embodiment in which display screen121as shown inFIG.2Cis defined by first and second display screen segments, chamber122A can roll out a first of the display screen segments, and chamber122B can roll out the second of the display screen segments to define display screen121in an extended configuration as depicted inFIG.2C. A limited screen display configuration is depicted inFIG.2D. In the display screen configuration depicted inFIG.2D, display screen121can be rolled out less extensively than the length as depicted in the standard display screen configuration ofFIG.2A. Chamber122A can roll out the display screen to the length as depicted inFIG.2D. In the limited display screen configuration as depicted inFIG.2D, a gap127is defined at a top elevation. Gap127inFIG.2Dcan be delimited on one end by display screen121and on another end by an endpoint of wristband124defined by chamber122B. FIG.2Eillustrates a front view of smartwatch120A worn on a wrist129of a user. As best seen inFIG.2E, wristband124can be configured to support chamber122A and chamber122B so that both chamber122A and chamber122B are fixed above a top elevation of wrist129.
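The four display screen configurations described above (hidden, limited, standard, and extended) can be sketched as a mapping from a display screen configuration setting value on the 0.0 to 2.0 scale used elsewhere in this description. The function name and the exact threshold boundaries are illustrative assumptions.

```python
# Sketch mapping a display screen configuration setting value (0.0-2.0)
# to a named configuration of display screen 121. Thresholds are
# illustrative assumptions consistent with the described scale.

def screen_configuration(setting):
    """Return the named display screen configuration for a setting value."""
    if setting <= 0.0:
        return "hidden"    # FIG. 2B: screen fully retracted into a chamber
    if setting < 1.0:
        return "limited"   # FIG. 2D: rolled out less than standard length
    if setting == 1.0:
        return "standard"  # FIG. 2A: end-to-end standard viewing dimension
    return "extended"      # FIG. 2C: rolled out beyond standard length
```

For example, a setting of 0.5 names the limited configuration, while 1.8 names the extended configuration.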
As shown inFIG.2E, a wrist129of a user can have a top elevation1291, and display screen121can have a bottom elevation1292, which is an elevation above elevation1291, to define gap128. Thus, as depicted inFIG.2E, it is seen that the configuration permits continuous airflow to the top surface of wrist129. Manager system110can increase the continuous flow of air towards the skin surface of a user's wrist unencumbered by display screen121by activation of the limited display screen configuration as depicted inFIG.2D, and further increase airflow to a user wrist129by activation of the hidden display screen configuration as depicted inFIG.2B. Aspects of the operation of microbumps formed on an interior (wrist facing) surface of wristband124are described in reference toFIGS.3A and3B. Referring toFIG.3A, wristband124of smartwatch120A can be configured so that at any given time that a smartwatch is worn, a subset of microbumps124B can be active and the particular subset of microbumps that are active can dynamically change over time. For example, referring toFIG.3A, at time T=T1, microbumps124B at locations B0, B3, B4, B6, and B7can be selectively active and the remaining microbumps shown can be inactive. At time T=T1+1, microbumps124B at locations B0, B1, B4, B5, and B6can be active and the remaining microbumps shown can be inactive. At time T=T1+2, microbumps124B at locations B1, B2, B6, and B7can be active and the remaining microbumps shown can be inactive. Referring to one aspect, smartwatch120A can be configured so that at any given time while smartwatch120A is worn, a user's wrist is contacted by wristband124only at locations of a subset of microbumps124B, with the subset dynamically changing over the course of time in which the smartwatch is worn. Referring toFIG.3B,FIG.3Bdepicts smartwatch120A contacting wrist129of a user selectively and only at the location of microbumps124B that are in an active state.
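The idea of a dynamically changing active subset of microbumps can be sketched as a simple rotation over time steps. The rotation rule below is an illustrative assumption for demonstration only; the description later selects microbumps using the scoring formula of Eq. 1 rather than a fixed rotation.

```python
# Sketch of rotating the active subset of microbumps 124B so that no
# wrist location is contacted continuously. The rotation rule is an
# illustrative assumption, not the patent's Eq. 1-based selection.

def active_subset(num_bumps, t, subset_size):
    """Return the set of active microbump indices at time step t.
    The subset shifts each period so contact points change over time."""
    return {(t + i * 2) % num_bumps for i in range(subset_size)}
```

With 8 microbumps and a subset size of 4, consecutive time steps activate disjoint location sets, so every wrist location periodically receives airflow.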
Dynamically active microbumps can be provided with use of dynamic tactile interface technologies. A dynamic tactile interface can feature a volume of transparent fluid contained within respective microfluidic channels associated to respective microbumps, and a displacement device disposed within wristband124and/or chamber122A can displace a portion of the volume of fluid into the channel to transition a deformable region between the inactive state and the active state. A volume of fluid can be controlled to flow through the fluid channel and a variable volume associated to and defining respective microbumps in order to transition the deformable region between the active and inactive states. The volume of fluid can be manipulated by the displacement device to selectively transition a deformable region of a surface of wristband124defining a microbump between the active state and the inactive state. For example, a displacement device disposed within wristband124and/or chamber122A can pump fluid into the fluid channel to expand the deformable region thereby transitioning the deformable region from the inactive state into the active state, and the displacement device can pump fluid out of the fluid channel to retract the deformable region thereby transitioning the deformable region from the expanded setting back into the inactive state. The volume of fluid can be substantially transparent, translucent, and/or opaque. According to one embodiment, the volume of fluid can be selected to have an index of refraction to support ultrasound imaging functionality. A flowchart illustrating a method for performance by manager system110interoperating with smartwatches120A-120Z, other UE devices130A-130Z, and social media system140is described in connection with the flowchart ofFIG.4. At block1201, smartwatches120A-120Z, including smartwatch120A, can be sending registration data for receipt by manager system110.
In response to the receipt of registration data, manager system110at block1101can send the registration data to data repository108for receipt and storage by data repository at block1081. Registration data sent at block1201can include user-defined registration data. The user-defined registration data can be defined by a user, e.g., a recent purchaser of a smartwatch. The user can define registration data using a user interface as depicted by user interface5002as shown inFIG.5. As shown inFIG.5, a user in contacts area5010of user interface5002can define contacts, e.g., name information, address information, social media account information, email addresses, and the like. In permissions area5020, a user can define permissions, e.g., permissions that permit manager system110to collect and use data of the user for performance of services in connection with the user's use of a smartwatch. Permissions can include, e.g., permissions to use sensor data of a smartwatch of a user and other UE devices of a user and permissions to use application usage data of a smartwatch and other UE devices of a user. Permissions data can also include permissions data specifying permissions to use social media data of a user. In preferences area5030, a user can define preferences of a user, e.g., likes and dislikes (positive preferences and negative preferences). In preferences area5030, a user can complete one or more surveys that specify user preferences. In response to the receipt of registration data and the storage of registration data into data repository108, manager system110can assign a user a UUID. The UUID can be associated to a product identifier for the user such as one or more product identifier identifying smartwatch120A and other UE devices of the user. In response to completion of block1101, manager system110can proceed to block1102.
At block1102, manager system110can send data call data to smartwatches120A-120Z and in response to the data call data, smartwatches120A-120Z at send block1202can send return sensor output data and application usage data to manager system110for receipt by manager system110. At block1103, manager system110can send data call data to other UE devices130A-130Z of users of system1000, and in response to the receipt of the data call data sent at block1103, other UE devices130A-130Z at send block1301can send return sensor output and application usage data to manager system110for receipt by manager system110. At block1104, manager system110can send data call data to social media system140. In response to receipt of the data call data, social media system140at block1401can send social media data for receipt by manager system110. At block1105, manager system110can send the received return data sent at block1301and block1401to data repository108for receipt and storage by data repository108at block1082. With further reference toFIG.3A, wristband124can have disposed therein sensor array124X as shown inFIG.3A. Sensor array124X as shown inFIG.3Acan include a plurality of different types of sensors. Sensors of a first type can be substantially evenly distributed throughout an interior of a housing defined by wristband124. Sensors of a second type can be distributed substantially evenly throughout the wristband. Sensors of a third type can be distributed substantially spatially evenly throughout an interior of wristband124. Sensors of a first type can include, e.g., ultrasound sensors for sensing ultrasound images. Sensors of a second type can include, e.g., humidity sensors for sensing humidity. Sensors of a third type can include, e.g., temperature sensors for sensing temperature of a user's skin in a wrist area supporting a watch. 
Sensors of a fourth type can include, e.g., camera image sensors for sensing anomalous conditions on a user's skin in a wrist area of a user attributable to a rash or other medical skin condition. Referring further toFIG.3A, wristband124can have disposed therein an output device array124Y comprising output devices of one or more different output device types. According to one embodiment, output device types can include, e.g., micro blowers for blowing air toward an interior defined by wristband124heating elements and/or cooling elements. Referring toFIG.2D, one or more of chamber122A or122B can have disposed therein sensor array122X. Sensor array122X can include one or more different types of sensors spatially distributed within the shown chamber. One sensor type that can be disposed within a chamber122A and/or122B can be a camera image sensor. A camera image sensor can output image data processible to return a classifier indicating a gaze of a user toward a particular location. A camera sensor output can indicate whether a user wearing smartwatch120A is gazing or not gazing at a display screen area of smartwatch120A where the area occupied by display screen121is shown inFIG.2A. Other sensors that can be disposed within chamber122A and/or chamber122B can include, e.g., ultrasound sensors, humidity sensors, temperature sensors, and accelerometers. Sensor output data can define sensor metrics herein. Sensor output data can include raw sensor output data or processed sensor output data processed to return structured data defined by one or more classifier tag. Raw sensor output data from a humidity sensor can be processed to return sensor output data having the classifier tag of dryness level. Raw ultrasound sensor output data can be processed e.g. to return sensor output data having the classifier tag of dryness level based on an impact of perspiration on ultrasound transmissivity. 
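The described processing of raw sensor output into structured data carrying a classifier tag can be sketched as follows for the humidity example. The function name, thresholds, and the structured output fields are illustrative assumptions only.

```python
# Sketch of processing raw humidity sensor output into structured sensor
# output data having a "dryness level" classifier tag. Thresholds and
# field names are illustrative assumptions.

def classify_dryness(raw_humidity_pct):
    """Return structured sensor output with a dryness-level classifier tag."""
    if raw_humidity_pct < 30:
        level = "dry"
    elif raw_humidity_pct < 60:
        level = "normal"
    else:
        level = "moist"
    return {"classifier_tag": "dryness_level",
            "value": level,
            "raw": raw_humidity_pct}
```

The same pattern applies to the other classifier tags named in the description (e.g., blood flow, heart rate, anomaly level), each with its own raw-data processing step.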
Raw ultrasound sensor output data can be processed, e.g., to return sensor output data having a classifier tag that indicates blood flow and heart rate. Raw ultrasound sensor output data provided by ultrasound image data can be processed, e.g., to return skin condition classifier tags such as a classifier that specifies an anomaly level. At block1106, manager system110can initiate a next iteration of training of predictive models set forth herein for return of action decisions. Manager system110can initiate a next iteration of training at block1106, and the training can be completed in the background while next processes are performed, such as the processes of blocks1107and1108. At block1107, manager system110can run screen management process112,FIG.1, for performance of screen optimization and can return a screen optimization action decision. Manager system110at block1107can query a trained predictive model such as is described in connection withFIG.6A. Predictive model6002, as shown inFIG.6A, can be trained with use of training data defined by historical data stored in users area2121of data repository108. Referring to predictive model6002, each respective training dataset for training predictive model6002can include (A) a set of input parameter values, and (B) an outcome parameter value. A set of input parameter values can include (i) a user ID (which can define an identifier of a smartwatch); (ii) a control parameter value set at time N; (iii) a sensor metrics set at time N; and (iv) an application usage parameter value set at time N. An outcome parameter value defining a training dataset for training predictive model6002can include (i) a display screen usage configuration parameter value at time N+1. For each iteratively applied training dataset applied for training of predictive model6002, the value of N can be incremented.
Thus, each iteratively applied training dataset trains an outcome parameter value defined by a display screen viewing parameter value at a subsequent time period on input parameter values associated to a previous time period. Trained as described, predictive model6002learns of factors that are predictive and indicative of a user's subsequent viewing usage of a display screen. By the described training process, predictive model6002can be trained to learn a relationship between a current set of input parameter values and a subsequent display screen viewing parameter value. Predictive model6002can be trained to predict a user's viewing usage of a display screen at a next period based on current parameter values associated, e.g., to a user ID, control parameter values, sensor metrics, and application usage parameter values. Control parameter values of (ii) can include, e.g., control parameter values specifying current control of smartwatch120A associated to a current user during a specific time period. Such controls can include controls to control a display screen configuration (control screen configuration setting parameter values), an activation state of microbumps124B, operation of blowers within wristband124, and the like. Parameter values defining (iii) a sensor metrics set can include, e.g., sensor output values from sensor array124X and sensor output values from other UE devices130A-130Z associated to a certain user. Sensor output parameter values can also include sensor output values from sensors disposed in other areas of smartwatch120A such as within chambers122A and122B. Sensor output values can include sensor output values that specify a user's gaze on a display screen of a smartwatch. Sensor output values can include raw unstructured output values, and/or structured data values, e.g., classifiers resulting from processing of unstructured data.
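The iterative pairing described above (input parameter values at time N paired with the display screen viewing parameter value at time N+1) can be sketched as a training-dataset builder. The field names and the timeline structure are illustrative assumptions.

```python
# Sketch of assembling iterative training datasets for predictive model
# 6002: inputs observed at time N paired with the display screen viewing
# parameter value observed at time N+1. Field names are illustrative
# assumptions.

def build_training_pairs(timeline):
    """timeline: chronological list of per-period dicts with 'inputs'
    (sensor metrics, control and usage parameter values) and 'viewing'
    (display screen viewing parameter value) keys.
    Returns (inputs at N, viewing at N+1) training pairs."""
    return [(timeline[n]["inputs"], timeline[n + 1]["viewing"])
            for n in range(len(timeline) - 1)]
```

Each new observed period extends the timeline and yields one more training pair, matching the description of models being iteratively retrained as new data is stored.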
In one embodiment, sensor output values can include medical skin condition classifiers resulting from processing of camera image sensor data. In one embodiment, sensor output values can include medical skin condition classifiers resulting from processing of camera image sensor data that classifies a medical skin condition in terms of anomaly level. Sensor output data can define sensor metrics herein. Sensor output data can include raw sensor output data or processed sensor output data processed to return structured data defined by one or more classifier tag. Raw sensor output data from a humidity sensor can be processed to return sensor output data having the classifier tag of dryness level. Raw ultrasound sensor output data can be processed, e.g., to return sensor output data having the classifier tag of dryness level based on an impact of perspiration on ultrasound transmissivity. Raw ultrasound sensor output data can be processed, e.g., to return sensor output data having a classifier tag that indicates blood flow and heart rate. Raw ultrasound sensor output data provided by ultrasound image data can be processed, e.g., to return skin condition classifier tags such as a classifier that specifies an anomaly level. The training data provided by (iv) application usage parameter values at time N can include application usage parameter values obtained from running applications of a smartwatch of a user and other UE devices of a user. Usage parameter values can include data indicating what applications are running during a given time period, and status data associated to the various applications. Embodiments herein recognize that what applications are running can indicate information about a user's current behavior. Applications that can be running can include, e.g., fitness applications, shopping applications, gaming applications, and the like.
One usage parameter value can be a real time clock value indicating a time of day, which can be output from any number of applications that send usage data. Embodiments herein recognize that a user's use of a smartwatch can vary depending on time of day and accordingly predictive model6002can be trained in one aspect to predict a user's use of a display screen in dependence on a time of day. The training data provided by usage parameter values at time N (iv) can be replaced or supplemented by topics and/or sentiment parameter values associated to a user at time N for iteratively increasing values of N. Manager system110, for return of topic and/or sentiment parameter values, can subject one or more data source to natural language processing by running of NLP process114(FIG.1). The one or more data source can include registration data of a user stored in data repository108and/or user posts data of social media system140. Manager system110running NLP process114can subject survey data of registration data to natural language processing to extract topics indicative of physical activity, e.g., “running,” “exercise,” and the like, and/or sentiment parameter values, e.g., fear=0.8/1.0. Manager system110running NLP process114can subject user post data of social media system140to natural language processing to extract topics indicative of physical activity, e.g., “running,” “exercise,” and the like, and/or sentiment parameter values, e.g., fear=0.8/1.0. Manager system110running NLP process114can convert voice inputs captured by a user's smartwatch or other UE device to text and subject the converted text to natural language processing to extract topics indicative of physical activity, e.g., “running,” “exercise,” and the like, and/or sentiment parameter values, e.g., fear=0.8/1.0. Embodiments herein recognize that an extracted topic parameter value indicative of physical activity can be predictive of a user's viewing usage of the display screen.
For example, some users may regularly view biometrics data displayed on a display screen when engaging in physical activity. Embodiments herein recognize that an extracted sentiment parameter value can be predictive of a user's viewing usage of the display screen. For example, some users may regularly view a user interface displayed on a display screen when fearful. Manager system110can use a decision data structure as set forth in Table A to convert a topic or extracted activity classification (extracted from application usage data) to a physical activity level.

TABLE A

Row  Topic     Physical activity level
1    Running   0.9
2    Reading   0.1
3    Gaming    0.3
4    Driving   0.3
5    Exercise  0.8
6    Eating    0.4

The outcome parameter value (B) (i) (display usage parameter value at time N+1) applied as training data can be provided by adjusting a display screen configuration setting value. According to one embodiment, a display screen as described in connection withFIGS.2A-2Dcan have display screen configuration setting values provided on a scale of 0.0 to 2.0, where the configuration value of 0.0 defines a hidden display screen configuration setting (FIG.2B), where a configuration setting value of 1.0 defines a standard display screen configuration (FIG.2A), where a display screen configuration setting value of 2.0 defines an extended display screen configuration (FIG.2C), and where a display screen configuration value of between 0.0 and 1.0 defines a limited configuration setting. For providing a display screen viewing parameter value for use as training data as described in connection with predictive model6002, manager system110can scale up or scale down such a display screen configuration setting parameter value in dependence on sensor output data.
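The Table A decision data structure can be sketched as a simple lookup from topic to physical activity level. The values come directly from Table A; the function name and the default for unlisted topics are illustrative assumptions.

```python
# Sketch of the Table A decision data structure mapping an extracted
# topic or activity classification to a physical activity level.
# Values are taken from Table A; the default for unlisted topics is an
# illustrative assumption.

PHYSICAL_ACTIVITY_LEVEL = {
    "running": 0.9,
    "reading": 0.1,
    "gaming": 0.3,
    "driving": 0.3,
    "exercise": 0.8,
    "eating": 0.4,
}

def physical_activity_level(topic, default=0.5):
    """Convert an extracted topic to a physical activity level."""
    return PHYSICAL_ACTIVITY_LEVEL.get(topic.lower(), default)
```

An extracted topic of "Running" thus yields a physical activity level of 0.9, which can feed the training data described above.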
For example, where the display screen configuration setting value is 0.0 to define a hidden display screen configuration and sensor data indicates that the user was gazing toward the user's smartwatch during the relevant time period, manager system110can scale the configuration setting value upward, e.g., to 0.5 based on the sensor data indicating that the user intended to use the smartwatch in the relevant time period. In the case that an actual display screen configuration setting parameter value of a display screen configuration is 1.0, defining a standard use configuration, and the user never gazes at the user's smartwatch during the relevant time period based on collected sensor data, manager system110can scale the display screen configuration setting value downward, e.g., to 0.5 for providing of a display screen viewing parameter value for use as training data in training predictive model6002. In the case that an actual display screen configuration setting parameter value of a display screen configuration is 1.0, defining a standard use configuration, and during the relevant time period, the user constantly gazes at the user's smartwatch based on collected sensor data, manager system110can scale the display screen configuration setting value upward, e.g., to 1.5 for providing of a display screen viewing parameter value for use as training data in training predictive model6002. In other embodiments, a display screen configuration setting can be used as a display screen viewing parameter value. Predictive model6002, once trained, can respond to query data. A query dataset for querying predictive model6002can include, e.g., (i) a user ID, (ii) a current control parameter value set, (iii) a current sensor metric set, and (iv) a current usage parameter value set. Predictive model6002in response to the described query data can output prediction data.
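The scaling examples above (0.0 scaled up to 0.5 when the user gazed at a hidden screen; 1.0 scaled down to 0.5 when the user never gazed; 1.0 scaled up to 1.5 under constant gaze) can be sketched as a single rule. The gaze-fraction thresholds and the fixed 0.5 step are illustrative assumptions consistent with those examples.

```python
# Sketch of scaling a display screen configuration setting value into a
# display screen viewing parameter value for training data, based on
# observed gaze. Thresholds and step size are illustrative assumptions
# matching the description's 0.0->0.5, 1.0->0.5, and 1.0->1.5 examples.

def viewing_parameter(setting, gaze_fraction):
    """setting: configuration setting value on the 0.0-2.0 scale.
    gaze_fraction: fraction of the period the user gazed at the screen."""
    if gaze_fraction > 0.8 and setting < 2.0:
        return round(setting + 0.5, 2)   # frequent gaze: scale upward
    if gaze_fraction < 0.1 and setting > 0.0:
        return round(setting - 0.5, 2)   # no gaze: scale downward
    return setting
```

Moderate gaze leaves the setting unchanged, reflecting the embodiment in which the configuration setting itself serves as the viewing parameter value.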
Prediction data can specify a predicted next display screen viewing parameter value for a current smartwatch, e.g., smartwatch120A associated to a current user. The predicted next display screen viewing parameter value can specify a predicted viewing usage of the user. Referring again to the flowchart ofFIG.4, manager system110, at screen optimization action decision block1107, can query predictive model6002to identify a predicted next display screen viewing parameter value for display screen121of smartwatch120A during a next time period. Manager system110can then generate command data to implement a display screen configuration in accordance with the predicted next display screen viewing parameter value. Embodiments herein recognize that based on the training data, querying of predictive model6002can return a prediction that specifies a predicted viewing parameter value on a scale of 0.0 (hidden display screen configuration) to 2.0. For example, manager system110can return a predicted viewing parameter value of, e.g., 0.3, 1.0, 1.8, 2.0, and the like. Embodiments herein recognize that different users can tend to look at their smartwatches under different scenarios. For example, a first user might tend to look at their smartwatch frequently when engaging in a web conference detectable as part of application usage data. Other users may tend to look at their smartwatch frequently when running as detectable with use of an accelerometer sensor output. Still other users may tend to look at their smartwatch frequently after being at rest for a substantial amount of time detectable with use of an accelerometer output. Other users may tend to look at their smartwatch frequently when playing a video game (activity classification=gaming) as can be detectable with use of application usage data as set forth herein.
Predictive model6002can be trained with training data so that predictive model6002for a certain user can predict a usage trend of a certain user with respect to that user's smartwatch, i.e., can predict when that certain user will be viewing data from a display screen of smartwatch120A. Predictive model6002can be trained with use of all users' data, e.g., every user of system1000. A user ID for use in training predictive model6002and for querying predictive model6002can be associated to a certain smartwatch identifier. Thus, a user ID can constitute a surrogate identifier for a particular smartwatch. Predictive model6002can be configured so that when a predictive model is queried with use of query data that specifies a user ID, predictive model6002outputs a prediction for a particular user using a particular smartwatch. Predictive model6002can be configured so that when query data is absent of a user ID, predictive model6002can predict a display screen viewing for a next time period of a certain smartwatch based on crowdsource training data of all users. Thus, querying predictive model6002with use of a user ID can render a prediction based on a certain user's use and querying predictive model6002without a user ID can return a prediction based on crowdsource data. System1000can be configured so that early in a deployment period, predictive model6002can be queried in a manner to produce a crowdsource data prediction, e.g., with query data absent of a user ID, and later in a deployment period of a smartwatch, when more data on a particular user and a particular smartwatch is accumulated, predictive model6002can be queried to return predictions based on the current user's utilization of a certain smartwatch. Ensemble model techniques can be used to return predictions based on weighted use of crowdsource-based predictions and individual user prediction data. Manager system110on completion of block1107can proceed to block1108.
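The described ensemble technique, which shifts from crowdsource predictions early in deployment toward individual-user predictions as data accumulates, can be sketched as a linear blend. The ramp parameter and weighting rule are illustrative assumptions; the description does not specify the weighting scheme.

```python
# Sketch of an ensemble prediction blending a crowdsource-based
# prediction with an individual-user prediction, weighting the
# individual model more heavily as user data accumulates. The linear
# ramp is an illustrative assumption.

def ensemble_prediction(crowd_pred, user_pred, user_periods, ramp=100):
    """Return a blended viewing parameter prediction. The individual
    model's weight grows with the number of observed user periods."""
    w_user = min(user_periods / ramp, 1.0)
    return (1.0 - w_user) * crowd_pred + w_user * user_pred
```

With no accumulated user periods the result equals the crowdsource prediction; after the ramp it equals the individual-user prediction.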
At block1108, manager system110can run health management process113(FIG.1). At block1108, manager system110can run health management process113to determine a health optimization action decision. Such a health optimization action decision can optimize health for a user who is wearing smartwatch120A. For return of an action decision as indicated at block1108, manager system110can use Eq. 1 as set forth herein below. Eq. 1 can be used to score a predicted performance for each respective microbump124B of wristband124in an active state. Manager system110can assign higher than baseline scoring values for microbumps predicted to perform positively when in an active state and can assign lower than baseline scoring values to microbumps predicted to perform negatively when in an active state. Scoring values can be assigned on a scale, e.g., of 0.0 (maximally negative performance) to 1.0 (maximally positive performance). Manager system110can score each microbump124B of wristband124using the formula of Eq. 1 below and for each iteration of usage of Eq. 1 can return an ordered list of scoring values ranked from highest scoring to lowest scoring. Manager system110can use the ordered list for determination of which microbumps to activate for any given time period of operation of smartwatch120A. For example, referring again toFIG.3A, manager system110can control microbumps to have differentiated activation patterns at time T=T1, time T=T1+1, and time T=T1+2 as depicted inFIG.3A. Eq. 1 is set forth as follows: S=F1W1+F2W2+F3W3+F4W4+F5W5+F6W6+F7W7+F8W8+F9W9+F10W10+F11W11+F12W12 (Eq. 1) where S is a predicted performance scoring value assigned by manager system110to each microbump of wristband124in an active state, F1-F12are factors contributing to the scoring value, and W1-W12are weights associated to the various factors. Eq. 1 can be subject to one or more constraints.
For example, one constraint can be that a minimal number of microbumps must be activated at every specific time period of operation of smartwatch120A. Embodiments herein recognize that in some scenarios, it can be advantageous that at least a minimal number of microbumps contact a user's wrist129at any given point in time. Contacting the user's wrist can be advantageous, e.g., for return of accurate sensor data and/or for secure holding of a smartwatch on a user's wrist. Another constraint that can be implemented is that the number of microbumps selected for activation must not exceed a high threshold at every specific time period of operation of smartwatch120A. Such a constraint can assure sufficient airflow to a wrist. Another constraint that can be implemented is that the microbumps selected for activation can feature a threshold amount of dispersion through an interior surface of a wristband124. Another constraint that can be implemented is that a minimal number of microbumps can be aligned with a detected vein of a user for accurate detection of blood flow via ultrasound sensing, for example. In another aspect, system1000can be configured so that the constraints themselves can vary over time. For example, system1000can be configured so that if natural language processing extracts current topics associated to a user that reference significant physical activity, where more perspiration and higher heat are expected, the maximum number of microbumps for activation can be decreased to increase airflow to a user's skin. Referring further to Eq. 1, factor F1can be a covered/uncovered factor referring to whether display screen121will be controlled to cover a user's wrist during a next time period in the operation of the smartwatch as determined from block1107.
Manager system110can assign higher than baseline scoring values under factor F1where a user's wrist area will be uncovered by display screen121, e.g., where display screen121is in a hidden display screen configuration and can assign lower than baseline scoring values under factor F1when the wrist area will be covered by display screen121in a next time period. Embodiments herein recognize that if a wrist area will be uncovered by display screen121, there can be additional air flow to a user's wrist, reducing risk associated to microbump contact. Referring to factor F2, factor F2of Eq. 1 can be a staleness factor. Staleness can refer to a length of time in which a current microbump being scored using Eq. 1 has been active and continually contacting a user's wrist. Manager system110can assign lower than baseline values under factor F2where a microbump being scored has been active in contacting a user's wrist for a substantial period of time and can assign higher than baseline scoring values under factor F2where a microbump is currently not active and not contacting a user's wrist. Factor F3can be a dryness factor. Manager system110can assign higher than baseline values to a microbump where a skin surface aligned to an associated microbump is currently dry and can assign lower than baseline values to a microbump where the wrist skin of a user aligned to the microbump is currently not dry. Manager system110under factor F4can assign higher than baseline scoring values under factor F4where a predicted next time period dryness of a user wrist in an area aligned to a certain microbump being scored is predicted to be dry and can assign lower than baseline scoring values under factor F4where a predicted dryness of a user's wrist in an area aligned to a current microbump being scored is predicted to be not dry (e.g., threshold exceeding perspiration level) during a next time period. 
For return of predictions as to next period dryness level (perspiration level), manager system110can use predictive model6004as set forth herein. Manager system110under factor F5can assign higher than baseline scoring values under factor F5where manager system110senses cooler temperatures in an area of a wrist aligned to a current microbump being scored and can assign lower than baseline scoring values under factor F5where manager system110detects higher temperatures on a user's wrist in an area aligned to a current microbump being scored. Manager system110under factor F6can assign higher than baseline scoring values under factor F6where predicted temperature of an area of a user's wrist aligned to a current microbump being scored is predicted to be lower than a low threshold and can assign lower than baseline scoring values under factor F6where a next time period temperature of an area of a wrist of a user aligned to a microbump being scored is predicted to be higher than a high threshold. Manager system110can use a predictive model6004as described in connection withFIG.6Bin order to make predictions as to future temperatures in areas of a user wrist aligned to certain microbumps of wristband124. Manager system110under factor F7can assign higher than baseline scoring values under factor F7where an anomaly level of a user's wrist in an area aligned to a microbump being scored is lower than a low threshold (indicating, e.g., a non-rash condition) and can assign lower than baseline scoring values under factor F7where a current anomaly level of a user's wrist in an area aligned to a current microbump being scored exceeds a high threshold (indicating, e.g., a rash condition). 
Manager system110under factor F8can assign higher scoring values under factor F8where a predicted anomaly level during a next time period in the operation of smartwatch120A is predicted to be lower than a low threshold (indicating, e.g., a predicted non-rash condition) and can assign lower than baseline scoring values under factor F8where a predicted anomaly level during a next time period is higher than a high threshold (indicating, e.g., a rash condition). Manager system110under factor F9can assign higher than baseline scoring values under factor F9where a current heart rate of a user is lower than a low threshold (indicating, e.g., a relaxed state of a user) and can assign lower than baseline scoring values under factor F9where a current heart rate of a user exceeds a high threshold (indicating, e.g., a stressed state of a user). Heart rate sensor output data can be extracted from raw sensor output values of ultrasound sensors of sensor array124X. Manager system110under factor F10can assign higher scoring values under factor F10where a predicted heart rate during a next time period in the operation of smartwatch120A is predicted to be lower than a low threshold (indicating, e.g., a relaxed state of a user) and can assign lower than baseline scoring values under factor F10where a predicted heart rate of a user during a next time period exceeds a high threshold (indicative of a stressed state). Factor F11of Eq. 1 can be a vein alignment factor. Manager system110can assign higher than baseline scoring values under factor F11where a microbump being scored is aligned to a vein of a user's wrist and can assign lower than baseline scoring values under factor F11where a microbump being scored is not aligned to a user's vein. Embodiments herein recognize that maintaining a minimal number of vein-aligned microbumps can improve sensing of blood flow parameter values. Factor F12of Eq. 1 can be a natural language processing factor.
Manager system110can assign higher than baseline scoring values under factor F12where natural language processing indicates a below baseline level of physical activity and can assign a lower than baseline scoring value under factor F12where natural language processing indicates an above baseline level of physical activity. Manager system110for assigning scoring values under factor F12can subject data from one or more data sources to natural language processing by running of NLP process114(FIG.1). The one or more data sources can include registration data of a user stored in a data repository and/or user posts data of social media system140. Manager system110running NLP process114can subject survey data of registration data to natural language processing to extract topics indicative of physical activity, e.g., “running,” “exercise,” and the like. Manager system110running NLP process114can subject user post data of social media system140to natural language processing to extract topics indicative of physical activity, e.g., “running,” “exercise,” and the like. Embodiments herein recognize that if a user will be engaging in physical activity, there can be a greater risk associated to a microbump contacting a user's wrist. As noted, sensors of sensor array124X disposed in wristband124can include ultrasound sensors. Ultrasound sensors can include imaging ultrasound sensors that can sense depth and other attributes of user tissue, including depth and other attributes of veins of a user. Further, ultrasound imaging processing of a vein area can reveal biometric data such as blood flow parameter values. System1000can be configured so that in an initial iteration of health optimization action decision block1108, manager system110can activate ultrasound sensors of sensor array124X for an extended period to obtain sufficient data such that locations of veins can be ascertained with use of ultrasound imaging techniques.
Thus, data can be collected associated to each microbump, indicating vein alignment status of each respective microbump, e.g., aligned with a vein or not aligned with a vein. The collected vein associated attributes can be used in performance of assigning scoring values under factor F11for each respective microbump of wristband124. Wristband124, according to one embodiment, can include between about 2 and 1,000,000 microbumps, according to one embodiment between about 2 and 100,000 microbumps, according to one embodiment between about 2 and 10,000 microbumps, according to one embodiment between about 2 and 1,000 microbumps, and according to one embodiment between about 2 and 100 microbumps. The microbumps can be substantially evenly distributed throughout an interior wrist facing surface of wristband124. For return of predicted dryness, temperature, and skin condition anomaly levels under factors F4, F6, and F8, manager system110can query predictive model6004as set forth inFIG.6B. Predictive model6004can be trained with use of iteratively applied training datasets. An iteratively applied training dataset can include (A) a set of input parameter values in combination with (B) a set of output parameter values. A set of input parameter values can include (i) a user ID (which can define an identifier of a smartwatch); (ii) a control parameter value set at time N; (iii) a sensor metrics set at time N; and (iv) an application usage parameter value set at time N. An outcome parameter value defining a training dataset for training predictive model6004can include (i) a sensor metrics set at time N+1. For each iteratively applied training dataset applied for training of predictive model6004, the value of N can be incremented. Thus, each iteratively applied training dataset trains an outcome parameter value defined by a sensor metrics set on input parameter values associated to a previous time period.
By the described training process, predictive model6004can be trained to learn a relationship between a current set of input parameter values and a subsequent sensor metrics set at time N+1. Predictive model6004can be trained to predict a sensor metrics set associated to a user at a next period based on current parameter values associated, e.g., to a user ID, control parameter values, sensor metrics, application usage parameter values, and/or NLP output parameter values. Control parameter values of (ii) can include, e.g., control parameter values specifying current control of smartwatch120A associated to a current user during a specific time period. Such controls can include controls to control a display screen configuration (control screen configuration setting parameter values), an activation state of microbumps124B, operation of micro blowers within wristband124, and the like. Parameter values defining (iii) a sensor metrics set can include, e.g., sensor output values from sensor array124X and sensor output values from other UE devices130A-130Z associated to a certain user. Sensor output parameter values can also include sensor output values from sensors disposed in other areas of smartwatch120A such as within chambers122A and122B. Sensor output values can include sensor output values that specify a user's gaze on a display screen of a smartwatch. Sensor output values can include raw unstructured output values, and/or structured data values, e.g., classifiers resulting from processing of unstructured data. In one embodiment, sensor output values can include medical skin condition classifiers resulting from processing of camera image sensor data. In one embodiment, sensor output values can include medical skin condition classifiers resulting from processing of camera image sensor data that classify a medical skin condition in terms of anomaly level.
The training data provided by (iv) usage parameter values at time N can include application usage parameter values obtained from running applications of a smartwatch of a user and other UE devices of a user. Usage parameter values can include data indicating what applications are running during a given time period, and status data associated to the various applications. Embodiments herein recognize that what applications are running can indicate information about a user's current behavior. Applications that can be running can include, e.g., fitness applications, shopping applications, gaming applications, and the like. One usage parameter value can be a real time clock value indicating a time of day, which can be output from any number of applications that send usage data. Embodiments herein recognize that a user's use of a smartwatch can vary depending on time of day and accordingly predictive model6002can be trained in one aspect to predict a user's use of a display screen in dependence on a time of day. The training data provided by (iv) usage parameter values at time N can be replaced or supplemented by topics and/or sentiment parameter values associated to a user at time N, for iteratively increasing values of N. Manager system110, for return of topic and/or sentiment parameter values, can subject one or more data sources to natural language processing by running of NLP process114(FIG.1). The one or more data sources can include registration data of a user stored in a data repository and/or user post data of social media system140. Manager system110running NLP process114can subject survey data of registration data to natural language processing to extract topics indicative of physical activity, e.g., “running,” “exercise,” and the like, and/or sentiment parameter values, e.g., fear=0.8/1.0. Manager system110running NLP process114can subject user post data of social media system140to natural language processing to extract topics indicative of physical activity, e.g.
“running,” “exercise,” and the like, and/or sentiment parameter values, e.g., fear=0.8/1.0. Manager system110running NLP process114can convert voice inputs captured by a user's smartwatch or other UE device to text and subject the converted text to natural language processing to extract topics indicative of physical activity, e.g. “running,” “exercise,” and the like, and/or sentiment parameter values, e.g., fear=0.8/1.0. Embodiments herein recognize that an extracted topic parameter value indicative of physical activity can be predictive of sensor output values. For example, extracted topics indicative of threshold exceeding physical activity can be predictive of increased perspiration represented in sensor metrics. Embodiments herein recognize that an extracted sentiment parameter value can be predictive of next time period exhibited sensor metrics. For example, a threshold exceeding level of fear can be predictive of increased heart rate and increased perspiration represented in sensor metrics. Training of predictive model6004can include training the outcome parameter value (B) (i) (sensor metrics at time N+1) on prior time period input parameter values as set forth herein. Trained as described, predictive model6004can learn a relationship between a set of inputs and their impact on future sensor output values as defined by the sensor metric set at time N+1. The described iteratively applied training dataset can be applied for increasing values of N so that predictive model6004is trained to learn a relationship between a set of current input values in relation to a subsequent sensor metric set. Embodiments herein recognize that a future dryness (inverse perspiration level), future temperature, and future skin condition anomaly can be dependent on a variety of factors. The described predictive model6004can be trained with a plurality of inputs that express the plurality of factors. 
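The N-to-N+1 pairing of the iteratively applied training datasets described above might be assembled as in the following sketch; the record field names and values are assumptions for illustration, and topic or sentiment parameter values could supplement the usage entry as noted.

```python
# Illustrative construction of the iteratively applied training datasets for
# predictive model 6004: input parameter values at time N are paired with the
# sensor metrics set at time N+1 as the outcome, with N incremented across
# the timeline.

def build_training_datasets(timeline):
    """timeline: chronologically ordered per-period records for one user."""
    datasets = []
    for n in range(len(timeline) - 1):
        now, nxt = timeline[n], timeline[n + 1]
        datasets.append({
            "inputs": {
                "user_id": now["user_id"],    # (i) surrogate smartwatch ID
                "controls": now["controls"],  # (ii) control set at time N
                "metrics": now["metrics"],    # (iii) sensor metrics at N
                "usage": now["usage"],        # (iv) app usage / topics at N
            },
            "outcome": nxt["metrics"],        # sensor metrics at time N+1
        })
    return datasets

timeline = [
    {"user_id": "u1", "controls": {"screen": "hidden"},
     "metrics": {"temp_c": 33.1, "dryness": 0.9}, "usage": ["fitness"]},
    {"user_id": "u1", "controls": {"screen": "standard"},
     "metrics": {"temp_c": 34.0, "dryness": 0.7}, "usage": ["gaming"]},
    {"user_id": "u1", "controls": {"screen": "standard"},
     "metrics": {"temp_c": 34.6, "dryness": 0.6}, "usage": ["gaming"]},
]
datasets = build_training_datasets(timeline)
```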
A sensor metrics set can include parameter values such as blood flow sensing values sensed with use of an ultrasound sensor of sensor array124X. Embodiments herein recognize that a perspiration level of a user's wrist area can depend on a variety of factors such as heart rate of a user, whether the user is afraid, or whether the user is exercising. The parameter values described in connection with the training data for training predictive model6004can express such factors. Predictive model6004, once trained, can predict an exhibited dryness (perspiration) level, temperature, and skin condition anomaly level at a particular location of a user's wrist as aligned with and associated to a particular microbump of a wristband124. Predictive model6004, once trained, can be subject to a query with use of query data. Query data for use in querying predictive model6004can include a dataset which comprises (i) a user ID, (ii) a current control parameter value set, (iii) a current sensor metrics set, and (iv) a current application usage parameter value set (and/or a current topic parameter value set). The returned output of predictive model6004, when subjected to the described query data, can include a predicted next time period sensor metrics set for a certain area of a wrist aligned to and associated with a particular microbump of wristband124currently being subject to scoring using Eq. 1. Thus, with use of predictive model6004, manager system110can determine whether a particular area of a wrist aligned to a particular microbump is predicted to be dry in a next time period. It can also predict other metrics, such as predicted temperature and predicted redness level (indicative of a rash). Referring to predictive model6004and factors F3, F5, F7, and F9, manager system110can predict an adrenaline surge exhibited by a user.
During an adrenaline surge, the following can occur: (a) eyes dilate; (b) heart beats faster; (c) sweat increases; (d) bronchioles dilate (so more oxygen is obtained); (e) blood vessels dilate (enlarge) in our muscles; (f) blood vessels constrict in our digestive tract to slow digestion; (g) kidneys make more renin (to increase blood pressure); and (h) glucose production increases, for energy. Depending on the configuration of system1000, system1000can detect some or all of the noted symptoms of an adrenaline rush, and system1000using predictive model6004can predict some or all of the noted symptoms. With historical learning, system1000can predict the adrenal hormone secretion timeline (e.g., based on parameter values indicating threshold exceeding fear or any change in physiological parameters) and, based on such a prediction, manager system110can dynamically control an activation pattern of microbumps124B. At action decision block1108, manager system110can return an action decision to generate command data. With use of Eq. 1 and predictive model6004, manager system110can generate an ordered list of microbumps. Based on the ordered list, manager system110can generate command data for commanding activation of top ranked microbumps according to the ordered list. As noted, the ordered list can be subject to various constraints, including constraints involving minimal number, maximum number, spatial distribution of microbumps, and the like. After generation of such command data, manager system110can proceed to block1109. At block1109, manager system110can send command data to smartwatch120A. In response to the received command data, smartwatch120A at block1203can implement the command data, e.g., to activate select microbumps as identified at health optimization action decision block1108.
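A consolidated sketch of the Eq. 1 scoring and the constrained ordered-list selection described above follows; the factor values, uniform weights, and constraint parameters are illustrative assumptions.

```python
# Each microbump receives a score S as a weighted sum of twelve factor
# values (Eq. 1), the microbumps are ranked into an ordered list, and
# activation is limited by minimum-count, maximum-count, and vein-alignment
# constraints.

def eq1_score(factors, weights):
    """Eq. 1: S = F1*W1 + F2*W2 + ... + F12*W12 for one microbump."""
    return sum(f * w for f, w in zip(factors, weights))

def select_active(factor_sets, weights, vein_aligned,
                  min_active=2, max_active=4, min_vein=1):
    # Ordered list ranked from highest scoring to lowest scoring.
    ranked = sorted(factor_sets,
                    key=lambda mb: eq1_score(factor_sets[mb], weights),
                    reverse=True)
    selected = ranked[:max_active]
    # Constraint: keep at least min_vein vein-aligned microbumps active
    # so blood flow can still be sensed via ultrasound.
    while len([mb for mb in selected if mb in vein_aligned]) < min_vein:
        extra = next(mb for mb in ranked
                     if mb in vein_aligned and mb not in selected)
        selected[-1] = extra  # displace the lowest-ranked selection
    # Constraint: never fewer than min_active microbumps in contact.
    for mb in ranked:
        if len(selected) >= min_active:
            break
        if mb not in selected:
            selected.append(mb)
    return selected

weights = [1.0] * 12
factor_sets = {
    "mb1": [0.9] * 12,
    "mb2": [0.7] * 12,
    "mb3": [0.5] * 12,
    "mb4": [0.2] * 12,   # vein aligned but low scoring
}
active = select_active(factor_sets, weights, vein_aligned={"mb4"}, max_active=3)
```

Note how the vein-alignment constraint displaces the lowest-ranked selection, so a low-scoring but vein-aligned microbump can still be activated.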
Predictive model6004can be configured so that when query data is absent of a user ID, predictive model6004can predict a sensor metrics set for a next time period of a certain smartwatch based on crowdsource training data of all users. Thus, querying predictive model6004with use of a user ID can return a prediction based on a certain user's use, and querying predictive model6004without a user ID can return a prediction based on crowdsource data. System1000can be configured so that early in a deployment period, predictive model6004can be queried in a manner to produce a crowdsource data prediction, e.g., with query data absent of a user ID, and so that later in a deployment period of a smartwatch, when more data on a particular user of a particular smartwatch has accumulated, predictive model6004can be queried to return predictions based on the current user's utilization of a certain smartwatch. Ensemble model techniques can be used for return of predictions based on weighted use of crowdsource based data predictions and individual user prediction data. On completion of send block1109, manager system110can proceed to return block1110. At return block1110, manager system110can return to block1102. At block1102, manager system110can send a next iteration of data call data to smartwatches120A-120Z for return of a next iteration of sensor output data and application usage data from at least one smartwatch, e.g., smartwatch120A. It will be seen, with reference to the flowchart ofFIG.4, that manager system110can iteratively perform the loop of blocks1102-1110during a deployment period of smartwatch120A. With each iteration of the loop of blocks1102-1110, manager system110can retrain predictive models, such as predictive model6002and predictive model6004, with training data defined by newly received sensor output data and application usage data. At each iteration of block1107, manager system110can generate command data for providing a certain display screen configuration, e.g.
a hidden display screen configuration, limited display screen configuration, a standard display screen configuration, or an extended display screen configuration. At each iteration of health optimization action decision block1108, manager system110can generate a next iteration of command data for controlling activation states of respective microbumps124B. Manager system110at send block1109can iteratively apply command data for providing a certain display screen configuration and a certain microbump pattern. It will be seen with reference to the loop of blocks1102-1110andFIG.3A, that manager system110can iteratively change a pattern of active microbumps over time so that a subset of microbumps that are currently active dynamically change over time. The dynamic changing of microbumps can be provided to avoid the user being contacted by wristband124at a certain focus location for extended periods. Rather, the contact points at which wristband124contacts a user's wrist can dynamically change to increase the health of a user. In one embodiment, manager system110can monitor the aggregate score of each microbump of wristband124under Eq. 1. Given that the score, S, for each microbump specifies predicted performance of microbumps in an active state, which is an indication of a capacity of skin to tolerate contact, the aggregate score can provide an indication of the overall health of a user's wrist. In one embodiment, manager system110can be configured so that responsively to the aggregate score falling below a low threshold, manager system110can activate output device array124Y provided by micro blowers, to increase the capacity of a user's skin to tolerate contact. In one embodiment, manager system110can be configured so that responsively to the aggregate score falling below a low threshold, manager system110can decrease a maximum number of microbumps that can be activated at any given time period. 
In one embodiment, manager system110can be configured so that responsively to the aggregate score exceeding a high threshold, manager system110can deactivate output device array124Y provided by microblowers. In one embodiment, manager system110can be configured so that responsively to the aggregate score exceeding a high threshold, manager system110can increase a maximum number of microbumps that can be activated at any given time period. Various available tools, libraries, and/or services can be utilized for implementation of predictive model6002and/or predictive model6004. For example, a machine learning service can provide access to libraries and executable code for support of machine learning functions. A machine learning service can provide access to a set of REST APIs that can be called from any programming language and that permit the integration of predictive analytics into any application. Enabled REST APIs can provide, e.g., retrieval of metadata for a given predictive model, deployment of models and management of deployed models, online deployment, scoring, batch deployment, stream deployment, and monitoring and retraining of deployed models. According to one possible implementation, a machine learning service provided by IBM® WATSON® can provide access to libraries of APACHE® SPARK® and IBM® SPSS® (IBM® WATSON® and SPSS® are registered trademarks of International Business Machines Corporation, and APACHE® and SPARK® are registered trademarks of the Apache Software Foundation).
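The aggregate-score responses described above (microblower activation and adjustment of the maximum active-microbump count) can be sketched as follows; the thresholds, step size, and bounds are illustrative assumptions.

```python
# Below a low aggregate-score threshold the microblowers activate and the
# maximum active-microbump count decreases; above a high threshold the
# microblowers deactivate and the maximum increases.

def adjust_for_aggregate(aggregate, max_active, blowers_on,
                         low=3.0, high=8.0, step=2, floor=2, ceiling=16):
    """Respond to the aggregate Eq. 1 score across all microbumps."""
    if aggregate < low:
        # Skin tolerance is low: cool the wrist and reduce contact points.
        blowers_on = True
        max_active = max(floor, max_active - step)
    elif aggregate > high:
        # Skin tolerance is high: blowers off, allow more contact points.
        blowers_on = False
        max_active = min(ceiling, max_active + step)
    return max_active, blowers_on

low_state = adjust_for_aggregate(aggregate=2.1, max_active=10, blowers_on=False)
high_state = adjust_for_aggregate(aggregate=9.0, max_active=8, blowers_on=True)
```

The dead band between the two thresholds leaves the current settings unchanged, avoiding oscillation when the aggregate score hovers near a single cutoff.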
Training of predictive model6002and/or predictive model6004can include use of, e.g., support vector machines (SVM), Bayesian networks, neural networks, and/or other machine learning technologies. At return block1083, data repository108can return to a stage preceding block1082to receive additional returned sensor output data, application usage data, and returned social media data. Referring to smartwatches120A-120Z, the at least one smartwatch, on completion of implement block1203, can proceed to block1204. At block1204, the at least one smartwatch can return to a stage preceding block1202in order to respond to received data call data so that the at least one smartwatch can send return sensor data and application usage data to manager system110at block1202. Referring to other UE devices130A-130Z, other UE devices130A-130Z at return block1302can return to a stage preceding block1301so that the other UE devices130A-130Z are able to respond to data call data sent at block1103and respond at block1301to such data call data to return sensor output data and application usage data at block1301. Social media system140at return block1402can return to a stage preceding block1401so that social media system140is able to respond to data call data sent by manager system110at block1104in order for social media system140at block1401to send social media data to manager system110. There is set forth herein a method and system by which touch area on the skin can be reduced and by which, based on skin parameter analysis, the touch location on the skin can be changed dynamically, so that perspiration ducts are not covered. Embodiments herein recognize that some users may experience discomfort or skin irritation when wearing a smartwatch for prolonged periods. Embodiments herein recognize, for example, that when a smartwatch wristband is covering the wrist skin, then perspiration ducts can be blocked. Blocking of perspiration ducts can cause skin rashes.
Embodiments herein also recognize that if a user wears a smartwatch too tightly or loosely for an extended period, various problems could arise such as redness, itchiness, swelling, and irritation. Embodiments herein recognize that in some scenarios, perspiration, water, soap, and other irritants get trapped against a wearer's skin under the device, causing skin reactions. If the wristband is too tight, it could block the perspiration ducts. If the wristband is too loose, the wristband can slide on a wrist to cause further irritation. Embodiments herein recognize that when a user wears a smartwatch, the smartwatch can be covering the skin surface around the wrist, and hence perspiration ducts can be blocked to potentially cause skin rashes. Embodiments herein provide a method and system by which a smartwatch wristband can be controlled to avoid blocking the perspiration ducts of any user when the user wears the smartwatch. Accordingly, the skin surface will not be blocked and hence there will not be any chance of skin rashes. According to another aspect, a smartwatch display screen can be dynamically controlled in dependence on predicting a user's need to look at the display area of the smartwatch. According to one aspect, an area occupied by a display screen can be moved to increase airflow to a user's skin. A smartwatch display screen can be configured as a rollable display screen. Based on predicted viewing use of a user, a display screen can be unrolled and a display viewing area defined by the display screen can be created for the smartwatch. Based on a user's predicted viewing use, the rollable display area can be expanded with use of first and second chambers which can be provided as cylindrical chambers, each of which can have a roller for winding and unwinding a display screen segment. A system can be configured to iteratively predict a user's viewing use of a smartwatch and can apply command data so that the viewing area of the display screen matches the predicted viewing use.
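One way to map the predicted viewing parameter value (on the 0.0 hidden to 2.0 scale set forth herein) onto a display screen configuration and a matching unrolled viewing area is sketched below; the band boundaries and unroll fractions are illustrative assumptions.

```python
# Map a predicted viewing parameter value onto one of the four display
# screen configurations and a fraction of the rollable screen to unwind
# from the chambers.

def screen_configuration(predicted_value):
    """Return (configuration name, fraction of display screen unrolled)."""
    if predicted_value < 0.25:
        return ("hidden", 0.0)     # screen fully rolled into the chambers
    if predicted_value < 1.0:
        return ("limited", 0.33)
    if predicted_value < 1.75:
        return ("standard", 0.66)
    return ("extended", 1.0)       # both rollers fully unwound

config = screen_configuration(1.09)
```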
The inner (wrist facing) surface of a smartwatch can include microbumps with activation states controllable with use of microfluidic features defined within a housing. A system herein can control the microfluidic features to dynamically change the position of the microbumps and thereby alter the position of the smartwatch's contact area with skin. The smartwatch can be controlled so that a user's wrist is absent of an area that is continually contacted during a wear period of a smartwatch. A smartwatch wristband can have incorporated therein a plurality of sensors. The plurality of sensors can include, e.g., humidity sensors, temperature sensors, ultrasound sensors, and/or camera image sensors. The sensors can output sensor parameter values provided, e.g., by a perspiration level at a particular location on a wrist (defining with other locations a perspiration spatial pattern), temperature at a particular location on a wrist (defining with other locations a temperature spatial pattern), and skin condition anomaly level at a particular location on a wrist (defining with other locations a skin condition anomaly spatial pattern). A system herein can determine current sensor output parameter values and, with use of a trained predictive model, can predict sensor output values at a next time period. A system herein can use current sensor output parameter values and/or predicted next time period sensor output parameter values to iteratively adjust a spatial pattern defined by active state wristband microbumps. A smartwatch wristband can incorporate an ultrasound scanning module which can scan the position of a vein and, accordingly, can identify an appropriate portion of a wristband where microbumps will be activated, so that biometric data can be gathered with minimal touch while the touch area changes dynamically. 
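The iterative adjustment of the active microbump spatial pattern can be sketched as follows. This is a minimal illustration, assuming a one-dimensional list of per-location perspiration predictions and a hypothetical perspiration threshold; a real wristband would work on a two-dimensional spatial pattern:

```python
# Illustrative sketch (hypothetical threshold): choose which wristband
# microbump locations to activate so that predicted high-perspiration
# spots and the previously contacted spots are left uncovered.

def select_microbumps(perspiration, last_active, n_active, sweat_limit=0.5):
    """perspiration: predicted per-location levels for the next period;
    last_active: set of location indices contacted in the prior period."""
    candidates = [i for i, p in enumerate(perspiration)
                  if p < sweat_limit and i not in last_active]
    # Prefer the driest locations so perspiration ducts stay clear.
    candidates.sort(key=lambda i: perspiration[i])
    return set(candidates[:n_active])

# Location 1 is predicted too sweaty; location 0 was contacted last period.
pattern = select_microbumps([0.2, 0.7, 0.1, 0.4, 0.3],
                            last_active={0}, n_active=2)
```

Excluding `last_active` each period is what keeps any single contact area from being covered continuously, per the wear-period constraint above.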
Using historical learning, a smartwatch herein will know when a display screen viewing area should be created based on a user's predicted viewing use. When a system herein predicts that a user wants to view a display screen, the system can command a smartwatch to unroll a display screen to define an exposed viewing area of a display screen. The position, dimension, and shape (touch area with wrist) of activated microbumps can be provided based on a perspiration pattern so that one area is not covered for an entirety of a smartwatch wearing session. A system herein can be configured to track any irritation in the skin because of applied pressure. The frequency and density of microbumps that are activated can be determined dynamically based on a tracked current health condition of a user. System1000can be configured so that if the health condition of a user is poor, then continuous biometric data gathering will be required. Activities of a user can be tracked, e.g., with use of application usage data, and/or natural language processing output parameter values. Historical activities defining historical data can be used to train predictive models. Activities that can be tracked can include activities that are stressful, or which have a threshold exceeding physical activity level. A system herein can control a display screen and/or a wristband of a smartwatch in dependence on tracked activities of a user. According to one embodiment, a wristband can include an array of output devices provided by an array of microblowers configured to blow air through gaps defined amongst microbumps of a wristband. According to one embodiment, a system herein can control such microblowers dynamically in dependence on detected conditions. Embodiments herein can feature a rollable display screen defining an adjustable viewing area. 
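The passage above notes that microbump frequency and density can be determined dynamically from a tracked health condition, with continuous biometric gathering when the condition is poor. A minimal sketch of that decision, assuming an illustrative 0-to-1 health score and hypothetical thresholds, could look like:

```python
# Hedged sketch: derive microbump activation density and a sensing
# interval from a tracked health score. Scale and cut-offs are assumed.

def sensing_plan(health_score):
    """health_score in [0, 1]; lower means poorer tracked condition."""
    if health_score < 0.3:
        # Poor condition: continuous biometric data gathering required.
        return {"density": "high", "interval_s": 0}
    if health_score < 0.7:
        return {"density": "medium", "interval_s": 60}
    return {"density": "low", "interval_s": 300}
```

The returned plan would then feed the microbump activation logic (and, in embodiments with microblowers, could likewise set a blowing duty cycle), though those couplings are design choices beyond this sketch.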
A system herein can be configured so that when a smartwatch display screen is not used for viewing, the display screen can be controlled to be in a hidden or limited display screen configuration to define a gap in an area that can be occupied by a display screen, resulting in increased airflow to a wrist. A system herein can be configured so that when a user wants to view a display screen, the system can apply command data for rolling out a rollable display screen to define an exposed viewing area for a user to view. Rolling out of a display screen can be expanded to increase a dimension of an exposed viewing area. The wristband of a smartwatch can have microfluidics-based microbumps, and the microbumps can be programmatically created. The microbumps can touch the wrist skin, and from time to time, the touch position can also change. A system can be configured so that a certain area of skin on a wrist will not be persistently covered during a wearing session of a smartwatch. Accordingly, airflow is encouraged, and a perspiration duct will not be covered. The touch location on the skin of a wrist area can be changed from time to time. A wristband herein can include microfluidic layers which incorporate various sensors, e.g., humidity sensors, temperature sensors, ultrasound sensors, and camera image sensors. When a user wears a smartwatch, the wristband can identify the touch location dynamically. The sensor feed from the wristband can identify any change in sensor output parameter values specifying a skin attribute and can track the perspiration generation pattern under the smartwatch. A system can be configured to identify the microfluidics channels for activation of a microbump, and how microbumps will be created to touch the skin surface. 
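Identifying the microfluidics channels to pressurize for a desired set of microbumps can be sketched as a simple lookup. The channel layout below (16 bump locations, 4 bumps per channel) and all identifiers are hypothetical, chosen only to illustrate the mapping step:

```python
# Illustrative mapping (all identifiers and the 4-bumps-per-channel
# layout are hypothetical) from a desired microbump location to the
# microfluidic channel that inflates it.

CHANNEL_MAP = {loc: f"channel_{loc // 4}" for loc in range(16)}

def channels_for(locations):
    """Return the set of microfluidic channels to pressurize so that the
    requested microbump locations are created to touch the skin."""
    return {CHANNEL_MAP[loc] for loc in locations}
```

In a real wristband the map would come from the physical routing of the microfluidic layers, and a channel granularity coarser than one bump means activating one channel raises every bump it feeds.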
A smartwatch can identify how the touch position on the skin area needs to be changed and, accordingly, identify microfluidics channels to control for activation of select microbumps touching the skin surface. The smartwatch wristband can include ultrasound sensors for ultrasound scanning, and based on an ultrasound scan, a vein position can be identified. A system can be configured so that a minimal number of microbumps are in contact with a vein location of a wrist for accurate sensing of heart rate, also by ultrasound. Based on the position of the vein, select microbumps at appropriate locations on an inner (wrist facing) surface of a wristband can be selectively activated for touching the skin surface. A display screen can be defined by a rollable display screen, which can be rolled and contained in one or more chambers such as cylindrical chambers. When a user needs the display area to be created, the display area can be created dynamically, and accordingly, the user can view the display area. Using historical learning, the smartwatch can identify when the user needs to view the display area of the smartwatch, and accordingly, the display area can be unrolled from the rollable cylinder. When the smartwatch detects that the user wants to look at the smartwatch display, the rollable display area can be expanded and the user can view the required content in the display screen. Certain embodiments herein may offer various technical computing advantages to address problems arising in the realm of computer systems. Embodiments herein can involve improvements to computer systems, particularly in the realm of smartwatch computer systems. Embodiments herein can include features for predicting a user's display screen viewing of a smartwatch and, in dependence on the generated prediction as to subsequent viewing use, applying command data for changing a display screen configuration of a smartwatch. 
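The vein-aware placement described above (a minimal number of bumps over the vein for heart-rate sensing, the rest away from it) can be sketched as follows. The location indices, vein positions, and counts are hypothetical examples only:

```python
# Hypothetical sketch: given vein locations from an ultrasound scan,
# keep a minimal number of bumps over the vein for heart-rate sensing
# and place the remaining contact points away from it.

def place_bumps(vein_locs, all_locs, n_sensing=1, n_support=3):
    """Return the ordered list of microbump locations to activate."""
    on_vein = sorted(vein_locs)[:n_sensing]      # minimal vein contact
    off_vein = [l for l in sorted(all_locs)
                if l not in vein_locs][:n_support]
    return on_vein + off_vein

# Ultrasound scan (illustrative) reports the vein under locations 4-5.
layout = place_bumps(vein_locs={4, 5}, all_locs=range(8))
```

A fuller implementation would also rotate the off-vein support locations over time, combining this placement with the perspiration-aware selection described earlier.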
Embodiments herein can include controlling a smartwatch to exhibit different display screen configurations such as a hidden display screen configuration, a limited display screen configuration, a standard display screen configuration, and an extended display screen configuration. Embodiments herein can predict physiological data associated with a user's wearing of a smartwatch. Embodiments herein can include a smartwatch wristband featuring microbumps that can be selectively activated to establish contact between a wristband and a user's wrist at selected locations. Embodiments herein can feature control of such microbumps so that at a given time, only a subset of microbumps of a wristband contact a user's wrist and further, so that the particular subset of microbumps which contact a user's wrist at any particular time changes dynamically over time. Embodiments herein can include predicting the level of perspiration on a user's wrist and can responsively control a smartwatch to optimize health and comfort of a user in dependence on the predicted perspiration level. Embodiments herein can include predicting a physiological parameter value associated with a user and can responsively control a smartwatch to optimize health and comfort of a user in dependence on the predicted physiological parameter value. Various decision data structures can be used to drive artificial intelligence (AI) decision making, such as a decision data structure that cognitively maps topics to physical activity levels. Decision data structures as set forth herein can be updated by machine learning so that accuracy and reliability are iteratively improved over time without resource-consuming rules-intensive processing. Machine learning processes can be performed for increased accuracy and for reduction of reliance on rules-based criteria and thus reduced computational overhead. 
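A decision data structure that maps topics to physical activity levels, as referenced above, could take a form like the following. The topic names and numeric levels are invented for illustration; in practice the entries would be learned and updated by the machine learning processes described:

```python
# Assumed example of a decision data structure cognitively mapping
# topics (e.g., extracted by NLP from application usage or social media
# data) to expected physical activity levels. Values are illustrative.

TOPIC_ACTIVITY = {
    "running": 0.9,
    "gym": 0.8,
    "commute": 0.4,
    "meeting": 0.1,
}

def activity_level(topics, default=0.2):
    """Return the highest activity level implied by the extracted topics."""
    return max((TOPIC_ACTIVITY.get(t, default) for t in topics),
               default=default)
```

A table lookup like this keeps per-decision cost low; machine learning updates to the table values would then improve accuracy over time without rules-intensive processing at decision time.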
For enhancement of computational accuracies, embodiments can feature computational platforms existing only in the realm of computer networks, such as artificial intelligence platforms and machine learning platforms. Embodiments herein can employ data structuring processes, e.g., processing for transforming unstructured data into a form optimized for computerized processing. Embodiments herein can examine data from diverse data sources such as mobile device sensors, mobile device applications, and social media servers. Embodiments herein can include artificial intelligence processing platforms featuring improved processes to transform unstructured data into structured form permitting computer-based analytics and decision making. Embodiments herein can include particular arrangements for both collecting rich data into a data repository and additional particular arrangements for updating such data, and for use of that data to drive artificial intelligence decision making. Certain embodiments may be implemented by use of a cloud platform/data center in various types including Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Database-as-a-Service (DBaaS), and combinations thereof based on types of subscription. FIGS.7-9depict various aspects of computing, including a computer system and cloud computing, in accordance with one or more aspects set forth herein. It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. 
networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). 
Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. 
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now toFIG.7, a schematic of an example of a computing node is shown. Computing node10is only one example of a computing node suitable for use as a cloud computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, computing node10is capable of being implemented and/or performing any of the functionality set forth hereinabove. Computing node10can be implemented as a cloud computing node in a cloud computing environment, or can be implemented as a computing node in a computing environment other than a cloud computing environment. In computing node10there is a computer system12, which is operational with numerous other general purpose or special purpose computing system environments or configurations. 
Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system12include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system12may be described in the general context of computer system-executable instructions, such as program processes, being executed by a computer system. Generally, program processes may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system12may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program processes may be located in both local and remote computer system storage media including memory storage devices. As shown inFIG.7, computer system12in computing node10is shown in the form of a computing device. The components of computer system12may include, but are not limited to, one or more processor16, a system memory28, and a bus18that couples various system components including system memory28to processor16. In one embodiment, computing node10is a computing node of a non-cloud computing environment. In one embodiment, computing node10is a computing node of a cloud computing environment as set forth herein in connection withFIGS.8-9. 
Bus18represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system12typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system12, and it includes both volatile and non-volatile media, removable and non-removable media. System memory28can include computer system readable media in the form of volatile memory, such as random access memory (RAM)30and/or cache memory32. Computer system12may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system34can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus18by one or more data media interfaces. As will be further depicted and described below, memory28may include at least one program product having a set (e.g., at least one) of program processes that are configured to carry out the functions of embodiments of the invention. 
One or more program40, having a set (at least one) of program processes42, may be stored in memory28by way of example, and not limitation, as well as an operating system, one or more application programs, other program processes, and program data. One or more program40including program processes42can generally carry out the functions set forth herein. In one embodiment, manager system110can include one or more computing node10and can include one or more program40for performing functions described with reference to manager system110as set forth in the flowchart ofFIG.4. In one embodiment, respective smartwatches120A-120Z can include one or more computing node10and can include one or more program40for performing functions described with reference to smartwatches120A-120Z as set forth in the flowchart ofFIG.4. In one embodiment, respective UE devices130A-130Z can include one or more computing node10and can include one or more program40for performing functions described with reference to UE devices130A-130Z as set forth in the flowchart ofFIG.4. In one embodiment, social media system140can include one or more computing node10and can include one or more program40for performing functions described with reference to social media system140as set forth in the flowchart ofFIG.4. In one embodiment, the computing node based systems and devices depicted inFIG.1can include one or more program for performing functions described with reference to such computing node based systems and devices. Computer system12may also communicate with one or more external devices14such as a keyboard, a pointing device, a display24, etc.; one or more devices that enable a user to interact with computer system12; and/or any devices (e.g., network card, modem, etc.) that enable computer system12to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces22. 
Still yet, computer system12can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter20. As depicted, network adapter20communicates with the other components of computer system12via bus18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. In addition to or in place of having external devices14and display24, which can be configured to provide user interface functionality, computing node10in one embodiment can include display25connected to bus18. In one embodiment, display25can be configured as a touch display screen and can be configured to provide user interface functionality, e.g. can facilitate virtual keyboard functionality and input of total data. Computer system12in one embodiment can also include one or more sensor device27connected to bus18. One or more sensor device27can alternatively be connected through I/O interface(s)22. One or more sensor device27can include a Global Positioning Sensor (GPS) device in one embodiment and can be configured to provide a location of computing node10. In one embodiment, one or more sensor device27can alternatively or in addition include, e.g., one or more of a camera image sensor, a gyroscope, a temperature sensor, a humidity sensor, a pulse sensor, a blood pressure (bp) sensor, an ultrasound sensor or an audio input device. Computer system12can include one or more network adapter20. InFIG.8computing node10is described as being implemented in a cloud computing environment and accordingly is referred to as a cloud computing node in the context ofFIG.8. 
Referring now toFIG.8, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50comprises one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.8are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.9, a set of functional abstraction layers provided by cloud computing environment50(FIG.8) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.9are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. 
Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and processing components96for controlling a smartwatch as set forth herein. The processing components96can be implemented with use of one or more program40described inFIG.7. 
The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. 
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. 
As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise” (and any form of comprise, such as “comprises” and “comprising”), “have” (and any form of have, such as “has” and “having”), “include” (and any form of include, such as “includes” and “including”), and “contain” (and any form of contain, such as “contains” and “containing”) are open-ended linking verbs. As a result, a method or device that “comprises,” “has,” “includes,” or “contains” one or more steps or elements possesses those one or more steps or elements, but is not limited to possessing only those one or more steps or elements. Likewise, a step of a method or an element of a device that “comprises,” “has,” “includes,” or “contains” one or more features possesses those one or more features, but is not limited to possessing only those one or more features. Forms of the term “based on” herein encompass relationships where an element is partially based on as well as relationships where an element is entirely based on. Methods, products and systems described as having a certain number of elements can be practiced with less than or greater than the certain number of elements. Furthermore, a device or structure that is configured in a certain way is configured in at least that way, but may also be configured in ways that are not listed. It is contemplated that numerical values, as well as other values that are recited herein are modified by the term “about”, whether expressly stated or inherently derived by the discussion of the present disclosure. As used herein, the term “about” defines the numerical boundaries of the modified values so as to include, but not be limited to, tolerances and values up to, and including the numerical value so modified. 
That is, numerical values can include the actual value that is expressly stated, as well as other values that are, or can be, the decimal, fractional, or other multiple of the actual value indicated, and/or described in the disclosure. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below, if any, are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description set forth herein has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of one or more aspects set forth herein and the practical application, and to enable others of ordinary skill in the art to understand one or more aspects as described herein for various embodiments with various modifications as are suited to the particular use contemplated. | 118,415 |
11860585 | DETAILED DESCRIPTION Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following descriptions are not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as can be included within the spirit and scope of the described embodiments as defined by the appended claims. Embodiments disclosed herein are directed to an electronic device, such as a portable and/or wearable electronic device that may use an air-permeable seal for equalizing air pressure within the electronic device with the air pressure of the external environment. The air-permeable seal may be implemented on a smartwatch or smartphone and be positioned between a cover and a housing of the electronic device to allow pressure equalization between an internal chamber of the electronic device and the external environment. Unlike some traditional pressure equalization vents which can rupture, tear, and/or leak as pressure on the seal increases, or become clogged over time, the air-permeable seal system described herein may improve the robustness and reliability of electronic devices by compressing, and thereby sealing off, an internal cavity of the electronic device as the external pressure on the device increases. Compression of the air-permeable seal can increase a resistance of the seal to water ingress, which may allow a device incorporating these seals to be taken to greater underwater depths. In some embodiments, an electronic device may include an internal pressure-sensing device that is positioned within an internal chamber of the electronic device and measures environmental and/or internal pressures of the electronic device. Output from the pressure-sensing device may be used to determine the device's elevation, velocity, direction of motion, orientation, water depth, and so on. 
For example, a pressure-sensing device may make barometric pressure measurements to determine an elevation of the device or a change in elevation of the device. The accuracy of pressure measurements from the internal pressure-sensing device may rely on the rate of pressure equalization between the internal cavity and the external environment. Accordingly, if pressure equalization is slow, pressure measurements made by the internal pressure-sensing device may lag behind the actual external pressure. Embodiments described herein are generally directed to electronic devices incorporating a seal that is permeable to air and resists/inhibits the ingress of water (which may be referred to as an “air-permeable seal”), and that is positioned between a cover glass and a housing of the electronic device. Such a seal system may be incorporated into electronic devices such as smartwatches, mobile phones, tablet computing devices, laptop computing devices, personal digital assistants, digital media players, wearable devices, and the like to provide an air-permeable seal that allows pressure equalization between an internal chamber of the device and the external environment. When the pressure of the environment around the electronic device increases, the pressure on the cover glass can increase and compress the seal, restricting air flow into and out of the device. As the external pressure continues to increase, the air-permeable seal may continue to compress, which may further restrict air flow through the seal and/or increase the water resistance of the seal. When the seal is fully compressed, the seal may become impermeable to air as well as resist water penetration at greater pressures (depths), thereby isolating/sealing the internal chamber of the electronic device from the external environment. As described herein, the air-permeable seal may be positioned between two or more outer housing members. 
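The elevation estimate mentioned above is conventionally derived from a barometric reading using the international standard atmosphere model. The sketch below is an illustrative assumption of how such a conversion might look; the function name and constants are not taken from the disclosure.

```python
# Illustrative sketch only (not the patented implementation): converting a
# barometric reading to an altitude estimate via the ISA (International
# Standard Atmosphere) hypsometric relation for the troposphere (~0-11 km).

def altitude_from_pressure(pressure_pa: float, sea_level_pa: float = 101325.0) -> float:
    """Approximate altitude in meters from an absolute pressure in pascals."""
    return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))
```

For instance, a reading of about 89,875 Pa maps to roughly 1,000 m of elevation; if the seal slows pressure equalization, the internal sensor reports a stale pressure and this altitude estimate lags accordingly.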
For example, the air-permeable seal can be positioned between a cover glass and a housing of an electronic device. The air-permeable seal can extend around a perimeter of the cover glass such that the exposed surface area of the air-permeable seal is maximized to increase the air flow between the internal chamber and the external environment. In some embodiments, the air-permeable seal can couple the cover glass to the housing. Accordingly, the pressure that is applied to the front cover glass may be transferred to the air-permeable seal, compressing the air-permeable seal, which can restrict air flow through the seal and/or increase a water resistance of the seal. As the pressure on the cover glass is decreased, the air-permeable seal may expand and the air flow through the seal may increase, thereby allowing pressure to equalize more quickly between an internal chamber of the device and the external environment. As described herein, the air-permeable seal may include multiple layers and/or multiple different materials. For example, the air-permeable seal can include a first air-permeable material forming a first layer of the air-permeable seal, where the first material is air-permeable and repels water. The first material may be coupled with the housing via a second layer of adhesive material and may also be coupled to the cover glass via a third layer of adhesive material. The second and third layers of adhesive materials can be stiffer than the first air-permeable material such that, as the cover glass is moved toward the housing, the first air-permeable material compresses. In some cases, the second and third layers of adhesive material may be substantially impermeable to both water and air. Accordingly, pressure equalization between the internal cavity of the device and the external environment may occur via air flow through the first air-permeable material. 
In some embodiments, the seal can include multiple layers of air-permeable material to increase the air flow between the internal cavity and the external environment, which may reduce lag in pressure measurements from an internal pressure-sensing device. In some embodiments, as described herein, the air-permeable seal system can be used to estimate an external water pressure. For example, when the electronic device is brought underwater, the increased pressure on a cover glass of the device may compress the air-permeable seal, thereby sealing the internal chamber from the external environment. In some cases, the air-permeable seal can include a second compressible layer that is also impermeable to water. As the external pressure increases (e.g., due to increasing depth), the second compressible layer may compress, thereby compressing air sealed within the internal chamber. The internal pressure-sensing device may measure these pressure changes in the internal chamber due to the seal compressing, and use these pressure measurements to estimate an external pressure and/or water depth of the device. In some embodiments, as described herein, the air-permeable seal system can include a compression limiter. For example, the compression limiter may restrict movement of the cover glass towards the housing, thereby restricting the amount of compression experienced by the air-permeable seal. In some cases, the compression limiter may protect the air-permeable seal from damage due to over-compression. As described herein, the air-permeable seal system can also include a backup or secondary seal system. For example, a second seal may be positioned between the cover glass and the housing. In an uncompressed state, the second seal may be offset from either the cover glass or the housing to form an air gap. 
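The depth estimate described above rests on two textbook relations: Boyle's law for the air trapped in the sealed chamber, and the hydrostatic equation for the water column. The following sketch illustrates those relations under assumed values (fresh-water density, standard gravity; the function names are hypothetical) and is not the device's disclosed algorithm.

```python
# Illustrative physics sketch, not the patented algorithm.
RHO_WATER = 1000.0   # kg/m^3 (fresh water, assumed)
G = 9.80665          # m/s^2 (standard gravity)

def trapped_air_pressure(p_initial_pa: float, v_initial: float, v_compressed: float) -> float:
    """Boyle's law: pressure of the sealed air after the chamber volume shrinks."""
    return p_initial_pa * v_initial / v_compressed

def depth_from_pressure(external_pa: float, surface_pa: float = 101325.0) -> float:
    """Approximate water depth in meters from absolute external pressure in pascals."""
    return (external_pa - surface_pa) / (RHO_WATER * G)
```

For instance, halving the sealed volume doubles the trapped-air pressure that an internal sensor would read, and an absolute external pressure of about 199.4 kPa corresponds to roughly 10 m of fresh water.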
Accordingly, in the uncompressed state, the air-permeable seal may be the primary mechanism for preventing water from entering the internal chamber while allowing the pressure to equalize with the external environment. In a compressed state, the cover glass may move toward the housing and the secondary seal may become compressed between the cover glass and the housing which may further seal the internal chamber. These and other embodiments are discussed below with reference toFIGS.1A-11. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these Figures is for explanatory purposes only and should not be construed as limiting. FIG.1Aillustrates a first view of an example electronic device100incorporating an air-permeable seal. The electronic device100is depicted as an electronic watch (e.g., a smartwatch), though this is one example embodiment of an electronic device and the concepts described herein may apply equally or by analogy to other electronic devices, including mobile phones (e.g., smartphones), tablet computers, notebook computers, head-mounted displays, digital media players (e.g., mp3 players), health-monitoring devices, other portable electronic devices, or the like. The electronic device100can incorporate an air-permeable seal as described herein. The electronic device100may be worn by a user and include one or more sensors that determine or estimate a condition of the environment (e.g., barometric pressure, moisture level, temperature, and so on) and/or condition(s) of the user (e.g., heart rate, position, direction of movement, body temperature, and so on), which may be displayed or presented to the user. Different sensors may be positioned at different locations on or within the electronic device100depending on operating requirements of a particular sensor, the condition being detected by the sensor, the design of the electronic device100, and so on. 
In some cases, it may be desirable to protect electronic and/or other water-sensitive components that are located within the electronic device100from being exposed to water, or other environmental conditions such as dust, debris, contamination, and so on. Accordingly, the electronic device100can be sealed to protect these components. The electronic device100can include an air-permeable seal to allow pressure in the sealed internal chamber of the electronic device to equalize with the external environmental pressure. As used herein, the term air-permeable refers to materials that are permeable to air and/or impermeable or resistant to water ingress. For example, an air-permeable seal can allow air to move through one or more materials in the seal such that pressure differences across the seal can be equalized, and may prevent water from entering the seal. In some cases, the air-permeable seal may alleviate the buildup of pressure within the internal chamber of the electronic device100, which, without the air-permeable seal, would cause other seals or components of the electronic device to fail. Additionally or alternatively, the air-permeable seal can allow a pressure-sensing device located within the internal chamber of the electronic device100to be used to determine a barometric pressure of the external environment. For example, the air-permeable seal can allow the pressure in the internal chamber to equalize with the pressure of the ambient environment. Accordingly, barometric pressure measured by the internal pressure-sensing device can correspond to the external barometric pressure. As used herein, the term air-impermeable refers to materials that do not allow air to move through the material. For example, an air-impermeable material can prevent an air pressure on one side of the seal (e.g., ambient air pressure) from equalizing with a second, different, air pressure on the other side of the seal (e.g., air pressure in an internal chamber). 
The electronic device100can include a housing102and a cover glass104(which may be referred to simply as a “cover”) coupled to the housing102. The cover104can be transparent and positioned over a display106. The housing102, the cover104and the air-permeable seal, along with other components, may form a sealed internal chamber or volume of the electronic device100. The sealed internal chamber can contain a pressure-sensing device along with other electrical components. In some cases, the cover104defines a substantial entirety of the front surface of the electronic device100. The cover104can also define an input surface of the electronic device100. For example, as described herein, the electronic device100may include touch and/or force sensors that detect inputs applied to the cover104. The cover104may be formed from or include glass, sapphire, polymer, dielectric, or any other suitable material. The display106can be positioned under the cover104and at least partially within the housing102. The display106can define an output region in which graphical outputs are displayed. Graphical outputs may include graphical user interfaces, user interface elements (e.g., buttons, sliders, etc.), text, lists, photographs, animations, videos, or the like. The display106can include a liquid-crystal display (LCD), organic light emitting diode display (OLED), or any other suitable components or display technology. In some cases, the display106can output a graphical user interface with one or more graphical objects that display information collected from or derived from the pressure-sensing system. For example, the display106can output a current barometric pressure associated with the electronic device100or estimated altitude of the electronic device100. The display106may include or be associated with touch sensors and/or force sensors that extend along the output region of the display and which may use any suitable sensing elements and/or sensing techniques. 
Using touch sensors, the electronic device100may detect touch inputs applied to the cover104, including detecting locations of touch inputs, motions of touch inputs (e.g., the speed, direction, or other parameters of a gesture applied to the cover104), or the like. Using force sensors, the device100may detect amounts or magnitudes of force associated with touch events applied to the cover104. The touch and/or force sensors may detect various types of user inputs to control or modify the operation of the device, including taps, swipes, multiple finger inputs, single- or multiple-finger touch gestures, presses, and the like. Touch and/or force sensors usable with wearable electronic devices, such as the device100, are described below. The electronic device100may also include a crown108having a cap, protruding portion, or component(s) or feature(s) (collectively referred to herein as a “body”) positioned along a side surface of the housing102. At least a portion of the crown108(such as the body) may protrude from, or otherwise be located outside, the housing102, and may define a generally circular shape or circular exterior surface. The exterior surface of the body of the crown108may be textured, knurled, grooved, or otherwise have features that may improve the tactile feel of the crown108and/or facilitate rotation sensing. The crown108may facilitate a variety of potential interactions. For example, the crown108may be rotated by a user (e.g., the crown may receive rotational inputs). Rotational inputs of the crown108may zoom, scroll, rotate, or otherwise manipulate a user interface or other object displayed on the display106(among other possible functions). The crown108may also be translated or pressed (e.g., axially) by the user. Translational or axial inputs may select highlighted objects or icons, cause a user interface to return to a previous menu or display, or activate or deactivate functions (among other possible functions). 
In some cases, the device100may sense touch inputs or gestures applied to the crown108, such as a finger sliding along the body of the crown108(which may occur when the crown108is configured to not rotate) or a finger touching the body of the crown108. In such cases, sliding gestures may cause operations similar to the rotational inputs, and touches on an end face may cause operations similar to the translational inputs. As used herein, rotational inputs include both rotational movements of the crown (e.g., where the crown is free to rotate), as well as sliding inputs that are produced when a user slides a finger or object along the surface of a crown in a manner that resembles a rotation (e.g., where the crown is fixed and/or does not freely rotate). In some embodiments, rotating, translating, or otherwise moving the crown108initiates a pressure measurement by a pressure-sensing system (such as an external and/or internal pressure-sensing device) located on or within the electronic device100. In some cases, selecting an activity, requesting a location, specific movements of the user, and so on may also initiate pressure measurements by the pressure-sensing system. The electronic device100may also include other inputs, switches, buttons, or the like. For example, the electronic device100includes a button110. The button110may be a movable button (as depicted) or a touch-sensitive region of the housing102. The button110may control various aspects of the electronic device100. For example, the button110may be used to select icons, items, or other objects displayed on the display106, to activate or deactivate functions (e.g., to silence an alarm or alert), or the like. The electronic device100may include a band112coupled to the housing102. The band may be configured to couple the electronic device100to a user, such as to the user's arm or wrist. 
A portion of the band112may be received in a channel that extends along an internal side of the housing102, as described herein. The band112may be secured to the housing within the channel to retain the band112on the housing102. FIG.1Billustrates an exploded view of the electronic device100. The electronic device100can include an air-permeable seal105(hereinafter referred to as the “seal”) positioned between the housing102and the cover104. The seal105can extend along and/or around a perimeter of the cover104and couple the cover104to the housing102. In some embodiments, the seal105can be positioned on an upper surface of the housing102, and orient the cover104at least partially within an upper opening defined by the housing102. The seal105can include an air-permeable compressible material that inhibits water ingress. For example, the seal105can include a polytetrafluoroethylene (PTFE) material, such as expanded PTFE, or nylon, polyester, acrylic, or any other suitable materials. In some embodiments, the seal105can include foam or expanded materials that are permeable to air but resist the movement of water through the material. When a force is applied to the cover104, this force can be transferred to the seal105, causing the seal105to compress between the housing102and the cover104. This compression can cause the density of the seal105to increase, which can increase the water resistance of the seal105(ability of the seal to inhibit water ingress) and/or restrict air flow through the seal105. In some cases, compression of the seal105can cause the seal105to become impermeable to air. The seal105can be configured such that when the pressure/force is removed from the cover104, the seal105can expand, which allows air to move through the seal105and equalize the pressure inside the housing with an external pressure. 
In some embodiments, the housing102may be sealed and/or otherwise include one or more watertight and/or airtight seals and the seal105may be the primary or only mechanism for equalizing a pressure inside the housing with an external pressure. Accordingly, if the seal105is compressed and air flow is restricted through the seal105, an internal pressure of the housing may not equalize with the external air pressure. In some embodiments, one or more input devices, such as the other portions of the housings, the crown108, and/or the button110, also include an air-permeable seal. For example, as illustrated inFIG.1B, the button110can include an air-permeable button seal111that is positioned between the button110and the housing102. The button seal111can function as described herein to allow air to move between the external environment and the internal chamber and prevent the ingress of water into the internal chamber. In some cases, the properties of the different seals can be configured based on their location and/or the type of opening being sealed. For example, the button seal111could be a softer material that compresses more easily than the cover seal105, such that the button seal111compresses in response to lower forces that may be generated by the smaller surface area of the button110. In this regard, the electronic device100can have multiple different seals that are positioned at different locations on the device and can have different properties that are based on the operating conditions of the structure that is being sealed. The housing102can define an upper opening103that is formed by one or more sidewalls of the housing and extends around an outer periphery of the housing102. The cover104can be positioned at least partially within the upper opening103. 
For example, a first portion of the cover104may be located above a top portion of the housing102, and a second portion of the cover104, such as a bottom surface, can extend into the housing and contact a portion of the housing such as a ledge. An upper surface of the cover104can function as a touch input surface and may be positioned above the housing102to allow a user to interact with the display106. The cover104can include one or more side surfaces, between the bottom surface and the upper surface, that define a periphery of the cover104, and the shape of the periphery of the cover104can be configured to match the shape of the upper opening103. In some cases, the seal105can extend along the outer periphery defined by the side surfaces of the cover104. In this regard, the seal105may form a closed boundary between the housing102and the cover104, which can include the seal fully encircling the opening without any gaps or breaks that allow for the passage of water or unrestricted air flow. In some cases, the seal105can be configured to transition between a first state (in which the seal is air-permeable and has a first resistance to water ingress) and a second state (having a second resistance to water ingress that is greater than the first resistance) based on physical stimuli other than pressure. For example, the seal105can include a hydrophilic material such as a hydrogel. Upon being exposed to water, the seal105could absorb water, which can increase the seal's105resistance to further water ingress. In other cases, the seal105could be heat and/or electrically activated. For example, at a first temperature, the seal105could exhibit characteristics of the first state (air-permeable, with a first resistance to water ingress). When heated or cooled to a second temperature, different from the first, the seal105could exhibit characteristics of the second state (increased resistance to water ingress). 
FIG.2Aillustrates a cross-sectional view of an electronic device200taken along section A-A ofFIG.1A. The electronic device200ofFIGS.2A and2Bmay correspond to the other electronic devices described herein, including the electronic device100ofFIGS.1A and1B. A redundant description of shared elements and features is omitted for clarity. The electronic device200can include a housing202, which can be an example of the housing described herein (e.g., housing102); a cover204, which can be an example of the covers described herein (e.g., cover104); and a seal205, which can be an example of the seals described herein (e.g., seal105). The housing202, the cover204, and the seal205can form at least part of an internal chamber203of the electronic device200. The internal chamber203can define an internal volume of the electronic device200, and various components such as electrical components of the electronic device200can be housed within the internal chamber203. As described herein, the cover204can be positioned at least partially within an opening defined by the housing. The cover204can couple to the housing202via the seal205. For example, the seal205can be coupled to the housing202, and the cover204can be supported by the seal205, such that a force/pressure applied to the cover204is transferred to the seal205. In some cases, force (F) applied to the cover204may be due to a pressure of the external environment201. For example, the pressure of the external environment201can be a barometric pressure at the location of the electronic device200. In some cases, the electronic device200can be taken underwater, and the pressure of the external environment201can be a pressure exerted by the water on the electronic device, which can increase as the electronic device is taken deeper in the water. The internal chamber203can also exert a pressure on the cover204(and housing202), which can be based on an internal pressure of air located within the internal chamber203. 
The difference in pressure between the external environment201and the internal chamber203can create a force on the cover204. For example, if the pressure of the external environment201is greater than the pressure of the internal chamber203, then a net positive force may be applied to an outer surface of the cover204, which can cause the seal205to compress, moving the cover204toward the housing202. Subsequently, if the pressure of the external environment201decreases, the seal205may expand and move the cover204away from the housing202. In some embodiments, the seal205can include a porous material, which may allow air to move into and out of the internal chamber203. Accordingly, if a pressure differential exists between the internal chamber203and the external environment201, then the seal205may allow air to move into or out of the internal chamber203to equalize a pressure of the internal chamber with a pressure of the external environment201. In some embodiments, the seal205can be configured to remain substantially uncompressed when the electronic device200is located in an ambient air environment at external environmental pressures typical of locations inhabited by a person (e.g., around sea level to around 5,000 or 10,000 feet above sea level, or greater). Accordingly, when located in an ambient air environment, the seal205may remain substantially uncompressed and can equalize the pressure of the internal chamber203with an ambient air pressure of the ambient air environment. Further, when subjected to the ambient air environment, the seal205can exhibit a first resistance to water entering the internal chamber203. The seal205can also be configured to compress when the electronic device200is submerged in water. For example, the weight of the water may apply an external pressure on the front surface of the cover204that compresses the seal205and increases the density of the seal205. 
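The net force that a pressure differential transfers to the seal follows directly from F = ΔP · A over the cover area. The sketch below uses assumed, illustrative numbers; the cover area and function name are not values from the disclosure.

```python
# Illustrative only: net force on the cover from a pressure differential.
# The cover area below is an assumption, not a value from the patent.

def net_force_on_cover(p_external_pa: float, p_internal_pa: float, cover_area_m2: float) -> float:
    """Net force in newtons on the cover; positive pushes the cover toward the housing."""
    return (p_external_pa - p_internal_pa) * cover_area_m2
```

With an assumed cover area of 0.0012 m² (roughly 35 mm × 35 mm), submersion to about 10 m of fresh water (ΔP ≈ 98 kPa) would press the cover toward the housing with on the order of 118 N, illustrating why the seal compresses readily underwater.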
As the electronic device is taken to greater depths, the seal205may continue to compress until it is substantially fully compressed. When the electronic device200is subjected to the submerged water environment, the compressible seal can exhibit a second resistance to water entering the internal chamber203, which can be greater than the first resistance when the seal205is uncompressed. When compressed, the seal205may prevent air from moving between the internal chamber203and the external environment201. As the seal205is compressed, it may become more resistant to water passing through the seal205material. Accordingly, as the electronic device200is taken into the water, the seal205can compress, increasing in density, which may increase its resistance to water ingress into the internal chamber203. As the electronic device is brought to greater depths within the water, the seal205may continue to increase its water resistance until it is substantially fully compressed. In the compressed state, the seal205may reduce or prevent the pressure within the internal chamber203from equalizing with the pressure of the external environment201. Accordingly, while the electronic device200is submerged in water, a pressure differential can exist between the internal chamber203and the external environment201. For example, if the seal205compresses when the internal chamber203has a first internal pressure, the internal chamber203may remain around this first internal pressure even as the electronic device is taken to greater depths, resulting in greater external pressures being exerted on the outer surface of the housing202and cover204. 
The first layer206can be coupled to the housing202and the cover204using one or more adhesive materials. For example, a second layer207acan include a first adhesive material that couples the first layer206(air-permeable material) to the housing202. A third layer207bcan include a second adhesive material that couples the first layer206to the cover204. Accordingly, the seal205can couple the cover204to the housing202such that the seal205can resist compressive, tensile, and shear forces, and the like or combinations thereof. The cover204may define an outer surface that faces the external environment and a lower/inner surface that faces the internal chamber203. In some cases, the seal can be coupled to the lower surface of the cover204. In some cases, the cover204can define a set of side surfaces212. The housing202can define a first upper surface208that forms an internal boundary of the opening. The housing202can also define a second upper surface210that forms a ledge for supporting the seal205and the cover204. In some embodiments, the seal205can couple to the second upper surface210and couple to the cover204, such that the set of side surfaces212of the cover204is positioned within the opening defined by the first upper surface208. In some embodiments, the set of side surfaces212can be offset from the first upper surface208of the housing202to form a gap between the housing and the cover204. This gap may extend between the seal205and the housing202. In this regard, the gap may allow for air and/or water to reach the seal, thereby allowing the seal205to equalize the pressure of the internal chamber203with the pressure of the external environment. In some cases, having the cover204and the seal205at least partially surrounded by the housing202can help protect these components from damage and/or constrain the movement of these components in relation to the housing202. 
For example, such a configuration may allow the cover204to move up and down and the seal to compress and expand, but limit side-to-side motion of the cover204, which can reduce shear on the seal205. FIGS.3A and3Billustrate examples of a seal305in expanded (lower density) and compressed (higher density) states. The seal305may be an example of the seals described herein (e.g., seals105and205) and be coupled to a housing302, which may be an example of the housing described herein (e.g., housings102and202); and a cover304, which may be an example of the covers described herein (e.g., covers104and204). The seal305can include an air-permeable material306, which may be an example of the air-permeable materials described herein (e.g., air-permeable material206); and one or more adhesive materials307, which may be examples of the adhesive materials described herein (e.g., adhesive materials207). The seal305can separate an external environment301from an internal chamber303that is at least partially defined by the housing302and the cover304. As illustrated inFIG.3A, the seal305can be in an uncompressed state as described herein. In the uncompressed state, the air-permeable material306can have a first density, which may allow air to move between the external environment301and the internal chamber303. Additionally or alternatively, the air-permeable material306can have a first resistance to water that prevents water ingress into the internal chamber303. Accordingly, when the seal305is uncompressed, the air-permeable material306can allow the pressure of the internal chamber303to equalize with the pressure of the external environment301, while preventing water from entering the internal chamber303. In some embodiments, the air-permeable material306may be configured to support different flow rates of air between the external environment301and the internal chamber303.
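To give an intuition for what a given air flow rate means in practice, the crude first-order estimate below relates a flow rate to the time needed to equalize a small chamber after a pressure step. All numbers and the simplified model are assumptions for illustration, not values from the disclosure.

```python
# Illustrative sketch (assumed values and model): how long a vent with a
# given air flow rate takes to move enough air to equalize a small
# internal chamber after a pressure step.
def equalization_time_s(chamber_cc: float, delta_pa: float, flow_sccm: float,
                        ambient_pa: float = 101_325.0) -> float:
    """Crude first-order estimate of equalization time in seconds.

    Approximates the air volume (at ambient conditions) that must move as
    chamber_cc * delta_pa / ambient_pa, then divides by the flow rate.
    """
    volume_to_move_cc = chamber_cc * (delta_pa / ambient_pa)
    flow_cc_per_s = flow_sccm / 60.0  # SCCM -> standard cm^3 per second
    return volume_to_move_cc / flow_cc_per_s

# A hypothetical 2 cm^3 chamber, a 1 kPa pressure step, and a 10 SCCM
# seal equalize in roughly a tenth of a second.
t = equalization_time_s(2.0, 1000.0, 10.0)
```

Even modest flow rates therefore equalize a small chamber quickly, which is consistent with configuring the seal's initial flow rate with margin for degradation over the life of the device.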
The air flow rate can depend on the properties of the air-permeable material306, the amount of surface area of the air-permeable material306between the external environment and the internal chamber303, as well as other factors. In some cases, positioning the seal305between the housing302and the cover304may increase the surface area of the seal305as compared to devices that incorporate air-permeable vents into ports on the housing, such as a speaker port. In some embodiments, the air flow rate of the seal305can be configured to be between 5 and 20 standard cubic centimeters per minute (SCCM). In other cases, the air flow rate of the seal305may be configured to be above 50, 100 or 150 SCCM. In some embodiments, the air flow rate of the seal may decrease over time. In this regard, the seal305can initially be configured with a higher air flow rate to maintain functions of the electronic device (e.g., internal pressure sensing) while accounting for decreases in the air flow rate over the life of the seal305. The air-permeable material306can include polymer materials such as expanded polymers, foams (open cell and/or closed cell), porous materials, or other materials that are permeable to air, and resistant to water ingress. For example, the air-permeable material can include PTFE materials, such as expanded PTFE (ePTFE), nylon, polyester, acrylic, or other suitable materials. In some cases, the air-permeable material can include composite materials, such as a polymer-metal composite or other suitable combination of materials. In some embodiments, the air-permeable material306and/or the adhesive materials307can be about 10 microns to about 100 microns thick. In some embodiments, in the uncompressed state, the air-permeable material306can define passages that allow air to move between the internal chamber303and the external environment301. 
For example, these passages may be a property of the air-permeable material306, and may be homogeneously distributed throughout the air-permeable material306, which may include channels formed from expanded portions of the air-permeable material306. In other examples, the passages can be one or more defined channels within the air-permeable material306. For example, the defined channels could be machined, etched, or otherwise formed in the air-permeable material306to allow air to move between the internal chamber303and the external environment301. For example, the channels could be formed in a circuitous path, such as a spiral pattern, that allows air to pass, but impedes the ingress of water or other liquid into the internal chamber303. In some cases, the channels can be formed in one or more of the adhesive layers307, and can be configured to compress, collapse, become blocked, or otherwise restricted as the seal305compresses. As illustrated inFIG.3B, the seal305can be compressed as described herein. In the compressed state, the air-permeable material306can have a greater density, which may prevent/restrict air from moving between the internal chamber303and the external environment301, and increase a water resistance of the seal305. In the compressed state, the seal305can prevent the pressure within the internal chamber from equalizing with the pressure of the external environment. Additionally or alternatively, the air-permeable material306may prevent water at greater pressures (depths) from moving through the air-permeable material306and into the internal chamber303. In some cases, compression of the seal305may close paths within the air-permeable material306that allowed air to move through the air-permeable material306in the uncompressed state. In some embodiments, the adhesive layers307can have a greater resistance to compression than the air-permeable material306.
In this regard, the adhesive layers307may remain substantially uncompressed when the air-permeable material306becomes fully compressed. The adhesive layers307can also be impermeable to air and water; thus, any movement of air and/or water into or out of the internal chamber303would occur through the air-permeable material306. In some cases, compression of the air-permeable material306can also mechanically reinforce the seal305. For example, compression of the air-permeable material306can result in the shear resistance increasing between the seal305, the housing302and the cover304. In this regard, the compressed seal305may be able to withstand external and/or internal pressures that would cause an uncompressed seal to fail (detach, rip, etc.). In some cases, the air-permeable material306can be configured to progressively compress when brought to increasing depths in a submerged water environment. For example, if the electronic device is brought to relatively shallow submersion depths, such as near the water surface, the air-permeable material306may be configured to partially compress and have a first resistance to water ingress. As the electronic device is brought to increasing depth, the air-permeable material306may compress to a greater density and have a second, increased resistance to water ingress. Accordingly, as the electronic device is brought to deeper depths, the water resistance of the seal305may increase. In some embodiments, the seal305can be configured to expand when the pressure/force that causes the seal305to compress is removed. In this regard, the seal305may cycle between compressed and uncompressed states. FIG.4illustrates an example of a seal405for an electronic device400. The seal405can be an example of the seals described herein (e.g., seals105,205, and305) and can couple a housing402to a cover404, which may be examples of the housings and covers described herein (e.g., housings102,202, and302; and covers104,204, and304).
The seal405can include multiple layers of an air-permeable material406to increase an air flow rate of the seal405. For example, the seal405can include a first layer of air-permeable material406aand a second layer of air-permeable material406bthat are stacked on top of each other to increase a surface area of the air-permeable material406contained within the seal405. In other embodiments, additional layers of air-permeable material406could be included in the seal to further increase the surface area of the air-permeable material406, which can be used to increase an air flow rate through the seal405. In some cases, one or more air-permeable layers406can be coupled to each other and/or the housing402and the cover404via one or more adhesive layers407. Different adhesive layers407may be the same adhesive material. In other cases, the different adhesive layers407can differ. For example, if the cover404is a glass material, a first adhesive layer407athat is configured to bond with the glass material may be used to couple the air-permeable layer406to the cover404. Additionally, if the housing402includes a different material from the cover404(e.g., metal, ceramic, plastic, or the like), a second adhesive layer407bthat is configured to bond with the housing material can be used to couple the housing402to the air-permeable layer406. In other embodiments, the air-permeable layers406can be the same or different air-permeable materials, which may have different air flow rates, water resistance, compressibility, and so on. In some cases, the electronic device400can include a force sensor positioned between the housing402and the cover404. For example, the force sensor can include two electrode layers separated by a compressible material, and the amount of force can be estimated by detecting a change in capacitance between the two electrode layers due to compression of the compressible material.
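The capacitive force-sensing principle just described can be sketched with a parallel-plate capacitance model: compressing the gap raises capacitance, and the gap change maps to force through the stiffness of the compressible layer. All numbers and the linear-spring model below are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of capacitive force sensing between two electrode
# layers separated by a compressible dielectric. Values are assumed.
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def capacitance_f(area_m2: float, gap_m: float, eps_r: float = 3.0) -> float:
    """Parallel-plate capacitance with relative permittivity eps_r."""
    return EPS0 * eps_r * area_m2 / gap_m

def estimate_force_n(c_rest: float, c_pressed: float, area_m2: float,
                     eps_r: float, k_spring: float) -> float:
    """Infer force from a capacitance change, modeling the compressible
    layer as a linear spring with stiffness k_spring (N/m)."""
    gap_rest = EPS0 * eps_r * area_m2 / c_rest
    gap_pressed = EPS0 * eps_r * area_m2 / c_pressed
    return k_spring * (gap_rest - gap_pressed)

# 1 cm^2 electrodes, 100 um rest gap compressed to 90 um, 5e4 N/m spring:
c0 = capacitance_f(1e-4, 100e-6)
c1 = capacitance_f(1e-4, 90e-6)
force = estimate_force_n(c0, c1, 1e-4, 3.0, 5e4)  # 0.5 N
```

The same arithmetic applies whether the force sensor is a separate stack or integrated with the seal, with the air-permeable layer serving as the compressible dielectric.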
The compressible material can be formed from silicone, or other compressible or elastomer materials. In some cases, the force sensor can include a separate set of layers and be stacked with the seal405between the housing402and the cover404. In other examples, the force sensor can be integrated with the seal405. For example, the air-permeable layer406could form the compressible layer of the force sensor and two electrodes could be placed on either side of the air-permeable layer406. FIGS.5A-5Dillustrate examples of electronic devices500with seals505that include a compression limiter506. The electronic device500can be an example of the electronic devices described herein such as electronic devices100,200,300and400; and the seals505can be an example of the seals described herein (e.g., seals105,205,305and405). In some embodiments, the seals505can be positioned between a housing502and a cover504, which may be examples of the housings and covers as described herein. The electronic device500can include a compression limiter506, which may be used to limit the amount of compression experienced by the seal505. In some cases, compressing the seal505more than a certain amount may damage the seal505and/or result in the seal505not fully expanding when a pressure on the cover504is reduced. In this regard, the compression limiter506can be positioned between the housing502and the cover504. The compression limiter506can be formed from a material that is more rigid than the seal505and stops movement of the cover504toward the housing502to stop the seal505from compressing past a certain amount. FIG.5Aillustrates a first example of a compression limiter506that is positioned inside of the seal505and coupled to the housing502. In this regard, as the cover504moves toward the housing502, the cover504will contact the compression limiter506and stop moving toward the housing502before the seal505is fully compressed. 
In some cases, the compression limiter506may be configured to allow the seal505to compress enough to stop air movement through the seal505or increase the water resistance of the seal by a defined amount. FIG.5Billustrates another example of a compression limiter506that is defined by the housing502. For example, the compression limiter506can include a ledge formed in the housing502, wherein the ledge prevents full compression of the seal505.FIGS.5C and5Dillustrate additional examples of compression limiters506that are attached to the cover504and contact the housing502as the cover504moves toward the housing502to prevent full compression of the seal505.FIGS.5A-5Dare provided as examples of different compression limiter506configurations to illustrate how a compression limiter506may be implemented in the electronic device500. Accordingly, other configurations are possible. FIGS.6A and6Billustrate examples of an electronic device600including a seal605and a backup seal606. The electronic device600can be an example of the electronic devices described herein and can include a housing602, a cover604as described herein, and the seal605, which may be an example of the seals described herein (e.g., seals105,205,305,405, and505). As illustrated inFIG.6A, a backup seal606can be positioned between the housing602and the cover604. The backup seal606can be positioned alongside the seal605. In an expanded state, the backup seal606can be offset from the cover604to form a gap between a top of the backup seal606and the cover604. In this regard, air that passes through the seal605can also pass into an internal chamber603of the electronic device600, and allow a pressure within the electronic device to equalize with a pressure of the external environment. As illustrated inFIG.6B, as the cover604moves toward the housing602and the seal605compresses, the cover604can contact the backup seal606. The backup seal606can be impermeable to water and/or air.
Accordingly, even if air and/or water passes through the seal605, the backup seal606can prevent the water or air from reaching the internal chamber603. In some cases, the backup seal606can have a greater impermeability to water and/or air than the seal605. Additionally or alternatively, the backup seal606can function as a compression limiter as described herein. FIG.7illustrates an example of an electronic device700that includes a seal705including an air-permeable material706and a compression layer707. The electronic device700can be an example of the electronic devices described herein and can include a housing702and a cover704, which can be examples of the housings and the covers as described herein. The seal705can be an example of the seals described herein and can include an air-permeable material as described herein. The seal705can further include the compression layer707stacked with the air-permeable material706. The compression layer707can be used to estimate external pressures by compressing in response to increasing external pressure thereby decreasing the volume within the internal chamber703and increasing the pressure. For example, the compression layer707can be configured to undergo a greater deflection than the air-permeable material706. In this regard, once the air-permeable material706has been compressed, the air pressure in the internal chamber703can no longer equalize with the air pressure of the external environment, and the compression layer707may remain uncompressed. Then, further increases in the external pressure may cause the compression layer707to compress, thereby decreasing the volume of the internal chamber703and increasing the pressure within the internal chamber703. 
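The volume decrease produced by the compression layer maps to an internal pressure rise that can be modeled with Boyle's law for an isothermal, sealed chamber. This is an illustrative model with assumed numbers, not the disclosed calibration.

```python
# Illustrative sketch (assumed isothermal model): relating the compression
# layer's deflection to the internal-chamber pressure rise via Boyle's law.
def internal_pressure_pa(p_sealed_pa: float, v_initial_cc: float,
                         dv_cc: float) -> float:
    """Boyle's law: p1*v1 = p2*v2 for a sealed chamber losing dv_cc of volume."""
    return p_sealed_pa * v_initial_cc / (v_initial_cc - dv_cc)

# A chamber sealed at 101,325 Pa with 1.0 cm^3 of air; the compression
# layer deflects enough to remove 0.1 cm^3 of volume, giving an ~11% rise.
p2 = internal_pressure_pa(101_325.0, 1.0, 0.1)
```

A larger deflection per unit of external pressure yields a larger, more measurable internal pressure change, which is why the compression layer is configured to deflect more than the air-permeable material.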
A pressure-sensing device709(e.g., pressure transducer, or other pressure-sensing device) located within the internal chamber can measure this increase in pressure and use this change in pressure to estimate an external pressure and/or change in external pressure of the environment around the electronic device700. For example, the estimated external pressure could correspond to a water pressure on the electronic device700and may be used as a depth gauge to determine a water depth, for example, when diving or performing other underwater activities. FIG.8illustrates an example of an electronic device800that includes a force sensor808positioned between a cover804and a housing802. The electronic device800can be an example of the electronic devices described herein. The force sensor808can be used to estimate a force applied to the cover804of the electronic device800. For example, a force sensor808could include a capacitive force sensor, a piezoelectric force sensor, a resistive force sensor, and so on, that is coupled between the cover804and the housing802. In some cases, the force sensor808can be stacked with a seal805. In other examples, the force sensor808could be mounted in parallel with the seal805, for example one or more force sensors could be positioned at intermittent locations along the seal805. FIG.9illustrates an example of an air-permeable material902that can be used in a seal, as described herein. The air-permeable material902can include one or more channels that form circuitous paths907between an external environment901and an internal chamber903of an electronic device. In a first state, for example, when the electronic device is located in an ambient air environment, the paths907may be substantially open and allow air to move between the external environment901and the internal chamber903. Also, in the first state, the paths907can prevent water at the ambient pressure from ingress into the internal chamber903. 
For example, the air-permeable material902can include hydrophobic elements at the paths907that resist water. In some cases, the size and/or shape of the paths907may prevent water from ingress into the internal chamber903. In a second state, for example, when the electronic device is submerged in water, the paths907may compress, collapse, or otherwise restrict such that the air-permeable material902increases in resistance to water ingress into the internal chamber903. FIG.10illustrates an exploded view of a backside of an electronic device1000with a back cover1004incorporating an air-permeable seal1005. The seal1005can be an example of the seals described herein and can be positioned between various sections of an electronic device to allow air movement between the inside of the device and the external environment, while resisting the ingress of water into the electronic device. For example, the seal1005can be positioned between a rear cover (e.g., rear crystal) and the housing1002of the electronic device1000. In this regard, the seal1005can allow the internal pressure of the electronic device to equalize with an air pressure of the external environment. In various other embodiments, one or more seals, as described herein, can be positioned at different locations and/or structures of the electronic device1000. FIG.11is a block diagram illustrating an example electronic device1100, within which an air-permeable seal can be integrated. By way of example, the device1100ofFIG.11may correspond to the electronic devices shown inFIGS.1A-10(or any other wearable electronic device described herein). To the extent that multiple functionalities, operations, and structures are disclosed as being part of, incorporated into, or performed by the device1100, it should be understood that various embodiments may omit any or all such described functionalities, operations and structures. 
Thus, different embodiments of the device1100may have some, none, or all of the various capabilities, apparatuses, physical features, modes, and operating parameters discussed herein. As shown inFIG.11, the device1100includes a processing unit1102operatively connected to computer memory1104and/or computer-readable media1106. The processing unit1102may be operatively connected to the memory1104and computer-readable media1106components via an electronic bus or bridge. The processing unit1102may include one or more computer processing units or microcontrollers that are configured to perform operations in response to computer-readable instructions. The processing unit1102may include the central processing unit (CPU) of the device. Additionally or alternatively, the processing unit1102may include other processing units within the device including application-specific integrated circuits (ASICs) and other microcontroller devices. In some embodiments, the processing unit1102may modify, change, or otherwise adjust operation of the electronic device in response to an output of one or more of the pressure-sensing devices, as described herein. For example, the processing unit1102may shut off the electronic device1100or suspend certain functions, like audio playback, if the pressure sensed by the pressure-sensing device exceeds a threshold. Likewise, the processing unit1102may activate the device or certain functions if the sensed pressure drops below a threshold (which may or may not be the same threshold previously mentioned). As yet another option, the processing unit1102may cause an alert to be displayed if pressure changes suddenly, as sensed by the pressure-sensing unit. This alert may indicate that a storm is imminent, a cabin or area has become depressurized, a port is blocked, and so on.
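The pressure-responsive behavior described above can be sketched as a small decision routine with hysteresis between the suspend and resume thresholds. The threshold values, the rate criterion, and the action names are assumptions for illustration, not values from the disclosure.

```python
# Illustrative sketch (assumed thresholds and action names) of how a
# processing unit might react to pressure samples, per the description.
SUSPEND_THRESHOLD_PA = 150_000.0   # suspend functions above this pressure
RESUME_THRESHOLD_PA = 120_000.0    # resume below this (hysteresis gap)
SUDDEN_CHANGE_PA_PER_S = 500.0     # alert if pressure changes faster than this

def decide_action(pressure_pa: float, rate_pa_per_s: float,
                  suspended: bool) -> str:
    """Return the action to take for one pressure sample."""
    if abs(rate_pa_per_s) > SUDDEN_CHANGE_PA_PER_S:
        return "alert"                 # e.g., storm or cabin depressurization
    if not suspended and pressure_pa > SUSPEND_THRESHOLD_PA:
        return "suspend_functions"     # e.g., stop audio playback
    if suspended and pressure_pa < RESUME_THRESHOLD_PA:
        return "resume_functions"
    return "no_change"

# A submerged device crossing the suspend threshold:
action = decide_action(160_000.0, 50.0, suspended=False)  # "suspend_functions"
```

Separating the suspend and resume thresholds avoids rapid toggling when the sensed pressure hovers near a single threshold, matching the note that the two thresholds may differ.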
The memory1104may include a variety of types of non-transitory computer-readable storage media, including, for example, random-access memory (RAM), read-only memory (ROM), erasable programmable memory (e.g., EPROM and EEPROM), or flash memory. The memory1104is configured to store computer-readable instructions, sensor values, and other persistent software elements. Computer-readable media1106also includes a variety of types of non-transitory computer-readable storage media including, for example, a hard-drive storage device, a solid-state storage device, a portable magnetic storage device, or other similar device. The computer-readable media1106may also be configured to store computer-readable instructions, sensor values, and other persistent software elements. In this example, the processing unit1102is operable to read computer-readable instructions stored on the memory1104and/or computer-readable media1106. The computer-readable instructions may adapt the processing unit1102to perform the operations or functions described above with respect toFIGS.1A-6. In particular, the processing unit1102, the memory1104, and/or the computer-readable media1106may be configured to cooperate with a sensor1116(e.g., an image sensor that detects input gestures applied to an imaging surface of a crown) to control the operation of a device in response to an input applied to a crown of a device (e.g., the crown108). The computer-readable instructions may be provided as a computer-program product, software application, or the like. The device1100may also include a battery1108that is configured to provide electrical power to the components of the device1100. The battery1108may include one or more power storage cells that are linked together to provide an internal supply of electrical power.
The battery1108may be operatively coupled to power management circuitry that is configured to provide appropriate voltage and power levels for individual components or groups of components within the device1100. The battery1108, via power management circuitry, may be configured to receive power from an external source, such as an AC power outlet. The battery1108may store received power so that the device1100may operate without connection to an external power source for an extended period of time, which may range from several hours to several days. The device1100may also include a communication port1110that is configured to transmit and/or receive signals or electrical communication from an external or separate device. The communication port1110may be configured to couple to an external device via a cable, adaptor, or other type of electrical connector. In some embodiments, the communication port1110may be used to couple the device1100to an accessory, including a dock or case, a stylus or other input device, smart cover, smart stand, keyboard, or other device configured to send and/or receive electrical signals. The device1100may also include a touch sensor1112that is configured to determine a location of a touch on a touch-sensitive surface of the device1100(e.g., an input surface defined by the portion of a cover104over a display109). The touch sensor1112may use or include capacitive sensors, resistive sensors, surface acoustic wave sensors, piezoelectric sensors, strain gauges, or the like. In some cases, the touch sensor1112associated with a touch-sensitive surface of the device1100may include a capacitive array of electrodes or nodes that operate in accordance with a mutual-capacitance or self-capacitance scheme. The touch sensor1112may be integrated with one or more layers of a display stack (e.g., the display109) to provide the touch-sensing functionality of a touchscreen.
Moreover, the touch sensor1112, or a portion thereof, may be used to sense motion of a user's finger as it slides along a surface of a crown, as described herein. The device1100may also include a force sensor1114that is configured to receive and/or detect force inputs applied to a user input surface of the device1100(e.g., the display109). The force sensor1114may use or include capacitive sensors, resistive sensors, surface acoustic wave sensors, piezoelectric sensors, strain gauges, or the like. In some cases, the force sensor1114may include or be coupled to capacitive sensing elements that facilitate the detection of changes in relative positions of the components of the force sensor (e.g., deflections caused by a force input). The force sensor1114may be integrated with one or more layers of a display stack (e.g., the display109) to provide force-sensing functionality of a touchscreen. The device1100may also include one or more sensors1116. In some cases, the sensors may include a fluid-based pressure-sensing device (such as an oil-filled pressure-sensing device) that determines conditions of an ambient environment external to the device1100, a temperature sensor, a liquid sensor, or the like. The sensors1116may also include a sensor that detects inputs provided by a user to a crown of the device (e.g., the crown108). As described above, the sensors1116may include sensing circuitry and other sensing elements that facilitate sensing of gesture inputs applied to an imaging surface of a crown, as well as other types of inputs applied to the crown (e.g., rotational inputs, translational or axial inputs, axial touches, or the like). The sensors1116may include an optical sensing element, such as a charge-coupled device (CCD), complementary metal-oxide-semiconductor (CMOS), or the like. The sensors1116may correspond to any sensors described herein or that may be used to provide the sensing functions described herein. 
In some cases, the device1100can include a pressure-sensing system that has multiple pressure-sensing devices that are positioned within different chambers or internal volumes of the electronic device. One pressure-sensing device may be located in a sealed volume or first internal chamber of the electronic device and another pressure-sensing device may be located in a vented or open volume or second internal chamber of the device. The sealed internal chamber may include an air-permeable seal, as described herein, that prevents water, dust, and/or other contaminants from entering the sealed housing. Air may pass through the air-permeable seal thereby equalizing the internal pressure of the sealed internal chamber with a pressure of an external environment. This internal pressure-sensing device is protected from moisture and contaminants, which helps maintain accurate pressure measurements over the life of the device and in a variety of operating environments. In some cases, the electronic device1100may include a pressure-sensing device located within a second unsealed chamber of a housing of the device. The second unsealed internal chamber may be coupled with an external environment (e.g., exposed to the atmosphere) via a port that is defined by an outer shell of the housing. Operation of the internal and external pressure-sensing devices may be coordinated based on one or more monitored conditions of the electronic device1100and/or an output from one or both of the pressure-sensing devices. In some cases, the electronic device1100may monitor one or more conditions, such as whether the external pressure-sensing device has been exposed to moisture. If the electronic device1100determines that the external pressure-sensing device has been exposed to moisture, the electronic device1100can use pressure signals from the internal pressure-sensing device to determine an environmental pressure, or determine when the external pressure-sensing device has dried sufficiently. 
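The coordination between the sealed internal pressure-sensing device and the vented external one can be sketched as a simple selection policy. The moisture flag, the drying criterion, and the agreement tolerance below are illustrative assumptions, not the patent's algorithm.

```python
# Illustrative sketch (assumed policy) of coordinating two pressure-sensing
# devices: prefer the vented external sensor, fall back to the protected
# internal sensor while the external sensor is wet.
def select_pressure_pa(internal_pa: float, external_pa: float,
                       external_wet: bool) -> float:
    """Choose which sensor's reading to report as environmental pressure."""
    if external_wet:
        return internal_pa
    return external_pa

def external_sensor_dry(internal_pa: float, external_pa: float,
                        tolerance_pa: float = 500.0) -> bool:
    """Treat the external sensor as sufficiently dried once it agrees with
    the protected internal sensor to within a tolerance."""
    return abs(external_pa - internal_pa) <= tolerance_pa

# While wet, readings come from the internal device; once the two sensors
# agree within tolerance, the device can switch back to the external one.
p = select_pressure_pa(101_300.0, 99_000.0, external_wet=True)
dried = external_sensor_dry(101_300.0, 101_450.0)
```

Comparing the two readings gives the device a moisture-independent way to decide when the external sensor has recovered, since the internal sensor remains protected behind the air-permeable seal.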
For example, an electronic device1100may initially determine an environmental pressure using the external pressure-sensing device. Subsequently, the electronic device1100may determine that the external pressure-sensing device has been exposed to moisture and switch to using pressure signals from the internal pressure-sensing device while the external pressure-sensing device dries. In some embodiments, the device1100includes one or more input devices1118. An input device1118is a device that is configured to receive user input. The one or more input devices1118may include, for example, a push button, a touch-activated button, a keyboard, a key pad, or the like (including any combination of these or other components). In some embodiments, the input device1118may provide a dedicated or primary function, including, for example, a power button, volume buttons, home buttons, scroll wheels, and camera buttons. Generally, a touch sensor or a force sensor may also be classified as an input device. However, for purposes of this illustrative example, the touch sensor1112and the force sensor1114are depicted as distinct components within the device1100. As shown inFIG.11, the device1100also includes a display1120. The display1120may include a liquid-crystal display (LCD), organic light emitting diode (OLED) display, light emitting diode (LED) display, or the like. If the display1120is an LCD, the display1120may also include a backlight component that can be controlled to provide variable levels of display brightness. If the display1120is an OLED or LED type display, the brightness of the display1120may be controlled by modifying the electrical signals that are provided to display elements. The display1120may correspond to any of the displays shown or described herein. In some embodiments, the device1100includes one or more output devices1122. An output device1122is a device that is configured to produce an output that is perceivable by a user. 
The one or more output devices 1122 may include, for example, a speaker, a light source (e.g., an indicator light), an audio transducer, a haptic actuator, or the like. The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings.
Patent 11860586

DETAILED DESCRIPTION OF EMBODIMENTS

Next, an electronic timepiece will be described as an electronic device which is an embodiment of the present invention. In addition to having a time display feature, the electronic timepiece displays movement (running) time and movement distance for the runner. FIG. 1 is a functional block diagram of an electronic timepiece 100 according to the present embodiment. The electronic timepiece 100 includes a central processing unit (CPU) 110, a memory 120, a display unit 130, an operation unit 140, a positioning module 150, and a movement distance detection sensor 160. The CPU 110 executes programs stored in the memory 120 to control the electronic timepiece 100. The memory 120 is constituted by a random-access memory (RAM), a read-only memory (ROM), a flash memory, or the like and stores programs for implementing the features of the electronic timepiece 100, programs for controlling the power of the positioning module 150, and any data necessary for program execution. The display unit 130 displays the time, satellite radio wave reception status, movement time, and movement distance. The operation unit 140 includes a rotary switch 141 and buttons 142 to 145, which are illustrated in FIG. 2 and will be described later. The positioning module 150 is a sensor which receives GNSS satellite radio waves and executes a positioning process based on information from the received radio waves to output the current position to the CPU 110. The movement distance detection sensor 160 is an acceleration sensor, for example, and detects and outputs movement distance to the CPU 110 by detecting locomotion and calculating step counts. The electronic timepiece 100 further includes a battery, a communication module for communicating with other electronic devices, and various types of sensors such as a direction sensor, although none of these are explicitly illustrated in FIG. 1. FIG. 2 illustrates the external appearance of the electronic timepiece 100 according to the present embodiment.
The electronic timepiece 100 is a wristwatch and includes the rotary switch 141 and the buttons 142 and 143 on the right side of the device as well as the buttons 144 and 145 on the left side of the device. The display unit 130 displays the date, the day of the week, the current time, and the remaining battery level. FIG. 3 is a state transition diagram for the electronic timepiece 100 according to the present embodiment. The states include a time display state 201, a stopwatch state 202, a timer state 203, a preparing for run state 204, a run preparation complete state 205, a running state 206, and a run paused state 207. The time display state 201 is a state in which the electronic timepiece 100 displays the date and time (see FIG. 2). The stopwatch state 202 is a state in which the electronic timepiece 100 functions as a stopwatch, where the button 142 becomes a start/stop button and the button 143 becomes a reset button. The timer state 203 is a state in which the electronic timepiece functions as a timer, where rotating the rotary switch 141 sets the timer time and pressing the button 142 starts/stops/resumes the countdown. Pressing the button 144 cycles through the time display state 201, the stopwatch state 202, and the timer state 203 in that order. When the button 145 is pressed while in the time display state 201, the electronic timepiece 100 transitions to the preparing for run state 204 and displays a preparing for run screen 310 (see FIG. 4, described later) on the display unit 130. The preparing for run state 204 is a state for preparing to take movement (running) records and is the state in which a positioning start process (see FIG. 8, described later) is executed. Once the positioning start process in the preparing for run state 204 is complete, the device transitions to the run preparation complete state 205. Moreover, when the rotary switch 141 is depressed while in the preparing for run state 204, the device transitions to the running state 206.
FIG. 4 illustrates the preparing for run screen 310 that is displayed on the display unit 130 of the electronic timepiece 100 according to the present embodiment. Here, a satellite icon is flashed in the upper left and "Searching Positioning Satellites . . . " is displayed, which allows the user to easily ascertain that the electronic timepiece 100 is in the preparing for run state 204. Returning to FIG. 3, the run preparation complete state 205 is a state in which preparation for taking movement records is complete and records can begin to be taken, and the electronic timepiece 100 displays a run preparation complete screen 320 (see FIG. 5, described later) on the display unit 130. If the positioning start process succeeded (YES in step S103 in FIG. 8, described later), in the following processes the electronic timepiece 100 records movement on the basis of positional information output by the positioning module 150. Meanwhile, if the positioning start process failed (YES in step S104 in FIG. 8, described later; also see step S105), in the following processes the electronic timepiece 100 records movement on the basis of the movement distance output by the movement distance detection sensor 160. When the rotary switch 141 is depressed while in the run preparation complete state 205, the device transitions to the running state 206. FIG. 5 illustrates the run preparation complete screen 320 that is displayed on the display unit 130 of the electronic timepiece 100 according to the present embodiment. Near the bottom of the screen, "Ready" is displayed to allow the user to easily ascertain that the electronic timepiece 100 is in the run preparation complete state 205. Because this is a state prior to running (movement), the movement distance and movement time displayed in the middle of the screen are both zero. If the positioning start process succeeded and navigation satellite radio waves are being received, a satellite icon is displayed in the upper left.
If the positioning start process failed and navigation satellite radio waves are not being received, no satellite icon is displayed in the upper left. FIG. 5 illustrates the run preparation complete screen 320 for a case in which navigation satellite radio waves are being received. Returning to FIG. 3, the running state 206 is a state in which movement is recorded on the basis of the positional information output by the positioning module 150 or the movement distance output by the movement distance detection sensor 160, and a running screen 330 (see FIG. 6, described later) is displayed on the display unit 130. When the rotary switch 141 is depressed while in the running state 206, the device transitions to the run paused state 207. FIG. 6 illustrates the running screen 330 that is displayed on the display unit 130 of the electronic timepiece 100 according to the present embodiment. Near the bottom of the screen, "Running" is displayed to allow the user to easily ascertain that the electronic timepiece 100 is in the running state 206. Movement distance and movement time are displayed in the middle of the screen. If the positioning start process succeeded and navigation satellite radio waves are being received, a satellite icon is displayed in the upper left. If the positioning start process failed and navigation satellite radio waves are not being received, no satellite icon is displayed in the upper left. FIG. 6 illustrates the running screen 330 for a case in which navigation satellite radio waves are being received. Returning to FIG. 3, the run paused state 207 is a state in which movement records are temporarily suspended, and a run paused screen 340 (see FIG. 7, described later) is displayed on the display unit 130. When the rotary switch 141 is depressed while in the run paused state 207, the device transitions to the running state 206. Moreover, when the button 145 is pressed while in the run paused state 207, the device returns to the time display state 201.
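The state transitions of FIG. 3 can be sketched as a small transition table. The state and event names below are ours; the transitions themselves follow the description (button 144 cycles the timekeeping modes, button 145 and the rotary switch drive the run-related states).

```python
# Minimal sketch of the FIG. 3 state machine. The dict-based encoding and
# the identifier names are illustrative, not from the patent.

TRANSITIONS = {
    # (current state, input event) -> next state
    ("time_display", "button_144"): "stopwatch",
    ("stopwatch", "button_144"): "timer",
    ("timer", "button_144"): "time_display",
    ("time_display", "button_145"): "preparing_for_run",
    ("preparing_for_run", "positioning_done"): "run_prep_complete",
    ("preparing_for_run", "rotary_press"): "running",
    ("run_prep_complete", "rotary_press"): "running",
    ("running", "rotary_press"): "run_paused",
    ("run_paused", "rotary_press"): "running",
    ("run_paused", "button_145"): "time_display",
}

def next_state(state, event):
    # Unlisted (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

# A typical session: start run preparation, finish positioning, run, pause.
state = "time_display"
for event in ("button_145", "positioning_done", "rotary_press", "rotary_press"):
    state = next_state(state, event)
assert state == "run_paused"
```

Encoding the diagram as a lookup table keeps the input-handling code free of nested conditionals and makes the diagram easy to audit against FIG. 3.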
FIG. 7 illustrates the run paused screen 340 that is displayed on the display unit 130 of the electronic timepiece 100 according to the present embodiment. Near the bottom of the screen, "Stopping" is displayed to allow the user to easily ascertain that the electronic timepiece 100 is in the run paused state 207. In the middle of the screen, the movement distance and movement time up to that point are displayed. If the positioning start process succeeded and navigation satellite radio waves are being received, a satellite icon is displayed in the upper left. If the positioning start process failed and navigation satellite radio waves are not being received, no satellite icon is displayed in the upper left. FIG. 7 illustrates the run paused screen 340 for a case in which navigation satellite radio waves are not being received. FIG. 8 is a flowchart of the positioning start process executed in the preparing for run state 204 of the electronic timepiece according to the present embodiment. Here, the processes executed by the CPU 110 from when a run preparation instruction is received from the user until positioning succeeds (navigation satellite radio waves are successfully received) or ultimately fails will be described with reference to FIG. 8. In step S101, the CPU 110 detects a switch to the preparing for run state 204. More specifically, the CPU 110 detects that the user pressed the button 145 while in the time display state 201 and transitions to the preparing for run state 204. In step S102, the CPU 110 powers ON the positioning module 150 and starts up the positioning module 150. Then, the CPU 110 executes a looping process in which steps S103 to S108 are repeated. In step S103, if the positioning module 150 did not succeed in positioning (NO in step S103), the CPU 110 proceeds to step S104, and if the positioning module 150 did succeed in positioning (YES in step S103), the CPU 110 ends the positioning start process.
If the positioning start process is ended at this point, this means that radio waves from navigation satellites were successfully received and that positioning was successful, so the electronic timepiece 100 transitions to the run preparation complete state 205. If positioning is not successful (search timeout), this means that positioning by utilizing satellite radio waves was not successful within a first prescribed period of time (a search timeout time) or that positioning was not successful because the number of satellites from which radio waves were received was insufficient. The search timeout time is two minutes, for example. In step S104, if a third prescribed period of time (30 minutes, for example) from when the positioning module 150 was started up (see step S102) has elapsed and a search timeout occurred every time (YES in step S104), the CPU 110 proceeds to step S105. If the third prescribed period of time has not yet elapsed or a search timeout did not occur every time (NO in step S104), the CPU 110 proceeds to step S106. Here, "search timeout" means that the positioning module 150 was unable to successfully achieve positioning within the search timeout time in step S103. "Every time" means each time that the process of steps S103 to S108 was repeatedly executed. "Search timeout occurred every time" means that each time that steps S103 to S108 were repeated, the state in which the positioning module 150 was unable to achieve positioning continued for the duration of the search timeout time in step S103. In step S105, the CPU 110 powers OFF the positioning module 150 and thereby ends the positioning start process. Unlike when the positioning start process is ended because positioning is successful in step S103 (YES in step S103), if the positioning start process is ended at this point, the electronic timepiece 100 transitions to the run preparation complete state 205 after having failed to receive radio waves from navigation satellites.
In step S106, the CPU 110 puts the positioning module 150 into a sleep (suspended) state. In step S107, the CPU 110 calculates the movement distance from when the positioning module 150 was started up (see step S102), and if the user has moved by at least a prescribed distance (100 m, for example) or if at least a prescribed period of time (a second prescribed period of time) has elapsed from when the positioning module 150 went to sleep in step S106 (YES in step S107), the CPU 110 proceeds to step S108. Otherwise (NO in step S107), step S107 is repeated until this happens (the CPU 110 calculates movement distance on the basis of the output from the movement distance detection sensor 160). For example, by referencing the output values of the acceleration sensor used as the movement distance detection sensor, the number of steps taken by the user since the positioning module 150 was started up is calculated. Moreover, step length is calculated from information about the user's body which is set in advance, and movement distance is calculated on the basis of the calculated step count and step length. In step S108, the CPU 110 wakes up (re-activates) the positioning module 150. When step S107 yields YES, the CPU 110 terminates the sleep state of the positioning module 150 and wakes up (starts up) the positioning module 150 (see step S108) in order to resume positioning. If the user (and the electronic timepiece 100) have moved to a position in which satellite radio waves can be received, positioning succeeds (YES in step S103) and the positioning start process ends. In this way, by resuming positioning by using a means other than the positioning module 150 to detect movement of at least a prescribed distance while the positioning module 150 is asleep, positioning can be started earlier than in conventional approaches in which movement is not detected and the positioning module 150 remains asleep.
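The decision logic of steps S104 and S107, and the step-based distance estimate, can be sketched as pure functions. The two-minute search timeout, 30-minute give-up period, and 100 m wake distance follow the examples in the text; the value of the second prescribed period (sleep time limit) and all function names are our assumptions.

```python
# Sketch of the decision logic in the positioning start process of FIG. 8.
# Constants marked "assumed" are not specified in the text.

SEARCH_TIMEOUT_S = 2 * 60    # first prescribed period: two minutes
GIVE_UP_AFTER_S = 30 * 60    # third prescribed period: 30 minutes
WAKE_DISTANCE_M = 100.0      # prescribed distance checked in step S107
SLEEP_LIMIT_S = 5 * 60       # second prescribed period (value assumed)

def should_give_up(elapsed_since_startup_s, all_attempts_timed_out):
    # Step S104: power the module OFF if the third prescribed period has
    # elapsed and every search attempt so far hit the search timeout.
    return elapsed_since_startup_s >= GIVE_UP_AFTER_S and all_attempts_timed_out

def should_wake(distance_since_startup_m, slept_s):
    # Step S107: wake the module once the user has moved at least the
    # prescribed distance, or once the sleep has lasted long enough.
    return distance_since_startup_m >= WAKE_DISTANCE_M or slept_s >= SLEEP_LIMIT_S

def step_distance_m(step_count, step_length_m):
    # Movement distance from the acceleration sensor: step count times a
    # step length derived from the user's preset body information.
    return step_count * step_length_m

assert not should_wake(step_distance_m(50, 0.7), slept_s=60)   # 35 m: keep sleeping
assert should_wake(step_distance_m(150, 0.7), slept_s=60)      # 105 m: wake up
assert should_give_up(31 * 60, all_attempts_timed_out=True)
assert not should_give_up(10 * 60, all_attempts_timed_out=True)
```

Keeping the conditions as side-effect-free predicates makes the loop of steps S103 to S108 easy to test independently of the positioning hardware.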
Moreover, this makes it possible to shorten the time required to prepare for a run, thereby making it possible to reduce waiting time for the user. The positioning start process in the embodiment described above (see FIG. 8) is a process that is executed during the preparing for run state 204. However, processes similar to this positioning start process for resuming positioning when satellite radio waves can no longer be received are not limited to being executed during the preparing for run state 204 and may also be executed during the run preparation complete state 205, the running state 206, or the run paused state 207. Although in the positioning start process described above movement that occurs while the positioning module 150 is asleep is detected using the movement distance detection sensor 160, other approaches may also be used. For example, the electronic timepiece 100 may include a mobile phone radio wave sensor, and movement may be determined to have occurred when the signal strength of mobile phone radio waves has increased to at least a prescribed value. Alternatively, a short-range wireless communication receiving sensor may be included, and movement may be determined to have occurred when the signal strength of short-range wireless communications changes. Furthermore, a movement direction detection sensor may be further included, and the device may be controlled such that, when calculating movement distance using the movement distance detection sensor 160, if the direction of movement is simultaneously detected and it can be determined that the user has moved by a prescribed distance in a prescribed direction, the sleep state of the positioning module 150 is terminated and positioning is resumed.
Implementing this type of control makes it possible to reduce the likelihood of positioning being resumed when the user has moved a prescribed distance but has not moved significantly from the position at which the positioning module 150 was started up (such as when walking in circles within a fixed range while indoors, for example) and increases the likelihood that positioning will succeed. Moreover, this reduces the number of times that positioning is executed and thereby makes it possible to reduce power consumption. In the positioning start process described above, the sleep step (see step S106) and wake-up step (see step S108) are repeated. The CPU 110 may power OFF the positioning module 150 instead of putting the positioning module 150 to sleep and may power ON the positioning module 150 instead of waking up the positioning module 150. In the positioning start process described above, the wake-up step (see step S108) is performed when movement of at least a prescribed distance from startup (see step S102) has occurred (see YES in step S107). This prescribed distance is not limited to being a single distance and may be a plurality of distances. The prescribed distances may be 30 m, 50 m, 70 m, and 90 m, for example, and the process may proceed to step S108 and trigger the wake-up when movement of greater than or equal to any one of the distances among 30 m, 50 m, 70 m, and 90 m is detected. Alternatively, rather than using movement distance from startup, the wake-up may be triggered when the movement distance from when the most recent sleep state started (see step S106) is greater than or equal to a prescribed distance.
Moreover, when the positioning start process is executed during the running state 206 or the run paused state 207 rather than during the preparing for run state 204, the positioning start process may be executed with a shorter sleep time because the running state 206 and the run paused state 207 offer a higher likelihood of successfully receiving satellite radio waves than the preparing for run state 204. In step S104 of the positioning start process described above, if a third prescribed period of time from startup of the positioning module 150 has elapsed and a search timeout has occurred every time, the process proceeds to step S105 and then the positioning start process ends. The process may alternatively proceed to step S105 and then end the positioning start process when the start sleep step (see step S106) and the wake-up step (see step S108) have been repeated a prescribed number of times. Although in the embodiment described above the electronic timepiece 100 records movement distance and movement time, position may be recorded and a movement history may be displayed, for example. Moreover, although in the embodiment above the present invention was described using the electronic timepiece 100 as an example, the present invention may be applied to an electronic device which does not have time features such as time display or a stopwatch and may also be applied to an electronic timepiece that has other features such as an alarm. Although several embodiments of the present invention were described above, these embodiments are only examples and do not limit the technical scope of the present invention in any way. The present invention can take the form of various other embodiments, and various modifications such as removal or replacement of components can be made without departing from the spirit of the present invention.
These embodiments and modifications thereof are included within the scope and spirit of the invention as described in the present specification and the like and are also included within the scope of the invention as defined in the claims, their equivalents, and the like. It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover modifications and variations that come within the scope of the appended claims and their equivalents. In particular, it is explicitly contemplated that any part or whole of any two or more of the embodiments and their modifications described above can be combined and regarded within the scope of the present invention.
Patent 11860587

The use of cross-hatching or shading in the accompanying figures is generally provided to clarify the boundaries between adjacent elements and also to facilitate legibility of the figures. Accordingly, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, element proportions, element dimensions, commonalities of similarly illustrated elements, or any other characteristic, attribute, or property for any element illustrated in the accompanying figures. Additionally, it should be understood that the proportions and dimensions (either relative or absolute) of the various features and elements (and collections and groupings thereof) and the boundaries, separations, and positional relationships presented therebetween, are provided in the accompanying figures merely to facilitate an understanding of the various embodiments described herein and, accordingly, may not necessarily be presented or illustrated to scale, and are not intended to indicate any preference or requirement for an illustrated embodiment to the exclusion of embodiments described with reference thereto.

DETAILED DESCRIPTION

Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following description is not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as can be included within the spirit and scope of the described embodiments as defined by the appended claims. The following disclosure relates to an electronic device (e.g., an electronic watch) having an input mechanism, such as a crown, that may receive translational inputs, rotational inputs, and/or touch inputs and a variable friction mechanism that may produce variable frictional feedback as the crown is rotated.
Inputs received at the crown may result in changes in operation of the electronic device and/or outputs, such as graphical outputs, provided by the electronic device. In various embodiments, the crown includes a shaft extending through an opening in a device enclosure of an electronic device and a user-rotatable crown body coupled to the shaft and positioned at least partially outside of the device enclosure. In some cases, rotational and/or translational inputs provided at the user-rotatable crown body cause the shaft to translate and/or rotate. In general, the term “rotational input” may be used to refer to an input that causes a rotation of the crown and the term “translational input” may be used to refer to an input that causes a linear translation or displacement of the crown. One or more sensors may detect the rotation and/or translation and, in response, provide signal(s) to one or more circuits of the electronic device, such as a processing unit, for processing of the received input(s). In some cases, electronic sensors used to precisely detect rotation (e.g., optical rotation sensors) do not cause friction on the crown, for example because the sensors do not contact the crown. This allows for easy rotation, but it does not provide tactile feedback to the user. As noted above, the electronic watch may include a variable friction mechanism that produces a variable frictional feedback as the crown is rotated by varying a frictional force between the crown and the variable friction mechanism. As used herein, the term “variable frictional feedback” may be used to refer to a variable resistance to a rotational input, which may be perceived by the user as a variable frictional force or resistive torque. 
Because the output may be perceived through user touch, the output may also be referred to as a “tactile output” or a “haptic output.” In some cases, the variable frictional feedback may result in a change in a resistive torque (e.g., a torque that resists rotation of the crown). In some cases, the variable frictional feedback may cause the crown to require more or less torque to rotate. In some cases, the variable frictional feedback may cause the crown to be easier or harder to rotate. In some cases, the variable frictional feedback may simulate one or more mechanisms, such as detents, ratchets, brakes, and the like. The variable friction mechanism may produce variable frictional feedback in response to receiving signals, such as from a processing unit of the electronic watch. The variable frictional feedback may correspond to operational states, events, or other conditions at the electronic watch, including inputs received at the electronic watch (e.g., touch inputs, rotational inputs, translational inputs), outputs of the electronic watch (e.g., graphical outputs, audio outputs, haptic outputs), applications and processes executing on the electronic watch, predetermined sequences, a rotational position of the crown, user interface commands (e.g., volume, zoom, or brightness controls, audio or video controls, scrolling on a list or page, and the like), and the like. The variable friction mechanism may be positioned about or at least partially surround a portion of the crown. In some cases, the variable friction mechanism may contact and exert a variable force on the crown (e.g., a variable force having a component that is normal to a surface of the shaft, the crown body, and/or another component of the crown). The friction between the variable friction mechanism and the crown may be correlated with the force exerted on the crown by the variable friction mechanism. 
As a result, the force exerted on the crown may be varied to vary the friction between the crown and the variable friction mechanism. The varying friction may be used to provide the variable frictional feedback as the crown is rotated. As used herein, the terms “friction” or “frictional force” may be used to refer to a force that resists or impedes the relative motion of two surface or elements sliding against each other. The variable friction mechanism may include one or more friction elements that contact the crown and one or more actuators coupled to the friction elements. In some cases, the variable friction mechanism is positioned around a periphery of the shaft, and the friction elements at least partially surround the shaft. The actuators may apply a force to the friction elements that causes the friction elements to press against the shaft, resulting in a force exerted on the shaft by each friction element. The force applied by the actuators may be varied to vary the friction between the friction elements and the shaft and to produce the variable frictional feedback. In some cases, the actuator causes the friction element to deform and press against the shaft. In some cases, the actuator compresses and/or decompresses the friction element along an axis that is substantially parallel to the shaft to change the force exerted by the friction element on the shaft. Compressing the friction element along an axis that is substantially parallel to the shaft may change a shape of the friction element and cause it to press against the shaft and increase the force. Decompressing or stretching the friction element along the axis that is substantially parallel to the shaft may change the shape of the friction element and reduce or eliminate the force exerted on the shaft. In some cases, the actuator applies a force to the friction element along an axis that is substantially perpendicular to a surface of the shaft. 
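Because the friction between the friction element and the shaft is correlated with the normal force the actuator applies, the resistive torque at the crown can be modeled and patterned. The sketch below uses a simple Coulomb-friction model and a cosine detent profile; the coefficient, radius, and all identifiers are our assumptions, not values from the patent.

```python
# Sketch relating the actuator's normal force to resistive torque at the
# crown, plus a detent-like force pattern over rotation angle. The Coulomb
# model (friction = mu * N) and all numbers are illustrative assumptions.
import math

MU = 0.3               # assumed friction coefficient, element on shaft
SHAFT_RADIUS_M = 0.001 # assumed shaft radius

def resistive_torque(normal_force_n):
    # Friction scales with the force pressing the element against the
    # shaft; torque is that friction acting at the shaft radius.
    return MU * normal_force_n * SHAFT_RADIUS_M

def detent_force(angle_rad, detents=12, base_n=0.2, peak_n=2.0):
    # Raise the normal force near evenly spaced angles so rotation feels
    # like clicking through detents (one of the simulated mechanisms).
    ripple = 0.5 * (1 + math.cos(detents * angle_rad))
    return base_n + (peak_n - base_n) * ripple

assert resistive_torque(2.0) == MU * 2.0 * SHAFT_RADIUS_M
# At a detent (angle 0) the force peaks; midway between detents it bottoms out.
assert abs(detent_force(0.0) - 2.0) < 1e-9
assert abs(detent_force(math.pi / 12) - 0.2) < 1e-9
```

In practice the processing unit would evaluate such a profile at the sensed crown angle and command the actuator accordingly, which is how a contactless rotation sensor can still yield detent-like tactile feedback.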
For example, the actuator may compress the friction element inward and/or move the friction element toward the shaft to increase the force exerted on the shaft and decompress (or stretch) the friction element and/or move the friction element away from the shaft to decrease the force. The force applied by the actuator(s) need not be parallel or perpendicular to the shaft. In various cases, the force exerted by the friction element on the shaft may be varied by applying a force to the friction element in any of several directions. Similarly, the friction element contacting the shaft is not required. In various cases, the friction element may contact a component of the crown that is coupled to the shaft or crown body, such as a shaft retainer or the like. In some cases, the friction element may be brought out of contact with the shaft, crown body, and/or shaft retainer such that there is no friction between the variable friction mechanism and the crown. The crown body may include a conductive portion, such as an inner crown body, that defines a conductive surface, such as a touch-sensitive surface, for receiving touch inputs. In general, the term “touch input” may be used to refer to a touch or gesture applied to the crown by a finger, thumb, or other body part of the user. The touch input may be momentary or sustained, depending on the user's interaction with the device. The conductive portion having a conductive surface may be configured to measure an electrical property associated with the touch. For example, the conductive surface may function as an electrode to sense voltages or signals indicative of one or more touch inputs and/or biological parameters, such as an electrocardiogram, of a user in contact with the conductive surface. 
The conductive portion or conductive surface may be electrically coupled to one or more circuits of the electronic device to transmit signals from the conductive surface for detection and processing as touch inputs and/or biological parameters. For example, the conductive surface may be electrically coupled to the shaft, and an end of the shaft interior to the enclosure, or a conductive shaft retainer interior to the enclosure, may be in mechanical and electrical contact with a connector (e.g., a spring-biased conductor) that carries electrical signals between the shaft or shaft retainer and a circuit (e.g., a processing unit), thereby providing electrical communication between the crown and the circuit. As discussed above, the conductive portion may be an inner crown body of the crown body. The crown body may also include an outer crown body at least partially surrounding the inner crown body. In some embodiments, the outer crown body is electrically isolated from the shaft and/or inner crown body to prevent electrical grounding of the inner crown body with other components of the electronic device, such as the device enclosure, and/or to allow users to provide rotational and/or translational inputs at the crown without accidentally providing touch inputs by contacting the conductive surface of the inner crown body. Similarly, the crown may be electrically isolated from the enclosure. Generally, the crown body is attached to and/or coupled with the shaft to form the crown. The term “attached,” as used herein, may be used to refer to two or more elements, structures, objects, components, parts or the like that are physically affixed, fastened, and/or retained to one another. 
The term “coupled,” as used herein, may be used to refer to two or more elements, structures, objects, components, parts or the like that are physically attached to one another, operate with one another, communicate with one another, are in electrical connection with one another, and/or otherwise interact with one another. Accordingly, while elements attached to one another are coupled to one another, the reverse is not required. As used herein, “operably coupled” may be used to refer to two or more devices that are coupled in any suitable manner for operation and/or communication, including wiredly, wirelessly, or some combination thereof. In various cases, one or more components of the crown may be integrally formed with one another. As used herein, the term “integrally formed with” may be used to refer to defining or forming a unitary structure. For example, the crown body may be integrally formed with the shaft (e.g., the shaft and the crown body are a single part). These and other embodiments are discussed with reference toFIGS.1-15. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes only and should not be construed as limiting. FIG.1shows a functional block diagram of an electronic device100. In some examples, the device100may be an electronic watch. The electronic device100may include a device enclosure116and a crown121, one or more input devices130, one or more output devices132, a display134, and a processing unit111positioned at least partially within the enclosure116. In some cases, the electronic device100includes a crown121configured to receive translational inputs, rotational inputs, and/or touch inputs. Inputs received at the crown121may result in changes in outputs provided by the electronic device100such as a graphical output of the display134, and/or otherwise modify operations of the electronic device. 
In some cases, the crown121may be positioned along a side of the enclosure116, and may extend through an opening123defined in the enclosure. The crown121may include a user-rotatable crown body120and a shaft122. The crown body may be positioned at least partially outside of the device enclosure116and may be coupled to the shaft122. In some cases, the shaft122extends from the crown body and extends through the opening123. In some embodiments, the electronic device100includes a variable friction mechanism124for providing variable frictional feedback at the crown121as the crown is rotated by a rotational input. In some cases, the variable friction mechanism124is positioned around or at least partially surrounds a portion of the crown121, such as the shaft122. The variable friction mechanism124may vary a frictional force between the crown121and the variable friction mechanism as the crown is rotated to change a resistive torque and/or simulate one or more mechanisms, such as detents, ratchets, brakes, and the like. In various embodiments, the variable friction mechanism124may produce variable frictional feedback in response to receiving signals from the processing unit111and/or other sources. The variable frictional feedback may correspond to operational states, events, or other conditions at the electronic watch, including a rotational position of the crown121, inputs received at the input devices130, outputs provided by the output devices132, and the like. The variable friction mechanism124may be electrically coupled to the processing unit111or another circuit of the electronic device100, for example via a connector158eand/or the shaft122. In some cases, the crown121may include a conductive portion that may be used to perform an ECG measurement. The crown body120may define a conductive surface for receiving touch inputs. 
In some cases, the conductive surface functions as an electrode to sense voltages or signals indicative of one or more touch inputs and/or biological parameters, such as an electrocardiogram, of a user in contact with the conductive surface. The enclosure116may define another touch-sensitive or conductive surface that is electrically coupled to the processing unit111and also functions as an electrode. The processing unit111may determine an electrocardiogram using outputs of the electrodes of the crown body120and the enclosure116. In various embodiments, the crown121is electrically isolated from the enclosure116, for example to allow separate measurements at the electrodes. In various embodiments, the crown body120may be electrically coupled to the processing unit111or another circuit of the electronic device100, for example via a connector158aand/or the shaft122. In various embodiments, the display134may be disposed at least partially within the enclosure116. The display134provides a graphical output, for example associated with an operating system, user interface, and/or applications of the electronic device100. In one embodiment, the display134includes one or more sensors and is configured as a touch-sensitive (e.g., single-touch, multi-touch) and/or force-sensitive display to receive inputs from a user. The display134is operably coupled to the processing unit111of the electronic device100, for example by a connector158d. In various embodiments, a graphical output of the display134is responsive to inputs provided at the crown121, the display, or another input device130. For example, the processing unit111may be configured to modify the graphical output of the display134in response to determining an electrocardiogram, receiving rotational inputs, receiving translational inputs, or receiving touch inputs. 
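Purely as an illustration (not part of the described embodiments), the processing unit's use of the two electrodes to determine an electrocardiogram can be sketched as forming a single-lead signal from the potential difference between the crown electrode and the enclosure electrode; the function name and units are hypothetical:

```python
def ecg_lead_samples(crown_mv, enclosure_mv):
    """Form a single-lead ECG trace as the potential difference (in mV)
    between the crown electrode and the enclosure electrode samples."""
    return [c - e for c, e in zip(crown_mv, enclosure_mv)]
```

In this sketch, a user touching the crown with one hand while the enclosure contacts the other wrist closes the measurement circuit, and the differential trace is what the processing unit would analyze.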
In some cases, the variable frictional feedback produced by the variable friction mechanism124is responsive to or otherwise corresponds to the graphical output of the display134. The display134can be implemented with any suitable technology, including, but not limited to, liquid crystal display (LCD) technology, light emitting diode (LED) technology, organic light-emitting diode (OLED) technology, organic electroluminescence (OEL) technology, or another type of display technology. In some cases, the display134is positioned beneath and viewable through a cover sheet that forms at least a portion of the enclosure116. Broadly, the input devices130may detect various types of input, and the output devices132may provide various types of output. The processing unit111may be operably coupled to the input devices130and the output devices132, for example by connectors158band158c. The processing unit111may receive input signals from the input devices130, in response to inputs detected by the input devices. The processing unit111may interpret input signals received from one or more of the input devices130and transmit output signals to one or more of the output devices132. The output signals may cause the output devices132to provide one or more outputs. Detected input at one or more of the input devices130may be used to control one or more functions of the device100. In some cases, one or more of the output devices132may be configured to provide outputs that are dependent on, or manipulated in response to, the input detected by one or more of the input devices130. The outputs provided by one or more of the output devices132may also be responsive to, or initiated by, a program or application executed by the processing unit111and/or an associated companion device. Examples of suitable processing units, input devices, output devices, and displays are discussed in more detail below with respect toFIG.15.
FIG.2Ashows an example of a watch200(e.g., an electronic watch) that incorporates a crown221as described herein. The watch may include a watch body212and a watch band214. Other devices that may incorporate a crown include other wearable electronic devices, other timekeeping devices, other health monitoring or fitness devices, other portable computing devices, mobile phones (including smart phones), tablet computing devices, digital media players, virtual reality devices, audio devices (including earbuds and headphones), and the like. The watch body212may include an enclosure216. The enclosure216may include a front side housing member that faces away from a user's skin when the watch200is worn by a user, and a back side housing member that faces toward the user's skin. Alternatively, the enclosure216may include a singular housing member, or more than two housing members. The one or more housing members may be metallic, plastic, ceramic, glass, or other types of housing members (or combinations of such materials). A cover sheet218may be mounted to a front side of the watch body212(i.e., facing away from a user's skin) and may protect a display mounted within the enclosure216. The display may be viewable by a user through the cover sheet218. In some cases, the cover sheet218may be part of a display stack, which display stack may include a touch sensing or force sensing capability. The display may be configured to depict a graphical output of the watch200, and a user may interact with the graphical output (e.g., using a finger, stylus, or other pointer). As one example, the user may select (or otherwise interact with) a graphic, icon, or the like presented on the display by touching or pressing (e.g., providing touch input) on the display at the location of the graphic. 
As used herein, the term “cover sheet” may be used to refer to any transparent, semi-transparent, or translucent surface made out of glass, a crystalline material (such as sapphire or zirconia), plastic, or the like. Thus, it should be appreciated that the term “cover sheet,” as used herein, encompasses amorphous solids as well as crystalline solids. The cover sheet218may form a part of the enclosure216. In some examples, the cover sheet218may be a sapphire cover sheet. The cover sheet218may also be formed of glass, plastic, or other materials. In some embodiments, the watch body212may include an additional cover sheet (not shown) that forms a part of the enclosure216. The additional cover sheet may have one or more electrodes thereon. For example, the watch body212may include an additional cover sheet mounted to a back side of the watch body212(i.e., facing toward a user's skin). The one or more electrodes on the additional cover sheet may be used to determine a biological parameter, such as a heart rate, an ECG, or the like. In some cases, the electrodes are used in combination with one or more additional electrodes, such as a surface of a crown or other input device. The watch body212may include at least one input device or selection device, such as a crown, scroll wheel, knob, dial, button, or the like, which input device may be operated by a user of the watch200. In some embodiments, the watch200includes a crown221that includes a crown body220and a shaft (not shown inFIG.2A). The enclosure216may define an opening through which the shaft extends. The crown body220may be attached and/or coupled to the shaft, and may be accessible to a user exterior to the enclosure216. The crown body220may be user-rotatable, and may be manipulated (e.g., rotated, pressed) by a user to rotate or translate the shaft. The shaft may be mechanically, electrically, magnetically, and/or optically coupled to components within the enclosure216. 
A user's manipulation of the crown body220and shaft may be used, in turn, to manipulate or select various elements displayed on the display, to adjust a volume of a speaker, to turn the watch200on or off, and so on. The enclosure216may also include an opening through which a button230protrudes. In some embodiments, the crown body220, scroll wheel, knob, dial, button230, or the like may be touch sensitive, conductive, and/or have a conductive surface, and a signal route may be provided between the conductive portion of the crown body220, scroll wheel, knob, dial, button230, or the like and a circuit within the watch body212. The enclosure216may include structures for attaching the watch band214to the watch body212. In some cases, the structures may include elongate recesses or openings through which ends of the watch band214may be inserted and attached to the watch body212. In other cases (not shown), the structures may include indents (e.g., dimples or depressions) in the enclosure216, which indents may receive ends of spring pins that are attached to or threaded through ends of a watch band to attach the watch band to the watch body. The watch band214may be used to secure the watch200to a user, another device, a retaining mechanism, and so on. In some examples, the watch200may lack any or all of the cover sheet218, the display, the crown221, or the button230. For example, the watch200may include an audio input or output interface, a touch input interface, a force input or haptic output interface, or other input or output interface that does not require the display, crown221, or button230. The watch200may also include the aforementioned input or output interfaces in addition to the display, crown221, or button230. When the watch200lacks the display, the front side of the watch200may be covered by the cover sheet218, or by a metallic or other type of housing member. 
FIG.2Bshows a cross-section view of an example crown221and a variable friction mechanism224in the electronic watch200, taken through section line A-A ofFIG.2A, as viewed from the front or rear face of a watch body. As shown inFIG.2B, the crown221includes a crown body220and a shaft222coupled to the crown body. The crown body220may be rotated by a user of the electronic watch200, to in turn rotate the shaft222. In some cases, the crown body220may also be pulled or pushed by the user to translate the shaft222along its axis (e.g., left and right with respect toFIG.2B). The crown body220may be operably coupled to a circuit within the enclosure216(e.g., a processing unit296), but electrically isolated from the enclosure216. In some cases, the electronic watch200includes a variable friction mechanism224that produces a variable frictional feedback as the crown221is rotated. In some cases, the variable frictional feedback may correspond to inputs received by the electronic watch200(e.g., a rotational input received by the crown221) and/or outputs provided by the electronic watch (e.g., a graphical output provided by a display). In some cases, the variable friction mechanism224produces the variable frictional feedback by varying a frictional force between the crown and the variable friction mechanism. The variable frictional feedback may result in a change in a resistive torque (e.g., a torque that resists rotation of the crown). In some cases, the variable frictional feedback may cause the crown221to require more or less torque to rotate. Similarly, the variable frictional feedback may cause the crown221to be easier or harder to rotate. In some cases, the variable frictional feedback may simulate one or more mechanisms, such as detents, ratchets, brakes, and the like. 
The variable friction mechanism224may contact and exert a variable force on the crown221(e.g., a variable force having a component that is normal to a surface of the shaft222, the crown body220, and/or another component of the crown221). As discussed above, the friction between the variable friction mechanism224and the crown221may be correlated with the force exerted on the crown by the variable friction mechanism. As a result, the force exerted on the crown221may be varied to vary the friction between the crown and the variable friction mechanism224. The varying friction may be used to vary the resistive torque to produce variable frictional feedback as the crown221is rotated. The variable friction mechanism224may be disposed within the enclosure, coupled to the enclosure, or otherwise positioned to contact the crown221. In some cases, the variable friction mechanism224at least partially surrounds the shaft222and is configured to contact the shaft around a periphery of the shaft. The variable friction mechanism224may include one or more friction elements that contact the shaft222and one or more actuators coupled to the friction elements and configured to vary the force between the friction elements and the shaft, as discussed in more detail below with respect toFIGS.5A-8. The variable friction mechanism224may produce variable frictional feedback in response to receiving signals, such as from a processing unit296of the electronic watch200. 
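The correlation between the exerted force and the resulting friction described above can be illustrated with a simple Coulomb-friction model; the model and parameter names are assumptions for illustration, as the source does not specify a friction model:

```python
def resistive_torque_from_normal_force(normal_force_n, mu, shaft_radius_m):
    """Coulomb-friction sketch: the friction force scales with the normal
    force the mechanism exerts on the shaft (friction = mu * N), and the
    resistive torque scales with the shaft radius (torque = friction * r)."""
    return mu * normal_force_n * shaft_radius_m
```

Under this model, commanding a larger normal force from the actuator directly raises the resistive torque felt at the crown, which is the mechanism by which the frictional feedback is varied.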
The variable frictional feedback may correspond to operational states, events, or other conditions at the electronic watch, including inputs received at the electronic watch200(e.g., touch inputs, rotational inputs, translational inputs), outputs of the electronic watch (e.g., graphical outputs, audio outputs, haptic outputs), applications and processes executing on the electronic watch, predetermined sequences, a rotational position of the crown, user interface commands (e.g., volume, zoom, or brightness controls, audio or video controls, scrolling on a list or page, and the like), and the like. The variable friction mechanism224may be operably coupled to the processing unit296via a connector258eand/or via one or more additional components of the electronic device. As discussed above, the shaft222extends through an opening in a device enclosure216. In some cases, a collar242is disposed in an opening in the enclosure216and defines the opening through which the shaft222extends. In some embodiments, the collar242may be aligned with the opening in the enclosure216. In some embodiments, the collar242may be coupled to the enclosure216or another component internal to the enclosure (not shown) via threads on a male portion of the collar242and corresponding threads on a female portion of the enclosure216. Optionally, one or more gaskets made of a synthetic rubber and fluoropolymer elastomer (e.g., Viton), silicone, or another compressible material may be disposed between the collar242, the enclosure216, and/or the shaft222to provide stability to the collar242and/or the shaft and/or provide a moisture barrier between the collar242, the enclosure216, and/or the shaft222. Example gaskets are discussed in more detail below with respect toFIGS.5A-5B. In some cases, a rotation sensor262for detecting rotation of the crown221is disposed within the enclosure216.
In some cases, the rotation sensor262does not cause substantial friction on the crown221, for example because the rotation sensor does not contact the crown. This allows the crown to rotate easily, but does not by itself provide tactile feedback to the user. The rotation sensor262may include one or more light emitters and/or light detectors. The light emitter(s) may illuminate an encoder pattern or other rotating portion of the shaft222or shaft retainer (not shown). The encoder pattern may be carried on (e.g., formed on, printed on, etc.) the shaft222or the shaft retainer. In some cases, the light emitter(s) may illuminate trackable elements (e.g., surface defects of the shaft222or another component of the crown221) instead of or in addition to an encoder pattern. The shaft222or another component of the crown221may reflect light emitted by the light emitter(s). In some cases, the reflected light is reflected off of the encoder pattern and/or the trackable elements. The light detector(s) may receive the reflected light, and the processing unit296may determine a direction of rotation, speed of rotation, rotational position, translation, or other state(s) of the crown221based on the reflected light. In some embodiments, the rotation sensor262may detect rotation of the crown221by detecting rotation of the shaft222. The rotation sensor262may be operably coupled to the processing unit296of the electronic device by a connector258a. In some cases, a translation sensor240for detecting translation of the crown221is disposed within the enclosure216. In some embodiments, the translation sensor240includes an electrical switch, such as a tactile dome switch, which may be actuated or change state in response to translation of the crown221. Thus, when a user presses on the crown body220, the shaft222may translate into the enclosure216(e.g., into the enclosure of a watch body) and actuate the switch, placing the switch in one of a number of states.
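The source leaves the decoding of the rotation sensor's detector outputs unspecified; one common scheme for two light detectors reading an encoder pattern is quadrature decoding, sketched below as an illustrative assumption (the transition table maps successive detector states to a signed step):

```python
# Two detectors (A, B) sample an encoder pattern 90 degrees out of phase.
# Each valid state transition contributes +1 or -1, giving direction;
# counts accumulated over a known sample period give rotational speed.
TRANSITIONS = {
    (0, 0, 0, 1): +1, (0, 1, 1, 1): +1, (1, 1, 1, 0): +1, (1, 0, 0, 0): +1,
    (0, 0, 1, 0): -1, (1, 0, 1, 1): -1, (1, 1, 0, 1): -1, (0, 1, 0, 0): -1,
}

def decode(samples):
    """Accumulate signed encoder counts from a stream of (A, B) samples."""
    position = 0
    prev = samples[0]
    for cur in samples[1:]:
        position += TRANSITIONS.get(prev + cur, 0)  # 0: no change/invalid
        prev = cur
    return position
```

A positive result indicates net rotation in one direction, a negative result the other; dividing the count change by the elapsed time would yield rotational speed.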
When the user releases pressure on the crown body220or pulls the crown body220outward from the enclosure216, the switch may retain the state in which it was placed when pressed, or advance to another state, or toggle between two states, depending on the type or configuration of the switch. In some embodiments, the translation sensor240includes one or more light emitters and/or light detectors. The light emitter(s) may illuminate an encoder pattern or other portion of the shaft222or shaft retainer. The light detector(s) may receive reflections of the light emitted by the light emitter(s), and a processing unit296may determine a direction of rotation, speed of rotation, rotational position, translation, or other state(s) of the crown221. In some embodiments, the translation sensor240may detect translation of the crown221by detecting translation of the shaft222. The translation sensor240may be operably coupled to a processing unit296of the electronic device by a connector258c. In various embodiments, the shaft222and the crown body220are operably coupled to a processing unit296and/or one or more other circuits of an electronic device. One or more connectors may operably couple the shaft222to the processing unit296and/or one or more other circuits. In various cases, a connector258dis in mechanical and electrical contact with the shaft222or shaft retainer. In some cases, the connector258dmay be formed (e.g., stamped or bent) from a piece of metal (e.g., stainless steel). In other cases, the connector258dmay take on any of several forms and materials. When the shaft222is translatable, translation of the shaft222into the enclosure (e.g., into the enclosure of a watch body) may cause the connector258dto deform or move. 
However, the connector258dmay have a spring bias or other mechanism which causes the connector258dto maintain electrical contact with the shaft retainer or shaft, regardless of whether the shaft222is in a first position or a second position with reference to translation of the shaft222. In some embodiments of the crown221, the connector258dmay include a conductive brush that is biased to contact a side of the shaft222or a side of the shaft retainer. The conductive brush may maintain electrical contact with the shaft222or shaft retainer through rotation or translation of the shaft222, and may be electrically connected to the processing unit296and/or another circuit such that the shaft remains operably coupled to the processing unit as the shaft rotates. This allows the crown221, and in particular the crown body220, to remain operably coupled to the processing unit296as the crown221is manipulated (e.g., rotated and/or translated) by a user, which allows the electrode(s) on the crown to maintain their functionality as the crown is manipulated. The processing unit296or other circuit of the electronic device may be operably coupled to the crown221via the connector258d, the shaft retainer, and/or the shaft222. In some cases, the connector258dis coupled to the processing unit296via an additional connector258b(e.g., a cable, flex, or other conductive member). In some cases, as shown inFIG.2B, the connector258dmay be positioned between the shaft222and/or the shaft retainer and the translation sensor240. The connector258dmay be attached to the shaft, the shaft retainer, and/or the translation sensor240. In some cases, the connector258dmay be connected to the processing unit296via the translation sensor240and/or the connector258c. In some cases, the connector258dis integrated with the translation sensor240. For example, the shaft and/or the shaft retainer may be operably coupled to the translation sensor240to couple the crown221to the processing unit296. 
In some embodiments, a bracket256may be attached (e.g., laser welded) to the enclosure216or another element within the enclosure. The rotation sensor262and/or the translation sensor240may be mechanically coupled to bracket256, and the bracket may support the rotation sensor262and/or the translation sensor240within the enclosure216. In the embodiment shown inFIG.2B, the rotation sensor262and the translation sensor240are shown as separate components, but in various embodiments, the rotation sensor262and the translation sensor240may be combined and/or located in different positions from those shown. The connectors258a-emay be operably coupled to the processing unit296, for example as discussed with respect toFIG.15below. The processing unit296may determine whether a user is touching the touch-sensitive surface of the crown body220, and/or determine a biological parameter of the user based on a signal received from or provided to the user via the touch-sensitive surface of the crown body220. In some cases, the processing unit296may determine other parameters based on signals received from or provided to the touch-sensitive surface of the crown body220. In some cases, the processing unit296may operate the crown221and/or one or more additional electrodes as an electrocardiogram measurement device and provide an electrocardiogram to a user of the watch200including the crown221. As discussed above, in some cases, the crown body220includes a conductive portion defining a portion of the external surface of the crown body. In some cases, the crown body220defines the touch-sensitive surface, and is operably coupled to the shaft222. The crown body220may be a separate part that is mechanically coupled to the shaft222or the crown body220and the shaft222may be a single part. The crown body220may function as an electrode as discussed above. 
In various embodiments, the shaft222and the crown body220may be formed of any suitable conductive material or combination of materials, including titanium, steel, brass, ceramic, doped materials (e.g., plastics), or the like. One or more surfaces of the crown body220and/or the shaft222may be coated or otherwise treated to prevent or mitigate corrosion, wear, grounding effects, and the like. Coating processes may include electrophoretic deposition, physical vapor deposition, and the like. The shaft222and/or the crown body220may include various features for coupling the shaft to the enclosure, a collar, and/or other component(s) of the electronic device. In some embodiments, one or more attachment mechanism(s) may mechanically couple the crown body220to other components of the crown221. In some cases, an attachment mechanism that mechanically and/or operably couples the crown body220to the shaft222also mechanically couples the crown body220to other components of the crown221. In some embodiments, one or more components of the crown221may have a conductive surface covered by a thin non-conductive coating. The non-conductive coating may provide a dielectric for capacitive coupling between a conductive surface and a finger of a user of the crown221(or an electronic watch or other device that includes the crown221). In the same or different embodiments, the crown221may have a non-conductive coating on a surface of the crown body220facing the enclosure216. In some examples, the conductive material(s) may include a PVD deposited layer of aluminum titanium nitride (AlTiN) or chromium silicon carbonitride (CrSiCN). In some cases, the crown body220includes one or more non-conductive portions, such as an outer crown body that surrounds a conductive inner crown body. The outer crown body may be formed of any suitable material, including conductive and non-conductive materials (e.g., aluminum, stainless steel, ceramics, or the like). 
The crown body220may include a conductive portion, such as an inner crown body, which defines the touch-sensitive surface for receiving touch inputs, and an outer crown body at least partially surrounding the conductive portion. In some embodiments, the outer crown body is electrically isolated from the inner crown body to prevent electrical grounding of the inner crown body with other components of the electronic device, such as the device enclosure216, and/or to allow users to provide rotational and/or translational inputs at the crown221without accidentally providing touch inputs by contacting the touch-sensitive surface of the inner crown body. In various embodiments, the crown221may include adhesive and/or other fasteners for coupling the components and/or coupling the crown221to an electronic device. Any gaps or empty spaces shown inFIG.2Bmay be filled with adhesive or other substances to couple the components of the crown, electrically isolate the shaft from other components of the crown221and/or protect components of the crown (e.g., provide lubrication, mitigate corrosion, and the like). The example arrangements of components discussed with respect toFIG.2Bare for illustrative purposes, and are not meant to be limiting or exhaustive. In some cases, the crown221may include more or fewer components, and the illustrated components may be combined with one another and/or additional components. Similarly, the illustrated components may be divided into multiple separate components. As discussed above, the variable friction mechanism may produce variable frictional feedback in response to receiving signals, such as from a processing unit of the electronic watch. 
The variable frictional feedback may correspond to operational states, events, or other conditions at the electronic watch, including inputs received at the electronic watch (e.g., touch inputs, rotational inputs, translational inputs), outputs of the electronic watch (e.g., graphical outputs, audio outputs, haptic outputs), applications and processes executing on the electronic watch, predetermined sequences, a rotational position of the crown, user interface commands (e.g., volume, zoom, or brightness controls, audio or video controls, scrolling on a list or page, and the like), and the like. FIG.3shows an example method300for providing variable frictional feedback as a crown of an electronic watch is rotated. At block302, the electronic watch detects a rotational input at a crown by detecting rotational movement of the crown (e.g., a crown221). The electronic watch may detect the rotational input using a rotation sensor (e.g., rotation sensor262) operably coupled to a processing unit (e.g., processing unit296). Detecting the rotational input may include detecting rotational speed(s) and/or rotational position(s) of the crown. In some cases, the processing unit may determine whether the rotational input exceeds a threshold level of rotational movement (e.g., a threshold rotation speed or a threshold rotation distance). In some cases, the method only proceeds if the rotational input exceeds the threshold level of rotational movement. At block304, the processing unit determines a variable frictional feedback to produce as the crown rotates. In some cases, the variable frictional feedback is determined in response to detecting the rotational input at block302. In some cases, the variable frictional feedback corresponds to one or more characteristics of the rotational input detected at block302. 
For example, the variable frictional feedback may correspond to a rotational speed or position of the crown, an output associated with the rotational input, a user interface command associated with the user input, or the like. The processing unit may determine one or more characteristics of the rotational input. In some cases, determining the variable frictional feedback may include determining an amount of variable frictional feedback to be produced. The amount of variable frictional feedback may refer to an amount of resistive torque, an amount of change in resistive torque, a length of variable frictional feedback, a pattern of variable frictional feedback, or the like. The processing unit may determine an amount of the variable frictional feedback to be produced based, at least in part, on a characteristic of the rotational input. At block306, the processing unit outputs a signal that corresponds to the variable frictional feedback determined at block304. The signal may be transmitted to a variable friction mechanism of the electronic watch (e.g., variable friction mechanism224) to direct the variable friction mechanism to produce the variable frictional feedback. At block308, in response to receiving the signal from the processing unit, the variable friction mechanism varies a friction between the variable friction mechanism and the crown to produce the variable frictional feedback as the crown is rotated. In some cases, varying the friction between the variable friction mechanism and the crown varies a resistive torque associated with rotating the crown. As a result, a user rotating the crown may perceive (e.g., by touch) that the crown becomes harder or easier to rotate and/or that the rotation of the crown simulates one or more mechanisms, such as detents, ratchets, brakes, and the like.
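The flow of blocks302-308can be sketched as follows; the threshold value, scaling rule, and signal format are illustrative assumptions, not taken from the source:

```python
def handle_rotation(rotation_deg, threshold_deg=2.0):
    """Sketch of method300: gate on a threshold level of rotational
    movement, determine an amount of feedback, and output a signal for
    the variable friction mechanism."""
    # Block 302: proceed only if the rotational input exceeds the threshold.
    if abs(rotation_deg) < threshold_deg:
        return None
    # Block 304: determine the amount of variable frictional feedback from
    # a characteristic of the input (here, its magnitude -- an assumption).
    feedback_level = min(1.0, abs(rotation_deg) / 360.0)
    # Block 306: output a signal directing the mechanism to produce the
    # feedback (block 308 is then performed by the mechanism itself).
    return {"friction_command": feedback_level}
```

In this sketch, sub-threshold rotations produce no feedback signal, while larger rotations produce a command whose magnitude tracks the rotation, one possible mapping from input characteristic to feedback amount.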
In various embodiments, varying the friction between the variable friction mechanism and the crown includes increasing and/or decreasing the friction between the variable friction mechanism and the crown. The friction may be changed multiple times in response to a single signal, and multiple signals may be provided in sequence to produce multiple changes in the friction. The variable frictional feedback may be provided by changing the friction as the crown is rotated over a range of time, across a range of rotational positions of the crown, or some combination thereof. The variable frictional feedback may be provided by changing the friction in a continuous manner, at discrete times or rotational positions, and/or in a cyclical or otherwise patterned manner. As discussed above, in some cases, the variable friction mechanism is positioned around or at least partially surrounds a portion of the crown. The variable friction mechanism may contact a surface of the crown to produce the friction between the variable friction mechanism and the crown. In some cases, the variable friction mechanism is capable of being brought out of contact with the crown, thereby removing the friction between the variable friction mechanism and the crown. The method300is an example method for providing variable frictional feedback and is not meant to be limiting. Methods for providing variable frictional feedback may omit steps from and/or add steps to the method300. Similarly, steps of the method300may be performed in different orders than the example order discussed above. FIGS.4A-4Dillustrate graphs of example variable frictional feedback sequences. As discussed above, in some cases, the variable friction mechanism provides variable frictional feedback as the crown is rotated, such as during a rotational input. The variable friction mechanism may vary the friction between the crown and the variable friction mechanism, which causes a varying resistive torque associated with rotating the crown.
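Cyclical or otherwise patterned feedback of the kind described above could, for example, be a torque profile that peaks a fixed number of times per complete rotation, simulating detents; the cosine shape and parameter values below are illustrative assumptions:

```python
import math

def detent_torque(angle_deg, detents_per_rotation, base=0.2, amplitude=0.8):
    """Resistive torque that peaks detents_per_rotation times per full
    rotation of the crown, simulating mechanical detents."""
    phase = math.radians(angle_deg * detents_per_rotation)
    return base + amplitude * (0.5 + 0.5 * math.cos(phase))
```

Different graphical output modes could then use different detents_per_rotation values, changing how many resistance peaks a user feels per turn.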
In some cases, the resistive torque associated with rotating the crown is proportional to the friction between the crown and the variable friction mechanism. In various embodiments, the variable friction mechanism may be configured to increase the resistive torque for a first portion of a rotational input and decrease the resistive torque for a second portion of the rotational input. In some cases, the resistive torque may be increased and/or decreased in response to receiving a signal from a processing unit. In some cases, the variable friction mechanism may increase and/or decrease the resistive torque in accordance with a predetermined number of times per complete rotation of the crown and/or at predetermined intervals of time.

In some cases, an electronic watch may have multiple different graphical output modes in which different variable frictional feedback is provided. In some cases, in a first graphical output mode, the variable friction mechanism may alternately increase and decrease the resistive torque associated with rotating the crown in accordance with a first predetermined number of times per complete rotation of the crown and/or at first predetermined intervals of time having a first duration. In a second graphical output mode, the variable friction mechanism may alternately increase and decrease the resistive torque associated with rotating the crown in accordance with a second predetermined number of times per complete rotation of the crown and/or at predetermined intervals of time having a second duration.

In some cases, the variable friction mechanism may vary the friction between the crown and the variable friction mechanism, and thus the resistive torque associated with rotating the crown, over time. FIG. 4A illustrates a graph 400 of varying resistive torque over time for an example sequence of variable frictional feedback. From T0 to T1, the resistive torque at the crown is F1. From T1 to T2, the resistive torque increases to F2.
The resistive torque may be increased, for example, in response to receiving a first signal from a processing unit as discussed above with respect to FIG. 3. From T2 to T3, the resistive torque remains constant at F2. Between T3 and T4, the resistive torque decreases to F3. The resistive torque may be decreased, for example, in response to receiving a second signal from the processing unit as discussed above with respect to FIG. 3. After T4, the resistive torque remains constant at F3. In various embodiments, changes in resistive torque at predetermined intervals of time may result in variable frictional feedback as the crown is rotated.

As discussed above, in some cases, the friction between the shaft and the variable friction mechanism, and thus the resistive torque associated with rotating the crown, is based on (e.g., is a function of) a rotational position of the crown. The rotational position of the crown may refer to an angular position of the crown across a range of positions. In some cases, the rotational position may correspond to an absolute angular position of the crown as it rotates (e.g., from 0 to 360 degrees as the crown completes one rotation). In some cases, the rotational position may correspond to a position of the crown that corresponds to an event or condition at the electronic watch, such as a volume or display brightness level, a position or selection on a scrollable or selectable page or list, or a position in an audio track or video, as described in more detail with respect to FIGS. 12A-14B.

FIG. 4B illustrates a graph 410 of varying resistive torque versus rotational position or speed for example sequences of variable frictional feedback. As shown in FIG. 4B, the resistive torque associated with rotating the crown may continuously increase or decrease between a first rotational position and a second rotational position. In some cases, as shown by curves 412 and 416, the increase or decrease may be linear.
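The FIG. 4A time sequence described above (hold at F1, ramp to F2, hold, ramp down to F3, hold) can be modeled as a piecewise-linear torque profile. The specific times and torque values below are illustrative assumptions; the figure itself defines only the qualitative shape.

```python
def torque_at(t, t1=1.0, t2=2.0, t3=3.0, t4=4.0,
              f1=1.0, f2=2.0, f3=1.5):
    # Piecewise profile mirroring FIG. 4A: hold at F1 until T1, ramp to
    # F2 by T2, hold until T3, ramp down to F3 by T4, then hold.
    # All breakpoints and torque levels are illustrative.
    if t < t1:
        return f1
    if t < t2:  # linear ramp F1 -> F2 between T1 and T2
        return f1 + (f2 - f1) * (t - t1) / (t2 - t1)
    if t < t3:
        return f2
    if t < t4:  # linear ramp F2 -> F3 between T3 and T4
        return f2 + (f3 - f2) * (t - t3) / (t4 - t3)
    return f3

print(torque_at(0.5))  # 1.0 (F1, before T1)
print(torque_at(1.5))  # 1.5 (midway up the F1 -> F2 ramp)
print(torque_at(4.5))  # 1.5 (F3, after T4)
```

A processing unit could sample such a profile over time and send the corresponding signals to the variable friction mechanism.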
In some cases, as shown by curves 414 and 418, the increase or decrease may be non-linear.

As described above, the variable frictional feedback may correspond to operational states, events, or other conditions at the electronic watch, including user interface commands. For example, the variable frictional feedback may be provided in accordance with a volume adjustment process in which the crown may be used to set and change a volume level at the electronic watch. The rotational position R0 may correspond to a first volume level and the rotational position R1 may correspond to a second, different (e.g., higher or lower) volume level. A user may rotate the crown between R0 and R1 to adjust the output volume of the electronic watch or a connected device. In some cases, R0 corresponds to a volume of zero and R1 corresponds to a maximum volume. As shown by the example resistive torque curves 412, 414, 416, and 418, the friction between the variable friction mechanism and the shaft, and thus the resistive torque associated with rotating the crown, may continuously increase or decrease between R0 and R1 to give the user variable frictional feedback that indicates a current volume level in the range of possible volume levels. For example, with respect to curve 412, the resistive torque may have a value of F4 at a first rotational position R2 that corresponds to a first volume, and the resistive torque may have a higher value of F5 at a second rotational position R3 that corresponds to a second volume that is closer to the maximum volume. In some cases, the variable frictional feedback may render the crown substantially non-rotatable at position R0 and/or R1 to indicate the ends of the volume range to the user.

In some cases, the sequences of variable frictional feedback shown in FIG. 4B may correspond to a position on a scrollable and/or selectable page or list of items (e.g., selectable elements) displayed by the electronic watch.
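The volume adjustment example above, using a linear curve in the spirit of curve 412, can be sketched as a position-to-torque mapping. The angular range and torque endpoints are illustrative assumptions.

```python
def resistive_torque(position_deg, r0=0.0, r1=360.0, f_min=0.5, f_max=2.0):
    # Linear torque-vs-position curve like curve 412: torque grows from
    # f_min at R0 (zero volume) to f_max at R1 (maximum volume).
    # The 0-360 degree range and torque values are assumptions.
    frac = (position_deg - r0) / (r1 - r0)
    frac = min(max(frac, 0.0), 1.0)  # clamp at the ends of the range
    return f_min + (f_max - f_min) * frac

# Torque rises as the crown approaches the maximum-volume position R1,
# cueing the user to the current level within the volume range.
print(resistive_torque(90))   # 0.875
print(resistive_torque(360))  # 2.0
```

A non-linear curve such as 414 or 418 would simply replace the linear interpolation with, e.g., a quadratic or exponential function of `frac`.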
For example, R0 may correspond to the top of the page or list and R1 may correspond to the bottom of the page or list, and a user may rotate the crown between R0 and R1 to navigate along the page or list. As shown by the example resistive torque curves 412, 414, 416, and 418, the friction between the variable friction mechanism and the shaft, and thus the resistive torque associated with rotating the crown, may continuously increase or decrease between R0 and R1 to give the user variable frictional feedback that indicates a position on the page or list. For example, with respect to curve 412, the resistive torque may have a value of F4 at a first rotational position R2 that corresponds to a first position, and the resistive torque may have a higher value of F5 at a second rotational position R3 that corresponds to a second position that is closer to the end of the page or list. In some cases, the variable frictional feedback may render the crown substantially non-rotatable at position R0 and/or R1 to indicate the top and/or bottom of the page or list to the user. For example, if a selected element in a scrollable list is not the last item of the scrollable list, the variable friction mechanism may allow rotation of the crown in both directions, and if the selected element is the last item of the scrollable list, the variable friction mechanism may allow rotation of the crown in only one direction, such as the direction that allows scrolling away from the last item in the scrollable list. While the curves shown in FIG. 4B are continuous, variable frictional feedback may include non-continuous curves as well. The examples above of variable frictional feedback corresponding to a volume level or position on a page or list are not meant to be limiting.
In various embodiments, the variable frictional feedback may correspond to any number of operational states, events, or other conditions at the electronic watch, including inputs received at the electronic watch (e.g., touch inputs, rotational inputs, translational inputs), outputs of the electronic watch (e.g., graphical outputs, audio outputs, haptic outputs), applications and processes executing on the electronic watch, predetermined sequences, a rotational position of the crown, user interface commands (e.g., volume, zoom, or brightness controls, audio or video controls, scrolling on a list or page, and the like), and the like.

In some cases, the resistive torque associated with rotating the crown may vary based on a rotational speed of the rotation of the crown. For example, the resistive torque associated with rotating the crown at a first rotational speed may be greater or less than the resistive torque associated with rotating the crown at a second rotational speed. The resistive torque may continuously increase or decrease as a rotational speed changes, and/or different ranges of rotational speed may have different associated resistive torque. As shown by the example resistive torque curves 412, 414, 416, and 418, the friction between the variable friction mechanism and the shaft, and thus the resistive torque associated with rotating the crown, may continuously increase or decrease between rotational speed R0 and rotational speed R1 to give the user variable frictional feedback based on the rotational speed of the crown. For example, with respect to curve 412, the resistive torque may have a value of F4 at a first rotational speed R2 and the resistive torque may have a higher value of F5 at a second, faster rotational speed R3. In some cases, the variable frictional feedback may render the crown substantially non-rotatable at rotational speed R0 and/or R1 to indicate the ends of the speed range to the user.
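The banded speed-to-torque behavior described above (different ranges of rotational speed with different associated resistive torque) can be sketched with a simple lookup. The speed thresholds and torque values are illustrative assumptions.

```python
def torque_for_speed(speed_deg_per_s):
    # Speed bands with decreasing torque, echoing curves 416/418:
    # higher resistance for fine, slow adjustments; lower resistance
    # for fast rotation (e.g., scrolling). Thresholds and torque
    # values are assumptions for illustration.
    if speed_deg_per_s < 30:
        return 2.0   # slow: firm feel for precise input
    if speed_deg_per_s < 120:
        return 1.0   # moderate speed
    return 0.4       # fast: easy spinning for long scrolls

print(torque_for_speed(10))   # 2.0
print(torque_for_speed(200))  # 0.4
```

A continuous variant (curve 412-style) would interpolate torque between speed endpoints instead of using discrete bands.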
In some cases, as shown in curves 416 and 418, the resistive torque may have a first value for a first rotational speed and a second, lower value for a greater rotational speed. This allows for finer movements at slower speeds and easier rotation (e.g., for scrolling) at higher speeds.

In some cases, the variable frictional feedback includes alternating increases and decreases of resistive torque and/or cyclical or otherwise patterned changes in resistive torque. As discussed above, the variable frictional feedback may be used to provide a tactile output or haptic output that may be perceived through user touch. In some cases, the variable frictional feedback may simulate one or more mechanisms, such as detents, ratchets, brakes, and the like.

FIG. 4C illustrates curves 420, 422, 424, and 426 of resistive torque versus rotational position for example sequences of variable frictional feedback. Curves 420, 422, and 424 may result in a sensation to a user that simulates a detent mechanism in which the resistance to rotation changes based on the rotational position of the crown. Curve 426 may result in a sensation to a user (e.g., a haptic output or tactile output) that simulates a ratchet mechanism in which rotating the crown in a first direction has a different sensation than rotating the crown in a second direction.

As discussed above, the variable frictional feedback sequences shown in FIG. 4C may be described as cyclical, in which a pattern is repeated to create the variable frictional feedback. For example, in curve 420, a pattern 421 is repeated 10 times between rotational position R4 and rotational position R5. In curve 422, a pattern 423 is repeated 10 times between rotational position R4 and rotational position R5. In curve 424, a pattern 425 is repeated 5 times between rotational position R4 and rotational position R5. In curve 426, a pattern 427 is repeated 9 times between rotational position R4 and rotational position R5.
In some cases, the number of repetitions of a pattern may be varied to provide different variable frictional feedback. For example, in a first graphical output mode, a pattern may be repeated a first predetermined number of times per complete rotation of the crown, and in a second graphical output mode, the pattern may be repeated a second predetermined number of times per complete rotation of the crown. For example, with respect to curve 422, in a first mode in which rotational position R5 has an angular difference of 360 degrees from rotational position R4, the pattern 423 repeats 10 times per complete rotation of the crown. In a second mode in which rotational position R5 has an angular difference of 180 degrees from rotational position R4, the pattern 423 repeats 10 times per half rotation of the crown, or 20 times per complete rotation of the crown.

In some cases, the variable frictional feedback is not cyclical but includes one or more increases and/or decreases of resistive torque based on angular position or time. FIG. 4D illustrates a graph 430 of resistive torque versus rotational position for an example sequence of variable frictional feedback. As shown in graph 430, the resistive torque increases from F6 to F8 between rotational positions R6 and R7 and decreases from F8 to F7 between rotational positions R7 and R8. Similarly, the resistive torque increases from F7 to F10 between rotational positions R8 and R9 and decreases from F10 to F9 between rotational positions R9 and R10. The resistive torque increases from F9 to F11 between rotational positions R10 and R11 and decreases from F11 back to F6 between rotational positions R11 and R12. The variable frictional feedback provided in the sequence shown in FIG. 4D may simulate a combination of a continuous increase (or decrease) in resistive torque as shown and described with respect to FIG. 4B and simulated mechanisms as shown and described with respect to FIG. 4C.
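The cyclical detent patterns of FIG. 4C, including the mode-dependent repetition counts discussed above, can be sketched as a periodic torque function of crown angle. The raised-cosine bump shape and all constants are illustrative assumptions; the figures define only that a pattern repeats a set number of times per rotation.

```python
import math

def detent_torque(angle_deg, detents_per_rotation, base=1.0, amplitude=0.5):
    # Cyclical torque pattern in the spirit of curves 420-424: a
    # raised-cosine bump repeated `detents_per_rotation` times per
    # 360 degrees of crown rotation. Shape and constants are assumed.
    phase = (angle_deg * detents_per_rotation / 360.0) % 1.0
    return base + amplitude * 0.5 * (1 - math.cos(2 * math.pi * phase))

# First graphical output mode: 10 detents per complete rotation;
# a second mode might use 20 detents over the same travel.
print(detent_torque(18, 10))  # 1.5 (peak of a detent: base + amplitude)
print(detent_torque(36, 10))  # 1.0 (trough between detents: base)
```

A ratchet feel like curve 426 could be approximated by making the bump asymmetric, so torque rises sharply in one rotational direction and gently in the other.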
In some cases, the decreases in resistive torque between R7 and R8 and between R9 and R10 may represent feedback features that are perceptible to a user and correspond to an event or condition at the electronic watch. For example, the feedback features may correspond to a particular value of an input or user interface command, such as a particular volume, zoom, or brightness level, a particular location in an audio file or video (e.g., beginnings or ends of segments, chapters, scenes, episodes, and the like), a particular position on a list or page, and the like. The feedback features may also correspond to a change in the rotational input. For example, in the case of seeking in an audio track or video, a first feedback feature (e.g., the decrease in resistive torque between R7 and R8) may represent a first time from a current position (e.g., 10 seconds ahead) and a second feedback feature (e.g., the decrease in resistive torque between R9 and R10) may represent a second time from a current position (e.g., 1 minute ahead), allowing users to easily skip back and forth at different increments of time.

FIGS. 4A-4D are illustrative and are not meant to be limiting. Different sequences of variable frictional feedback may be provided in addition to what is shown in FIGS. 4A-4D. For example, graphs that are shown as resistive torque versus time may apply to sequences of resistive torque versus rotational position and/or speed, and vice versa. Additionally, the movement between rotational positions may be bidirectional, such that a particular sequence of variable frictional feedback may travel in either or both directions along the curves shown in FIGS. 4A-4D.

FIGS. 5A-5B illustrate an example of a crown 521 installed in an electronic watch 500 having a variable friction mechanism 524, taken through section line A-A of FIG. 2A, as viewed from the front or rear face of a watch body.
The crown 521 includes a crown body 520 and a shaft 522, and may have similar functionality and/or features as the crowns discussed herein (e.g., crown 121, crown 221). The electronic watch 500 may have similar components and/or functionality as the electronic watches described herein (e.g., electronic watches 100, 200).

FIG. 5A depicts an example of a crown having a variable friction mechanism 524 positioned around a portion of the crown 521. The friction mechanism 524 may also be described as being positioned at least partially about the crown 521. In various embodiments, the variable friction mechanism 524 may extend all or part of the way around one or more components of the crown 521, such as the shaft 522.

As discussed above, in some cases, a variable friction mechanism 524 is positioned in the electronic watch 500 and adapted to produce a variable frictional feedback as the crown 521 is rotated by varying a frictional force between the crown and the variable friction mechanism. In some cases, the variable friction mechanism 524 may be disposed within a collar 542. The variable friction mechanism 524 may include one or more friction elements 529 that contact the crown 521 (e.g., the shaft 522 or the crown body 520) and exert a frictional force on the crown as the crown is rotated. In some cases, the friction element 529 is a ring of compressible (e.g., elastic) material disposed around a periphery of the shaft. The variable friction mechanism 524 may include one or more actuators 525a and 525b that variably compress the friction element 529 to change the frictional force exerted on the crown 521 by the friction element. By variably changing the compression of the friction element 529, variable frictional feedback as described herein may be provided. In some cases, the compression may occur in a direction that is substantially parallel to a rotational axis of the shaft 522.
In some cases, the compression may occur in different directions, such as substantially perpendicular to a rotational axis of the shaft 522, radially inward toward the rotational axis of the shaft 522, and the like.

In FIG. 5A, the friction element 529 of the variable friction mechanism 524 is in an uncompressed configuration. In FIG. 5B, the friction element 529 is in a compressed configuration. As the friction element 529 is compressed in a direction that is substantially parallel to a rotational axis of the shaft 522, the compression causes the force applied by the friction element 529 to the shaft 522 to increase, thereby increasing the frictional force between the friction element and the shaft as the shaft rotates. In some cases, compressing the friction element 529 causes a surface area of the friction element 529 that contacts the shaft 522 to increase. For example, as shown in FIG. 5A, in an uncompressed configuration, a portion 528a of the surface of the friction element 529 may contact the shaft 522. As shown in FIG. 5B, in a compressed configuration, a portion 528b of the surface of the friction element 529 that is larger than the portion 528a may contact the shaft 522. This may increase the frictional force between the friction element 529 and the shaft 522. The friction element 529 may be formed of a synthetic rubber and fluoropolymer elastomer, silicone, or another compressible material.

As shown in FIGS. 5A and 5B, each of the actuators 525a and 525b includes a base portion 526a, 526b and a contact portion 527a, 527b. The base portion may include a motor or other mechanism that causes the contact portion to move relative to the base portion to compress the friction element 529. The actuators 525a and 525b may be any suitable type of actuator for compressing the friction element 529, including hydraulic actuators, pneumatic actuators, electric actuators, piezoelectric actuators, electroactive polymers, and the like. In some cases, the friction element 529 may be an actuator itself.
For example, the friction element 529 may be formed of an electroactive material that changes dimension to change the force exerted on the shaft 522 in response to a change in electrical current applied to the friction element. In various embodiments, the variable friction mechanism 524 may include any suitable number of actuators. In some cases, the electronic watch 500 includes one or more connectors 558a and 558b that operably couple the variable friction mechanism 524 to the processing unit 296.

As shown in FIGS. 5A and 5B, the collar 542 is disposed in an opening in the enclosure 216 and defines an opening through which the shaft 522 extends. As discussed above, in some embodiments, the variable friction mechanism 524 may be coupled and/or attached to the collar 542.

In some cases, the crown 521 includes a shaft retainer 536 to retain the crown 521 within the opening in the enclosure 216. In some cases, the shaft retainer 536 may be mechanically connected to the shaft 522, interior to the enclosure 216 (e.g., interior to a watch body housing), after the shaft is inserted through the opening in the enclosure 216 with the crown body 520 positioned exterior to the enclosure 216. In some cases, the shaft retainer 536 may include a nut, and the shaft 522 may have a threaded male portion that engages a threaded female portion of the nut. In some cases, the shaft retainer 536 may be conductive, or have a conductive coating thereon, and mechanical connection of the shaft retainer 536 to the shaft 522 may form an electrical connection between the shaft retainer 536 and the shaft 522. In some embodiments, the shaft retainer 536 may be integrally formed with the shaft 522, and the shaft 522 may be inserted through the opening in the enclosure 216 from inside the enclosure and then attached to the crown body 520 (e.g., the crown body 520 may screw onto the shaft 522).
As shown in FIGS. 5A and 5B, in some cases, the electronic watch 500 may include one or more gaskets, made of a synthetic rubber and fluoropolymer elastomer (e.g., Viton), silicone, or another compressible material, disposed between the collar 542, the enclosure 216, and/or the shaft 522 to provide stability to the collar and/or the shaft and/or to provide a moisture barrier between the collar, the enclosure, and/or the shaft. In some cases, the gaskets include one or more O-rings 552 or other gaskets disposed around the shaft 522. In some cases, the O-rings 552 may provide a seal between the shaft 522 and the collar 542. The O-rings 552 may also function as an electrical insulator between the shaft 522 and the collar 542. In some embodiments, the O-rings 552 may be fitted to recesses in the shaft 522. In some cases, the O-rings 552 may define a bearing surface between the shaft 522 and the electronic watch 500 (e.g., the collar 542). In some cases, the bearing surface contacts a surface of the collar 542 to stabilize the crown 521 and/or to facilitate consistent rotation of the crown 521. In some cases, the variable friction mechanism 524 (e.g., the friction element 529) may define an alternative or additional bearing surface between the shaft 522 and the electronic watch 500 (e.g., the collar 542). In some cases, the one or more O-rings 552 may cooperate with the variable friction mechanism 524 to stabilize the crown 521 and/or to facilitate consistent rotation of the crown 521.

FIG. 6 illustrates an example of a crown 621 installed in an electronic watch 600 having a variable friction mechanism 624 adapted to produce a variable frictional feedback as the crown is rotated, taken through section line A-A of FIG. 2A, as viewed from the front or rear face of a watch body. The crown 621 includes a crown body 620 and a shaft 622, and may have similar functionality and/or features as the crowns discussed herein (e.g., crown 121, crown 221, crown 521).
The electronic watch 600 may have similar components and/or functionality as the electronic watches described herein (e.g., electronic watches 100, 200, 500). As discussed above, in some cases, a variable friction mechanism 624 is positioned in the electronic watch 600 and adapted to produce a variable frictional feedback as the crown 621 is rotated by varying a frictional force between the crown and the variable friction mechanism. In some cases, the variable friction mechanism 624 may be disposed in the enclosure 216. The variable friction mechanism 624 may be operably coupled to the processing unit 296 via a connector 658.

FIG. 7 illustrates an example of the variable friction mechanism 624 taken through section line B-B of FIG. 6. FIG. 7 shows an example configuration of the variable friction mechanism 624. The variable friction mechanism 624 may include one or more friction elements 629a, 629b, and 629c that contact the shaft 622 and exert a frictional force on the crown as the crown is rotated. In some cases, the friction elements 629a-c are disposed around a periphery of the shaft 622 and cooperate to at least partially surround the shaft. The variable friction mechanism 624 may include one or more actuators 625a, 625b, and 625c that each apply a force to a respective friction element 629a-c to change the frictional force exerted on the crown 621 by the friction element.

As shown in FIG. 7, each of the friction elements 629a-c may be a cantilever beam structure having a fixed end 640 and a free end 642. The free end 642 may be adapted to contact the shaft 622 to create friction between the friction element and the shaft. The fixed end 640 may be fixed with respect to an outer structure 660. Each actuator 625a-c may expand to exert a force against a respective friction element 629a-c to cause the friction element to move and/or to change a force exerted on the shaft by the free end 642 of the friction element.
As shown in FIG. 7, the variable friction mechanism 624 may include flexures 635a, 635b, and 635c positioned between each friction element 629a-c and the outer structure 660. In various embodiments, the flexures 635a-c may stabilize the friction elements 629a-c and/or oppose movement of the friction elements. For example, each flexure 635a-c may oppose a force applied by a respective actuator 625a-c.

In some cases, the cantilever beam structure of the friction elements 629a-c may allow the free end 642 of the friction element to move more or less than a movement or dimensional change of an actuator. For example, an actuator 625 may move a first distance (or change length by the first distance) and cause a corresponding movement of a friction element 629 at the location of the actuator. Due to the beam structure and/or the flexure 635, the free end 642 of the friction element 629 may move a second, larger distance in response to the actuator moving the first distance. Similarly, the actuator 625 may exert a first force on the friction element 629 at the location of the actuator. Due to the beam structure and/or the flexure 635, the free end 642 of the friction element 629 may exert a second, smaller force on the shaft 622. In some cases, the actuators 625a-c may be piezoelectric actuators, but any suitable type of actuator may be used. In some cases, the friction elements 629a-c and the flexures 635a-c are integrally formed with one another. For example, all of the friction elements 629a-c and the flexures 635a-c may be arranged in an array and formed from a single part.

Returning to FIG. 6, in some cases, the crown 621 includes a shaft retainer 636 to retain the crown 621 within the opening in the enclosure 216, similar to the shaft retainer 536 discussed above with respect to FIGS. 5A and 5B. In some cases, a variable friction mechanism may be combined with a shaft retainer, for example as discussed below with respect to FIG. 8.
FIG. 8 illustrates an example of a crown 821 installed in an electronic watch 800 having a variable friction mechanism 824 adapted to produce a variable frictional feedback as the crown 821 is rotated, taken through section line A-A of FIG. 2A, as viewed from the front or rear face of a watch body. The crown 821 includes a crown body 820 and a shaft 822, and may have similar functionality and/or features as the crowns discussed herein (e.g., crown 121, crown 221, crown 521, crown 621). The electronic watch 800 may have similar components and/or functionality as the electronic watches described herein (e.g., electronic watches 100, 200, 500, 600).

As discussed above, in some cases, a variable friction mechanism 824 is positioned in the electronic watch 800 and adapted to produce a variable frictional feedback as the crown 821 is rotated by varying a frictional force between the crown and the variable friction mechanism. In some cases, the variable friction mechanism may be combined with a shaft retainer 836. The shaft retainer 836 may be similar to the shaft retainers 536 and 636 discussed above with respect to FIGS. 5A-6. In some cases, the shaft retainer 836 may have a conical cross-section along an outer surface, as shown in FIG. 8. The variable friction mechanism 824 may include a friction element 829 having a conical cross-section on an inner surface that is configured to contact the outer surface of the shaft retainer 836, as shown in FIG. 8. The variable friction mechanism 824 may include actuators 825a and 825b that are configured to change a force between the friction element 829 and the shaft retainer 836, thereby changing a frictional force between the friction element and the crown 821. For example, the actuators 825a-b may cause the friction element 829 to travel left and right with respect to FIG. 8 to change the force between the friction element and the shaft retainer 836. The variable friction mechanism 824 may be operably coupled to the processing unit 296 by a connector 828.
The processing unit 296 may provide signals to the variable friction mechanism 824 to change the friction between the friction element 829 and the shaft retainer 836. One or more changes in friction may be used to produce variable frictional feedback as discussed herein.

FIG. 9 illustrates an example electronic device 900 that may incorporate a variable friction mechanism for producing variable frictional feedback. The electronic device may include a crown or similar input devices 921a and 921b as described herein. The electronic device 900 is a portable electronic device such as a smartphone, tablet, portable media player, mobile device, or the like. The electronic device 900 includes an enclosure 950 at least partially surrounding a display 952, and one or more input devices 921a and 921b. The input devices 921a and 921b of the electronic device 900 may be similar to the crowns (e.g., crowns 121, 221, 521, 621, 821) discussed herein and may include similar structure and/or functionality. The electronic device 900 can also include one or more internal components (not shown) typical of a computing or electronic device, such as, for example, one or more processors, memory components, network interfaces, and so on.

The input devices 921a and 921b may be configured to control various functions and components of the electronic device, such as a graphical output of a display 952, an audio output, powering the electronic device on and off, and the like. An input device 921a-b may be configured, for example, as a power button, a control button (e.g., volume control), a home button, or the like.

The enclosure 950 provides a device structure, defines an internal volume of the electronic device 900, and houses device components. In various embodiments, the enclosure 950 may be constructed from any suitable material, including metals (e.g., aluminum, titanium, and the like), polymers, ceramics (e.g., glass, sapphire), and the like. In one embodiment, the enclosure 950 is constructed from multiple materials.
The enclosure 950 can form an outer surface or partial outer surface and protective case for the internal components of the electronic device 900, and may at least partially surround the display 952. The enclosure 950 can be formed of one or more components operably connected together, such as a front piece and a back piece. Alternatively, the enclosure 950 can be formed of a single piece operably connected to the display 952.

The display 952 can be implemented with any suitable technology, including, but not limited to, liquid crystal display (LCD) technology, light-emitting diode (LED) technology, organic light-emitting diode (OLED) technology, organic electroluminescence (OEL) technology, or another type of display technology. The display 952 provides a graphical output, for example associated with an operating system, user interface, and/or applications of the electronic device 900. In one embodiment, the display 952 includes one or more sensors and is configured as a touch-sensitive (e.g., single-touch, multi-touch) and/or force-sensitive display to receive inputs from a user. In various embodiments, a graphical output of the display 952 is responsive to inputs provided to the input devices 921a-b.

FIG. 10 illustrates an example electronic device 1000 that may incorporate a variable friction mechanism for producing variable frictional feedback. The electronic device 1000 may incorporate a crown or similar input device, and is configured as a wearable audio device. The electronic device 1000 is similar to the electronic devices discussed herein (e.g., electronic watches 100, 200, 500, 600, 800), and may include similar features and/or components. The electronic device 1000 is depicted as headphones. In some embodiments, the headphones 1000 include enclosures 1010a and 1010b coupled by a headband 1080. The headphones 1000 include one or more input devices (e.g., input devices 1021a and 1021b) coupled to the enclosures 1010.
The input devices1021aand1021bof the headphones1000may be similar to the crowns (e.g., crowns121,221,521,621,821) discussed herein and may include similar structure and/or functionality. In some embodiments, each of the enclosures1010is configured to interface with the head and/or ear of a user to provide audio outputs to the user. Each enclosure may include an audio output element1001. In various embodiments, an audio output of the audio output element(s) is responsive to inputs provided to the input devices1021a-b. The headband1080may be used to secure the headphones1000to the head of the user. FIG.11illustrates an example electronic device1100that may incorporate a variable friction mechanism for producing variable frictional feedback, configured as a wearable device. The electronic device1100is similar to the electronic devices discussed herein (e.g., electronic watch100), and may include similar features and/or components. The electronic device1100is depicted as a virtual reality device. In some embodiments, the virtual reality device1100includes an enclosure1110and a headband1180. The virtual reality device1100includes one or more input devices (e.g., input devices1121aand1121b) coupled to the enclosure1110. The input devices1121aand1121bof the virtual reality device1100may be similar to the crowns (e.g., crowns121,221,521,621,821) discussed herein and may include similar structure and/or functionality. In some embodiments, the enclosure1110is configured to interface with the head and/or ears of a user to provide visual and/or audio outputs to the user. The enclosure may include one or more displays and/or audio output elements. In various embodiments, a graphical output of the display(s) is responsive to inputs provided to the input devices1121a-b. In various embodiments, an audio output of the audio output element(s) is responsive to inputs provided to the input devices1121a-b. 
The headband1180may be used to secure the virtual reality device1100to the head of the user. FIG.12Adepicts an example electronic device1221(shown here as an electronic watch) that may incorporate a variable friction mechanism for producing variable frictional feedback. The electronic device includes a crown1202. The crown1202may be similar to the examples described above, and may receive force inputs along a first lateral direction, a second lateral direction, or an axial direction of the crown. The crown1202may also receive rotational inputs, for example at an outer crown body. A display1206provides a graphical output (e.g., shows information and/or other graphics). In some embodiments, the display1206may be configured as a touch-sensitive display capable of receiving touch and/or force input. In the current example, the display1206depicts a list of various items1261,1262,1263, all of which are example indicia. FIG.12Billustrates how the graphical output shown on the display1206changes as the crown1202rotates, partially or completely (as indicated by the arrow1260). Rotating the crown1202causes the list to scroll or otherwise move on the screen, such that the first item1261is no longer displayed, the second and third items1262,1263each move upwards on the display, and a fourth item1264is now shown at the bottom of the display. This is one example of a scrolling operation that can be executed by rotating the crown1202. Such scrolling operations may provide a simple and efficient way to depict multiple items relatively quickly and in sequential order. A speed of the scrolling operation may be controlled by the amount of rotational force applied to the crown1202and/or the speed at which the crown1202is rotated. Faster or more forceful rotation may yield faster scrolling, while slower or less forceful rotation yields slower scrolling. 
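The rotation-to-scroll-speed relationship described above can be illustrated with a minimal sketch. The function name, linear gain, and speed cap below are assumptions for illustration only; the disclosure does not specify any particular mapping.

```python
def scroll_rows_per_second(angular_speed_deg_s: float,
                           gain: float = 0.25,
                           max_speed: float = 50.0) -> float:
    """Map crown angular speed (degrees/second) to a list scroll speed.

    Faster rotation yields proportionally faster scrolling, capped at
    max_speed rows/second. The linear gain and cap are illustrative; a
    real device might use a nonlinear curve or a momentum model instead.
    """
    return min(abs(angular_speed_deg_s) * gain, max_speed)
```

Scrolling direction would be taken from the sign of the rotation; only the magnitude sets the speed in this sketch.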
The crown1202may receive an axial force (e.g., a force inward toward the display1206or watch body) to select an item from the list, in certain embodiments. In various embodiments, the electronic device1221may provide variable frictional feedback using the crown1202, as discussed herein. In some cases, the variable frictional feedback corresponds to changes to the graphical output, such as those described with respect toFIGS.12A and12B. For example, the variable frictional feedback may simulate an analog detent corresponding to each new item that moves onto the screen. Similarly, as discussed above, the variable frictional feedback may indicate a length of the list and/or a position with respect to a beginning or end of the list. In some cases, the variable frictional feedback may correspond to states of the electronic device and/or transitions between states of the electronic device. For example, in a first state, a first selectable element (e.g., item1261) may be selected or highlighted. In a second state, a second selectable element (e.g., item1262) may be selected or highlighted. The crown1202may be used to transition between the first and second states, and the variable frictional feedback may be produced during the transition from the first state to the second state. FIGS.13A and13Billustrate an example zoom operation. The display1306depicts a picture1366at a first magnification, shown inFIG.13A; the picture1366is yet another example of an indicium. A user may apply a lateral force (e.g., a force along the x-axis) to the crown1302of the electronic device1300(illustrated by arrow1365), and in response the display may zoom into the picture1366, such that a portion1367of the picture is shown at an increased magnification. This is shown inFIG.13B. The direction of zoom (in vs. 
out) and speed of zoom, or location of zoom, may be controlled through force applied to the crown1302, and particularly through the direction of applied force and/or magnitude of applied force. Applying force to the crown1302in a first direction may zoom in, while applying force to the crown1302in an opposite direction may zoom out. Alternately, rotating or applying force to the crown1302in a first direction may change the portion of the picture subject to the zoom effect. In some embodiments, applying an axial force (e.g., a force along the z-axis) to the crown1302may toggle between different zoom modes or inputs (e.g., direction of zoom vs. portion of picture subject to zoom). In yet other embodiments, applying force to the crown1302along another direction, such as along the y-axis, may return the picture1366to the default magnification shown inFIG.13A. In various embodiments, the electronic device1321may provide variable frictional feedback using the crown1302, as discussed herein. In some cases, the variable frictional feedback corresponds to changes to the zoom as described with respect toFIGS.13A and13B. For example, the variable frictional feedback may indicate various zoom levels (e.g., using simulated detents) such as 1×, 10×, 100×, etc. Similarly, as discussed above, the variable frictional feedback may indicate the current zoom level in relation to a maximum or minimum zoom level. FIGS.14A and14Billustrate possible use of the crown1402to change an operational state of the electronic device1400, provide a user interface command, transition between modes, or otherwise toggle between inputs. Turning first toFIG.14A, the display1406depicts a question1468, namely, “Would you like directions?” As shown inFIG.14B, a lateral force may be applied to the crown1402(illustrated by arrow1470) to answer the question. 
Applying force to the crown1402provides an input interpreted by the electronic device1400as “yes,” and so “YES” is displayed as a graphic1469on the display1406. Applying force to the crown1402in an opposite direction may provide a “no” input. Both the question1468and graphic1469are examples of indicia. In the embodiment shown inFIGS.14A and14B, the force applied to the crown1402is used to directly provide the input, rather than select from options in a list (as discussed above with respect toFIGS.12A and12B). In various embodiments, the electronic device1421may provide variable frictional feedback using the crown1402, as discussed herein. In some cases, the variable frictional feedback corresponds to changes to the operational state of the electronic device1400described with respect toFIGS.14A and14B. In some cases, the variable frictional feedback corresponds to an operational state at the electronic device. For example, in a first operational state, a first variable frictional feedback may be provided, and in a second operational state, a second variable frictional feedback may be provided. In various embodiments, operational states may include graphical output modes of the user interface of the electronic device, such as scrollable pages or lists, images, videos, user interfaces made up of graphical objects or icons, and the like. For example, in a first graphical output mode, for example in which a scrollable list is displayed, a first variable frictional feedback may be provided, and in a second graphical output mode, for example in which an image is displayed, a second variable frictional feedback may be provided. In some cases, operational states may include applications or commands executing on the electronic device. For example, in a user interface for a first application, a first variable frictional feedback may be provided, and in a user interface for a second application, a second variable frictional feedback may be provided. 
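One way to realize the per-state feedback described above is a simple lookup from operational state (or graphical output mode) to a feedback profile. The state names and profile fields below are invented for illustration and are not taken from the disclosure.

```python
# Hypothetical feedback profiles keyed by operational state; the variable
# friction mechanism would be driven according to the selected profile.
FEEDBACK_PROFILES = {
    "scrollable_list": {"pattern": "detents", "strength": 0.8},
    "image_view":      {"pattern": "smooth",  "strength": 0.3},
    "first_app":       {"pattern": "clicks",  "strength": 0.6},
}

def feedback_for_state(state: str) -> dict:
    """Select the variable frictional feedback profile for a device state."""
    # States without a specific mapping fall back to a neutral profile.
    return FEEDBACK_PROFILES.get(state, {"pattern": "none", "strength": 0.0})
```

A transition between states (e.g., selecting a different list item) would then swap profiles, producing distinct feedback during the transition.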
As mentioned previously, force or rotational input to a crown of an electronic device may control many functions beyond those listed here. The crown may receive distinct force or rotational inputs to adjust a volume of an electronic device, a brightness of a display, or other operational parameters of the device. A force or rotational input applied to the crown may turn a display on or off, or turn the device on or off. A force or rotational input to the crown may launch or terminate an application on the electronic device. Further, combinations of inputs to the crown may likewise initiate or control any of the foregoing functions. In some cases, the graphical output of a display may be responsive to inputs applied to a touch-sensitive display (e.g., displays1206,1306,1406, and the like) in addition to inputs applied to a crown. The touch-sensitive display may include or be associated with one or more touch and/or force sensors that extend along an output region of a display and which may use any suitable sensing elements and/or sensing techniques to detect touch and/or force inputs applied to the touch-sensitive display. The same or similar graphical output manipulations that are produced in response to inputs applied to the crown may also be produced in response to inputs applied to the touch-sensitive display. For example, a swipe gesture applied to the touch-sensitive display may cause the graphical output to move in a direction corresponding to the swipe gesture. As another example, a tap gesture applied to the touch-sensitive display may cause an item to be selected or activated. In this way, a user may have multiple different ways to interact with and control an electronic watch, and in particular the graphical output of an electronic watch. 
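The idea that crown inputs and touch gestures can drive the same graphical output manipulation can be sketched as a common handler fed by either input source. The event fields and scale factors below are illustrative assumptions, not part of the disclosure.

```python
def scroll_offset_delta(event: dict) -> int:
    """Translate either a crown rotation or a swipe into a scroll delta.

    Both input paths converge on the same graphical output manipulation;
    the per-source scale factors are invented for illustration.
    """
    if event.get("source") == "crown":
        return int(event.get("rotation_deg", 0) // 10)   # 10 degrees per row
    if event.get("source") == "touch" and event.get("gesture") == "swipe":
        return int(event.get("distance_px", 0) // 40)    # 40 pixels per row
    return 0
```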
Further, while the crown may provide overlapping functionality with the touch-sensitive display, using the crown allows for the graphical output of the display to be visible (without being blocked by the finger that is providing the touch input). FIG.15shows a sample electrical block diagram of an electronic device1500that may incorporate a variable friction mechanism for producing variable frictional feedback. The electronic device may in some cases take the form of any of the electronic watches or other wearable electronic devices described with reference toFIGS.1-14, or other portable or wearable electronic devices. The electronic device1500can include a display1505(e.g., a light-emitting display), a processing unit1510, a power source1515, a memory1520or storage device, a sensor1525, an input device1530(e.g., a crown), and an output device1532(e.g., a crown, a variable friction mechanism). The processing unit1510can control some or all of the operations of the electronic device1500. The processing unit1510can communicate, either directly or indirectly, with some or all of the components of the electronic device1500. For example, a system bus or other communication mechanism1535can provide communication between the processing unit1510, the power source1515, the memory1520, the sensor1525, and the input device(s)1530and the output device(s)1532. The processing unit1510can be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions. For example, the processing unit1510can be a microprocessor, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), or combinations of such devices. As described herein, the term “processing unit” is meant to encompass a single processor or processing unit, multiple processors, multiple processing units, or other suitably configured computing element or elements. 
It should be noted that the components of the electronic device1500can be controlled by multiple processing units. For example, select components of the electronic device1500(e.g., a sensor1525) may be controlled by a first processing unit and other components of the electronic device1500(e.g., the display1505) may be controlled by a second processing unit, where the first and second processing units may or may not be in communication with each other. In some cases, the processing unit1510may determine a biological parameter of a user of the electronic device, such as an ECG for the user. The power source1515can be implemented with any device capable of providing energy to the electronic device1500. For example, the power source1515may be one or more batteries or rechargeable batteries. Additionally or alternatively, the power source1515can be a power connector or power cord that connects the electronic device1500to another power source, such as a wall outlet. The memory1520can store electronic data that can be used by the electronic device1500. For example, the memory1520can store electrical data or content such as, for example, audio and video files, documents and applications, device settings and user preferences, timing signals, control signals, and data structures or databases. The memory1520can be configured as any type of memory. By way of example only, the memory1520can be implemented as random access memory, read-only memory, Flash memory, removable memory, other types of storage elements, or combinations of such devices. The electronic device1500may also include one or more sensors1525positioned almost anywhere on the electronic device1500. The sensor(s)1525can be configured to sense one or more types of parameters, such as but not limited to, pressure, light, touch, heat, movement, relative motion, biometric data (e.g., biological parameters), and so on. 
For example, the sensor(s)1525may include a heat sensor, a position sensor, a light or optical sensor, an accelerometer, a pressure transducer, a gyroscope, a magnetometer, a health monitoring sensor, and so on. Additionally, the one or more sensors1525can utilize any suitable sensing technology, including, but not limited to, capacitive, ultrasonic, resistive, optical, ultrasound, piezoelectric, and thermal sensing technology. In some examples, the sensors1525may include one or more of the electrodes described herein (e.g., one or more electrodes on an exterior surface of a cover sheet that forms part of an enclosure for the electronic device1500and/or an electrode on a crown body, button, or other housing member of the electronic device). In various embodiments, the display1505provides a graphical output, for example associated with an operating system, user interface, and/or applications of the electronic device1500. In one embodiment, the display1505includes one or more sensors and is configured as a touch-sensitive (e.g., single-touch, multi-touch) and/or force-sensitive display to receive inputs from a user. For example, the display1505may be integrated with a touch sensor (e.g., a capacitive touch sensor) and/or a force sensor to provide a touch- and/or force-sensitive display. The display1505is operably coupled to the processing unit1510of the electronic device1500. The display1505can be implemented with any suitable technology, including, but not limited to liquid crystal display (LCD) technology, light emitting diode (LED) technology, organic light-emitting display (OLED) technology, organic electroluminescence (OEL) technology, or another type of display technology. In some cases, the display1505is positioned beneath and viewable through a cover sheet that forms at least a portion of an enclosure of the electronic device1500. In various embodiments, the input devices1530may include any suitable components for detecting inputs. 
Examples of input devices1530include audio sensors (e.g., microphones), optical or visual sensors (e.g., cameras, visible light sensors, or invisible light sensors), proximity sensors, touch sensors, force sensors, mechanical devices (e.g., crowns, switches, buttons, or keys), vibration sensors, orientation sensors, motion sensors (e.g., accelerometers or velocity sensors), location sensors (e.g., global positioning system (GPS) devices), thermal sensors, communication devices (e.g., wired or wireless communication devices), resistive sensors, magnetic sensors, electroactive polymers (EAPs), strain gauges, electrodes, and so on, or some combination thereof. Each input device1530may be configured to detect one or more particular types of input and provide a signal (e.g., an input signal) corresponding to the detected input. The signal may be provided, for example, to the processing unit1510. As discussed above, in some cases, the input device(s)1530include a touch sensor (e.g., a capacitive touch sensor) integrated with the display1505to provide a touch-sensitive display. Similarly, in some cases, the input device(s)1530include a force sensor (e.g., a capacitive force sensor) integrated with the display1505to provide a force-sensitive display. In some cases, the input devices1530include a set of one or more electrodes. An electrode may be a conductive portion of the device1500that contacts or is configured to be in contact with a user. The electrodes may be disposed on one or more exterior surfaces of the device1500, including a surface of an input device1530(e.g., a crown), a device enclosure, and the like. The processing unit1510may monitor for voltages or signals received on at least one of the electrodes. In some embodiments, one of the electrodes may be permanently or switchably coupled to a device ground. The electrodes may be used to provide an electrocardiogram (ECG) function for the device1500. 
For example, a 2-lead ECG function may be provided when a user of the device1500contacts first and second electrodes that receive signals from the user. As another example, a 3-lead ECG function may be provided when a user of the device1500contacts first and second electrodes that receive signals from the user, and a third electrode that grounds the user to the device1500. In both the 2-lead and 3-lead ECG embodiments, the user may press the first electrode against a first part of their body and press the second electrode against a second part of their body. The third electrode may be pressed against the first or second body part, depending on where it is located on the device1500. In some cases, an enclosure of the device1500may function as an electrode. In some cases, input devices, such as buttons, crowns, and the like, may function as an electrode. The output devices1532may include any suitable components for providing outputs. Examples of output devices1532include audio output devices (e.g., speakers), visual output devices (e.g., lights or displays), tactile output devices (e.g., haptic output devices), communication devices (e.g., wired or wireless communication devices), and so on, or some combination thereof. Each output device1532may be configured to receive one or more signals (e.g., an output signal provided by the processing unit1510) and provide an output corresponding to the signal. In some cases, input devices1530and output devices1532are implemented together as a single device. For example, an input/output device or port can transmit electronic signals via a communications network, such as a wireless and/or wired network connection. Examples of wireless and wired network connections include, but are not limited to, cellular, Wi-Fi, Bluetooth, IR, and Ethernet connections. The processing unit1510may be operably coupled to the input devices1530and the output devices1532. 
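The 2-lead and 3-lead ECG configurations discussed above could be selected based on which electrodes the user is contacting; a minimal sketch follows, with invented names, since the disclosure does not specify any selection logic.

```python
def ecg_mode(first_contact: bool, second_contact: bool,
             ground_contact: bool) -> str:
    """Pick an ECG configuration from electrode contact states.

    Two signal electrodes permit a 2-lead measurement; a third electrode
    that grounds the user to the device permits a 3-lead measurement.
    """
    if first_contact and second_contact:
        return "3-lead" if ground_contact else "2-lead"
    return "unavailable"  # both signal electrodes are required
```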
The processing unit1510may be adapted to exchange signals with the input devices1530and the output devices1532. For example, the processing unit1510may receive an input signal from an input device1530that corresponds to an input detected by the input device1530. The processing unit1510may interpret the received input signal to determine whether to provide and/or change one or more outputs in response to the input signal. The processing unit1510may then send an output signal to one or more of the output devices1532, to provide and/or change outputs as appropriate. As described above, one aspect of the present technology is the gathering and use of data available from various sources to provide variable frictional feedback, electrocardiograms, and the like. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter ID's, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information. The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to provide electrocardiograms to the user and/or variable frictional feedback that is tailored to the user. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals. 
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country. 
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of variable frictional feedback and electrocardiograms or other biometrics, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app. Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods. 
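The de-identification techniques listed above (removing specific identifiers and coarsening location data) can be sketched as follows; the record fields are hypothetical and not drawn from the disclosure.

```python
def deidentify(record: dict) -> dict:
    """Return a copy of a record with specific identifiers removed and
    location coarsened from an address level to a city level."""
    cleaned = dict(record)
    for field in ("name", "date_of_birth", "address"):
        cleaned.pop(field, None)          # drop specific identifiers
    location = cleaned.get("location")
    if isinstance(location, dict):
        cleaned["location"] = location.get("city")  # keep city level only
    return cleaned
```

Aggregating data across users, the remaining technique mentioned above, would be a separate step applied to many cleaned records.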
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, variable frictional feedback may be provided based on non-personal information data or a bare minimum amount of personal information, such as events or states at the device associated with a user, other non-personal information, or publicly available information. The foregoing description, for purposes of explanation, uses specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings. 
In one or more implementations, not all of the depicted components in each figure may be required, and one or more implementations may include additional components not shown in a figure. Variations in the arrangement and type of the components may be made without departing from the scope of the subject disclosure. Additional components, different components, or fewer components may be utilized within the scope of the subject disclosure. DETAILED DESCRIPTION The detailed description set forth below is intended as a description of various implementations and is not intended to represent the only implementations in which the subject technology may be practiced. As those skilled in the art would realize, the described implementations may be modified in various different ways, all without departing from the scope of the present disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive. The following disclosure relates to identification of a band for use with an electronic device. The identification can serve as an input to initiate actions performed by the electronic device. For example, a type, model, color, size, or other characteristic of a band can be determined and used to select a corresponding action performed by the electronic device. A watch can include a watch body and a band for securing the electronic device to the user. In many traditional devices, the electronic device does not interact with or even identify the band that is used in conjunction with the electronic device. In other devices, the band may provide certain functionality to supplement the functionality of the electronic device. However, such bands often require a power source, such as from an integrated battery or from the battery of the electronic device. Furthermore, such bands often require a robust communication link with the electronic device for bidirectional communication. 
These features impose significant design considerations that increase the cost and complexity of the parts. In contrast to traditional devices, the band identification capabilities described herein provide simple and elegant solutions that allow an electronic device to readily identify a band. In some embodiments of the present disclosure, identification of the band can be achieved by a variety of mechanisms. For example, identification of the band can be performed by components of the electronic device that also serve other purposes. Existing sensors, communication elements, and/or detectors can be used to detect and identity a band provided to the electronic device. Accordingly, identification of a band with the electronic device can be achieved without adding dedicated components to the electronic device. Furthermore, identification can be achieved without sacrificing power to the band and without requiring a bidirectional communication link with the band. A selection of a certain band can influence operation of the electronic device in a variety of ways. For example, the electronic device can respond to the identification of a particular band by performing particular functions, such as changing an aspect of a user interface or altering settings of the electronic device. Such functions can be readily executed by the electronic device upon identification of the band, such that user input is not required. Accordingly, a user's experience with the electronic device can be enhanced based on the user's selection of a particular band. These and other embodiments are discussed below with reference toFIGS.1-15. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes only and should not be construed as limiting. FIG.1illustrates a perspective view of a watch10. The watch10can include a watch body100and a band110. 
As shown, the watch body100includes a housing102that supports a display104. The watch body100can be worn on a user's wrist and secured thereto by the band110. The band110includes lugs112at opposing ends of the band that fit within respective recesses or channels114of the housing102and allow the band110to be removably attached to the housing102. The lugs112may be part of the band110or may be separable (and/or separate) from the band110. Generally, the lugs112may lock into the channels114and thereby maintain connection between the band110and the housing102. The user may release a locking mechanism (not shown) to permit the lugs112to slide or otherwise move out of the channels114. In some watches, the channels114may be formed in the band110and the lugs may be affixed or incorporated into the housing102. While lugs112and channels114are illustrated, it will be recognized that other attachment elements, such as locks, snaps, clasps, threads, and pins can be included on the band110for securely attaching to the watch body100. As shown inFIG.2, the band110can include an identification element190that is detectable by one or more components of the watch body100. Features of the identification element190can be selected to achieve detection by the watch body100, as described further herein. For example, the identification element190can include a feature on a surface of the band110and/or be embedded within the structure of the band110. The identification element190can be positioned on or along any portion of the band110to facilitate detection. For example, the identification element190can be located near an end of the band110(e.g., at or near a lug112). Alternatively or in combination, the identification element190can be located on a side of the watch10that is opposite the watch body100. The identification element190can be used to determine information about the band110, such as a type, characteristic, feature, or identity of the band110. 
Subsequent actions by the watch body100can correspond to the determined information. As further shown inFIG.2, the watch body100can include components that support the operations thereof. Such operations can include identification of a band110, actions based on the identification, and other operations that are independent of the identification. In some embodiments, components used for operations independent of the identification of the band110can also be used for identification. Such components are described below with reference toFIG.2. In some embodiments, as shown inFIG.2, the watch body100includes a processor150, memory152, a power source154, and/or a charger156for providing power to the power source154. The processor150can control or coordinate some or all of the operations of the watch body100. The processor150can communicate, either directly or indirectly, with substantially all of the components of the watch body100. For example, a system bus or signal line or other communication mechanisms can provide communication among the processor150, the memory152, the power source154, and other components. The processor150can be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions. As described herein, the term “processor” is meant to encompass a single processor or processing unit, multiple processors, multiple processing units, or other suitably configured computing element or elements. The memory152can store electronic data that can be used by the watch body100. For example, a memory can store electrical data or content such as, for example, audio and video files, documents and applications, device settings and user preferences, timing and control signals or data for the haptic device184, data structures or databases, and so on. The memory152can be configured as any type of memory. 
By way of example only, the memory can be implemented as random access memory, read-only memory, Flash memory, removable memory, or other types of storage elements, or combinations of such devices. A power source154can be implemented with any device capable of providing energy to the watch body100. For example, the power source154can be a battery and/or a connection cable that connects the charger156to another power source such as a wall outlet. In other examples, wireless power can be used. In some embodiments, as shown inFIG.2, the watch body100can include components for interacting with a user. In some embodiments, the watch body100includes a display104, a speaker180, a microphone182, and/or a haptic device184. The display104may provide an image or video output for the watch body100. The display104may also provide an input surface for one or more input devices such as a touch sensing device, force sensing device, temperature sensing device, and/or a fingerprint sensor. The display104may be any size suitable for inclusion at least partially within the housing102of the watch body100and may be positioned substantially anywhere on the watch body100. Other input devices can be provided for operation by a user. For example, one or more buttons, dials, crowns, switches, or other devices can be provided for receiving input from a user. The haptic device184can be implemented as any suitable device configured to provide force feedback, vibratory feedback, tactile sensations, and the like. For example, in one embodiment, the haptic device184may be implemented as a linear actuator configured to provide a punctuated haptic feedback, such as a tap or a knock. In some embodiments, as shown inFIG.2, the watch body100can include components that facilitate detection of an identification element190, among other functions. 
In some embodiments, the watch body100includes a sensor124(e.g., biometric sensors, environmental sensors, etc.), a communication element160, and/or a detector170. As used herein, “a sensor” can include or be operably connected to any component that is capable of facilitating detection of an identification element190. A sensor can include or be operably connected to the sensor124, the communication element160, and/or the detector170. As described herein, components of the watch body100can be used as sensors for detection of an identification element190, yet also have other functions apart from detection of the identification element190. The watch body100can include a compass122. The compass122can include a magnetometer for detecting a presence and direction of a magnetic field. The compass122can be configured to detect a magnetic field of the Earth, and thereby provide information that can be used to determine the orientation of the watch body100with respect to magnetic poles of the Earth. The compass122can also be operated to detect magnetic fields from other sources, such as the identification element190of the band110, as discussed further herein. The watch body100may also include one or more sensors124positioned substantially anywhere on the watch body100. The sensor or sensors124may be configured to sense substantially any type of characteristic such as, but not limited to, images, pressure, light, touch, force, temperature, position, motion, and so on. For example, the sensor(s)124may be a photodetector, a temperature sensor, a light or optical sensor, an atmospheric pressure sensor, a humidity sensor, a magnet, a gyroscope, an accelerometer, and so on. In other examples, the watch body100may include one or more health sensors. In some examples, the health sensors can be disposed on a bottom surface of the housing of the watch body100, as discussed further herein. Other sensors124or detectors170can be provided with similar or different functionality. 
The communication element160can facilitate transmission of data to or from other electronic devices across standardized or proprietary protocols. For example, a communication element160can transmit electronic signals via a wireless and/or wired network connection. Examples of wireless and wired network connections include, but are not limited to, cellular, Wi-Fi, Bluetooth, infrared, RFID, and Ethernet. The communication element160can communicate with or sense the band110or another device, as described further herein. Referring now toFIGS.3-5, the watch body100can detect an identification element190of a band110by operating one or more detectors170. As shown inFIGS.3-5, a lug112can include an identification element190. The detector170of the watch body100can be located at or near the channel114into which the lug112is received. As shown inFIGS.4and5, as the lug112is inserted into the channel114, the identification element190is brought into alignment with and close proximity to the detector170. While the identification element190is shown inFIGS.3-5as being at the lug112, it will be recognized that the identification element190can be located at other positions. For example, the identification element190can be at any location at, on, or within the band110, including any distance away from the lug112. In some embodiments, a detector170can be used to detect the identification element190before, during, and/or after the band110is attached to the watch body100. For example, the detector170can be operated to detect the identification element190of the band110while the band is not connected to the watch body (e.g., with the lug112not inserted into the channel114), as shown inFIG.3. By further example, the detector170can be operated to detect the identification element190of the band110while the band is being connected to the watch body (e.g., while the lug112is moving into the channel114), as shown inFIG.4. 
By further example, the detector170can be operated to detect the identification element190of the band110after the band is connected to the watch body (e.g., with the lug112inserted into the channel114), as shown inFIG.5. In some embodiments, the detector170can include one or more contact pins within the channel114for providing an electrically conductive pathway to the identification element190. Multiple pins can be provided to conduct power, provide a connection to ground, and transmit signals. The pins of the detector170can retract within the channel114to accommodate passage of the lug112. In some embodiments, the detector170can optically sense the identification element190on the lug112or another portion of the band110. A light source can be provided to facilitate optical sensing by the detector170. For example, the detector can include or be accompanied by a light source that illuminates the identification element190, and the detector can optically detect the light reflected off of the identification element190, as discussed further herein. For example, the band110can be positioned so that the identification element190is within a light path of a light source and within a field of view of the detector170. Light emitted from the light source can be reflected off of the identification element190. For example, the identification element190can include a pattern on the band110that reflects the wavelength(s) of light emitted from the light source. The light can be infrared light, visible light, or another wavelength value or range. Where the identification element190reflects light outside of the visible spectrum, it can be non-visible to a user. For example, the identification element190can include ultraviolet-reflective ink. As such, the identification element190can provide identification capabilities without being noticeable by a user. In some embodiments, the detector170can directly detect an intrinsic characteristic of a band110. 
For example, various bands110can be of different materials, constructions, textures, and/or colors. The detector170can distinguish one or more characteristics of a given band110from those of another band110. The detector170can optically detect certain characteristics, such as color and reflectivity, of a band110and identify the band110based on whether it satisfies expected criteria relating to the detected characteristics. For example, the detector170can distinguish the color and reflectivity of a stainless steel band from the color and reflectivity of a brown leather band. Thereby, the watch body100can identify each of the bands110and perform corresponding actions. In some embodiments, the detector170includes a magnetic field sensor (e.g., compass, magnetometer, Hall Effect sensor, etc.) and the identification element190can include one or more magnets. For example, the detector170can detect the magnitude, orientation, or other characteristic of a magnetic field emitted by the identification element190. The detected characteristic can have a distinct signature that is unique to the identification element190. Thereby, the watch body100can identify the band110and perform corresponding actions. By further example, where multiple magnets are included, the identification element190can include an arrangement of the magnets (e.g., different north-south orientations) that is distinct from the arrangement of another identification element190of a different band110. The magnets can be arranged across the lug112, such that insertion of the lug112into the channel114allows each of the magnets to pass across the detector170. Such action can automatically activate sensing by the detector170. The detector170can detect each of the magnets and determine an arrangement (e.g., north-south) thereof based on the magnetic field of each magnet. The combined arrangement can have a distinct signature that is unique to the identification element190. 
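The multi-magnet arrangement can be illustrated with a brief sketch in which each magnet's detected north-south orientation contributes one symbol to a signature that is looked up in a table of known bands. The encoding, threshold, and band names below are hypothetical illustrations, not details from the disclosure.

```python
# Hypothetical sketch: decode a band signature from magnet polarities
# read as the lug slides across a magnetic field sensor.

# Each entry maps a polarity sequence (one N/S symbol per magnet)
# to a band identity. These signatures are illustrative only.
KNOWN_SIGNATURES = {
    ("N", "S", "N", "N"): "sport band",
    ("S", "S", "N", "S"): "leather band",
}

def decode_polarities(field_samples, threshold=0.0):
    """Convert signed field readings (one per magnet) to N/S symbols."""
    return tuple("N" if sample > threshold else "S" for sample in field_samples)

def identify_band(field_samples):
    """Return the band matching the detected polarity sequence, if any."""
    signature = decode_polarities(field_samples)
    return KNOWN_SIGNATURES.get(signature)  # None if the signature is unknown

print(identify_band([0.8, -0.6, 0.7, 0.9]))  # prints "sport band"
```

An unrecognized polarity sequence simply returns no match, so the watch can fall back to default behavior when a band's signature is not on record.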
Thereby, the watch body100can identify the band110and perform corresponding actions. In some embodiments, the detector170of the watch can include or operate in concert with a compass of the watch. For example, the same compass that is operated to detect a magnetic field of the Earth can be operated to detect the identification element190. The presence of a magnetic field from the identification element190can be detected by the same compass that is then calibrated to detect the magnetic field of the Earth. Likewise, the removal and/or absence of the magnetic field from the identification element190can be detected by the same compass that is then calibrated to detect the magnetic field of the Earth. In some embodiments, the watch body100can detect an identification element190of a band110by using a detector that applies a communication protocol. Wireless or wired communication can be performed, at least in part, by a detector170that includes or operates in concert with a communication element (e.g., communication element160) of the watch body100. Communication between the band110and the watch body100can employ a short-range communication method, such as near field communication (“NFC”), radio-frequency identification (“RFID”), Bluetooth, Wi-Fi, Wi-Fi Direct, short-range 802.11, and high frequency focused beams such as 60 GHz. Alternatively or additionally, communication between the band110and the watch body100can employ a high frequency communication method, such as WirelessHD, WiGig, and Wi-Fi IEEE 802.11ad. Alternatively or additionally, communication between the band110and the watch body100can be with ultra-wideband (“UWB”), using low energy levels for short-range, high-bandwidth communications over a large portion of the radio spectrum (e.g., >500 MHz). For example, the watch body100can be placed near the band110. The watch body100and the band110can include a wireless system that is configured to enable one-way or two-way communications. 
The one- or two-way communication may include an identification of the band110and the watch body100to initiate a data connection between the two devices. The user initiates a communication between the watch body100and the band110by placing the watch body100near the identification element190(e.g., a tag). In some embodiments, the watch body100is configured to automatically detect the presence of the identification element190and initiate an identification process or routine. The system may include a unique identifier or signature that may be used to authenticate the identity of the band110. In some embodiments, the detector170includes an antenna and the identification element190can include one or more features that reflect radiation. For example, the detector170can emit electromagnetic radiation (e.g., RF, WiFi, UWB, EHF, mmWave, etc.). The housing or other component of the watch body100can provide transmission of such radiation between the detector170and the identification element190. For example, the housing can provide a window that transmits radiation to and/or from the channel114and the lug112. The identification element190can include a surface, coating, or other feature that reflects the radiation. Such reflection can be passive or actively managed. The identification element190can be tuned to a resonant frequency, such that the reflection occurs at a particular frequency that is detectable by the detector170. The reflection of the radiation can be detected by the detector170and/or by another component capable of detecting such activity. The reflected radiation can be distinguished from the original emission and/or other sources of radiation to determine the presence or absence of the band110. The reflected radiation can have a distinct signature that is unique to the identification element190. Thereby, the watch body100can identify the band110and perform corresponding actions. 
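The resonant-reflection approach can be sketched as a frequency sweep: the detector emits across a range, finds the frequency at which reflected power peaks, and matches that peak against resonances registered for known identification elements. The frequencies, tolerance, and band names below are hypothetical.

```python
# Hypothetical sketch: identify a band by the resonant frequency at which
# its identification element reflects an emitted sweep most strongly.

# Registered resonant frequencies (GHz) for known identification elements.
RESONANCES = {60.0: "metal link band", 61.5: "woven band"}

def peak_frequency(sweep):
    """Return the emitted frequency with the strongest reflected power.

    `sweep` maps emitted frequency (GHz) -> measured reflected power.
    """
    return max(sweep, key=sweep.get)

def identify_band(sweep, tolerance=0.2):
    """Match the reflection peak to a registered resonance, if close enough."""
    peak = peak_frequency(sweep)
    for resonance, band in RESONANCES.items():
        if abs(peak - resonance) <= tolerance:
            return band
    return None  # reflection does not match any known element

sweep = {59.5: 0.1, 60.0: 0.9, 60.5: 0.2, 61.5: 0.15}
print(identify_band(sweep))  # prints "metal link band"
```

Because the comparison is against the peak's frequency rather than its absolute power, the scheme tolerates variation in distance and reflectivity between readings.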
It will be understood that the antenna of the detector170can be an antenna that operates for other functions, such as communication with other external devices. It will be further understood that such communication can be performed while simultaneously detecting for any reflected radiation to determine the absence or presence of the band110. In some embodiments, the detector170includes a capacitive sensor and the identification element190can include one or more features that alter capacitance in a nearby region. The housing or other component of the watch body100can provide a surface or window for the sensor of the detector170to detect, by capacitive influence, the presence or absence of the identification element190. The identification element190can include a surface, coating, or other feature that, when brought into contact or proximity of the detector170, induces a change in the capacitance of the detector170. The change in the capacitance can be distinguished from other configurations of the detector170to determine the presence or absence of the band110. The change in the capacitance can have a distinct signature that is unique to the identification element190. Thereby, the watch body100can identify the band110and perform corresponding actions. In some embodiments, the detector170includes an induction coil and the identification element190can include one or more features that alter induction in a nearby region. The induction coil can be dedicated to detecting the identification element190or operable for other purposes, such as inductive charging of a power source of the watch body100. The identification element190can include a magnet or other feature that, when brought into proximity of the detector170, induces a current to flow through the detector170. The induced current can be distinguished from other configurations of the detector170to determine the presence or absence of the band110. 
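The capacitive variant can be sketched as comparing a reading against a no-band baseline: a shift beyond a noise tolerance indicates presence, and the size of the shift is matched to per-band signatures. The baseline, shift values, tolerance, and band names are hypothetical.

```python
# Hypothetical sketch: detect band presence from a capacitance change and
# match the magnitude of the change against per-band signatures.

BASELINE_PF = 10.0  # capacitance with no band attached (picofarads)

# Expected capacitance shift for each known identification element.
SIGNATURE_SHIFTS_PF = {"sport band": 2.0, "leather band": 3.5}

def identify_band(measured_pf, tolerance=0.3):
    """Return (present, band) from a single capacitance reading."""
    shift = measured_pf - BASELINE_PF
    if abs(shift) < tolerance:
        return False, None          # within noise: no band detected
    for band, expected in SIGNATURE_SHIFTS_PF.items():
        if abs(shift - expected) <= tolerance:
            return True, band       # change matches a known signature
    return True, None               # something attached, but unrecognized

print(identify_band(13.4))  # prints "(True, 'leather band')"
```

The same baseline-and-threshold structure applies to the induction-coil variant, with induced current standing in for the capacitance shift.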
The change in the detector170can have a distinct signature that is unique to the identification element190. Thereby, the watch body100can identify the band110and perform corresponding actions. Referring now toFIG.6, a method200can be performed by a watch body100to interact with a band110. In an operation202, the watch body100can initiate a scan for an identification element. The watch body100can be placed into a scanning mode based on manual and/or automated initiation. For example, the user can place the watch body100into a scanning mode by providing manual inputs to the watch body100. For at least a limited period of time thereafter, the watch body100can activate its components to scan for the identification element190of a band110. Alternatively or additionally, the watch body100can automatically activate its components after it senses the presence of an identification element190, for example as described above. Additionally or alternatively, the electronic device can initiate a scan for an identification element upon detection that the electronic device is worn by a user. For example, when the electronic device detects that it is being worn (e.g., based on proximity to a user as can be sensed by a sensor) after a period of not being worn, the electronic device can initiate a scan to detect the identification element of any band present. By further example, the electronic device can detect the presence of a band based on sensed changes, such as a change in impedance or inductance of a coil when the band is inserted into the electronic device, as discussed further herein. Such changes can be used to initiate a scan. Additionally or alternatively, the electronic device can initiate scans periodically or based on a predetermined schedule. In an operation204, the watch body100can detect an identification element190of a band110. Examples of components and mechanisms for detecting the identification element190are described above. 
One or more of these components and/or mechanisms can be applied to effectively detect the identification element190. Once the identification element190has been detected, a record thereof can be stored within a memory152of the watch body100. The identification element190can be an indicator of a feature of the band110. For example, the identification element190can indicate a type, model, color, size, or other characteristic of the band110. Where the identification element190indicates one characteristic (e.g., model) of the band110, other characteristics (e.g., color, size) can be inferred. The identification can serve as an input to determine an action to be performed by the watch body100. In an operation206, the watch body100can determine an action associated with the identification element190. Each of a variety of identification elements190corresponding to different bands110can be recorded in the memory152of the watch body100. Each of the recorded identification elements190can have associated therewith a corresponding action. The record of identification elements190and associated actions can be in the form of a table, array, or other data structure. When a given identification element190is detected, it can be compared with the recorded identification elements190to find a match and determine the corresponding action. While the foregoing discussion relates to referencing memory152onboard the watch body100, it will be recognized that the watch body100can reference another database apart from the watch body100. The association of identification elements190and corresponding actions can be preprogrammed, user-selected, or a result of machine-learning based on prior usage with one or more bands110. In an operation208, the watch body100can perform the action that has been determined to be associated with the identification element190. 
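The detect-match-perform flow of the method200can be sketched with a simple lookup table mapping recorded identification elements to actions. The element codes, action names, and table contents below are hypothetical illustrations rather than details from the disclosure.

```python
# Hypothetical sketch of the method200 flow: detect an identification
# element, look up its associated action, and perform it.

def launch_activity_tracking():
    return "activity tracking launched"

def mute_notifications():
    return "notifications muted"

# Recorded identification elements and their associated actions
# (operation206); stored as a simple table in memory.
ACTION_TABLE = {
    "exercise-band": launch_activity_tracking,
    "formal-band": mute_notifications,
}

def handle_band(detected_element):
    """Operations 204-208: match the detected element and run its action."""
    action = ACTION_TABLE.get(detected_element)
    if action is None:
        return "no action associated"  # unknown element; leave settings as-is
    return action()                    # operation208: perform the action

print(handle_band("exercise-band"))  # prints "activity tracking launched"
```

The table could equally be preprogrammed, user-edited, or populated by a learning process, since the dispatch step only depends on the mapping existing at lookup time.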
For example, the recorded action corresponding to the detected identification element190can include instructions for execution by the processor150and/or other components of the watch body100. Alternatively or additionally, the action can include causing another device, apart from the watch body100, to execute instructions. The action can be performed automatically upon identification of a band110. Additionally or alternatively, the watch body100can provide a prompt requesting user confirmation of the action, and the action can be performed after user confirmation is received. Additionally or alternatively, a user can manually override or modify the action. Various examples of actions are discussed below. Actions performed by the watch body100in response to detection of an identification element190include influencing regular operation of the watch body100. For example, the regular operation of the watch body100can be maintained with additional or altered features based on the selected band110. As such, the user's experience with the watch body100during its regular operation is enhanced. In some embodiments, upon identifying a particular band110, the watch body100provides a feature of a visual user interface that corresponds to a characteristic of the band110. For example, the watch body100can display on the display104a feature that is substantially the same color as the band110. Alternatively or additionally, the feature can be a similar color, a matching color, or a complementary color. The feature can be any visible feature of the display104. Examples of features include watch hands, text, numbers, symbols, graphics, charts, markers, or any displayed item. One, some, or all of the features visible on the display104can be altered based on the color of the identified band110. By further example, the watch body100can display on the display104a feature that is associated with the band110, regardless of color selection. 
For example, displayed information, watch faces, menu items, and selectable icons can be selected based on the selection of the band110. In some embodiments, upon identifying a particular band110, other settings of the watch body100can be modified. A band110can be associated with an activity that is supported by the watch body100. For example, an exercise band can be worn when a user is exercising. Upon identification of the exercise band, actions conducive to an exercise session can be performed by the watch body100. For example, the watch body100can display particular information, track activity of the user, take a biometric reading, record a location of the user, launch an activity tracking app, and/or modify notification settings (e.g., to be more prominent). By further example, a formal band can be worn in a more formal setting. Upon identification of the formal band, actions conducive to a formal setting can be performed by the watch body100. For example, the watch body100can display particular information, modify notification settings (e.g., to be less prominent), provide reminders to the user, and/or record a location of the user. Actions performed by the watch body100in response to detection of an identification element190include actions outside of the regular operation of the watch body100. For example, the watch body100can perform actions that are only available when a particular band110is detected. As such, the user's experience with the watch body100is expanded with the selection of bands110. In some embodiments, a band110can include an identification element190that provides authorization for otherwise unavailable actions. For example, a band110can facilitate redemption of items of value. The band110can be used with the watch body100to redeem items of value, such as credit, gift cards, funds, cash, prizes, digital media, access to content (e.g., online content), goods, and/or services. 
The identification element190can provide information to the watch body100for authorizing redemption of an item of value. For example, the identification element190can include a code that is verifiable by an external device. As shown inFIG.7, a system300can manage the redemption. The watch body100can identify a band110and communicate with an external device310. Information from the identification element190can be transmitted from the watch body100to the external device310. The external device310can verify the information and authorize redemption of an item of value. The external device310can further manage the redemption by executing a transfer to an account associated with the watch body100. Bands110that facilitate redemption of items of value can be provided by vendors, retailers, service providers, or entities that manage the redemption process. The bands110can be provided, exchanged, and transferred for sale or as gifts based on the value of the redeemable items. The bands110can be provided as promotional items in conjunction with an event. For example, bands110can be provided at a festival, convention, conference, concert, or reunion, to provide attendees possessing the bands110with access to items of value that are associated with the event. Each attendee can access the items of value by using the bands110with their watch bodies100. In some embodiments, a band110and a watch body100can interact and operate in a manner that is not necessarily perceivable by a user. For example, a watch body100can track usage of one or more bands110. The tracked usage information can include dates, times, durations, locations, activities, biometrics of the user, and/or environmental features in relation to periods before, during, and/or after usage of each band110. The tracked usage information can be collected during a background process of the watch body100. The tracked usage information can be output to a user or uploaded to an external device for analysis. 
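The background usage tracking described above can be sketched as appending per-session records for later output or upload. The record fields and example values are hypothetical, not taken from the disclosure.

```python
# Hypothetical sketch: record band usage in the background so that dates,
# durations, and locations can later be output or uploaded for analysis.

usage_log = []  # in practice this would live in the watch body's memory152

def record_usage(band_id, location, started, ended):
    """Append one usage record; timestamps are seconds since the epoch."""
    usage_log.append({
        "band": band_id,
        "location": location,
        "start": started,
        "duration_s": ended - started,
    })

# Example: a one-hour session with a sport band at the gym.
record_usage("sport band", "gym", started=1000.0, ended=4600.0)
print(usage_log[0]["duration_s"])  # prints 3600.0
```

Accumulated records of this form could then feed the machine-learning analysis of how each band is used.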
The tracked usage information can be used for machine learning in relation to how each band110is used. The watch body100can perform a variety of other actions upon identification of a band110. It will be recognized that the detection of an identification element190can be followed by any associated action that can be performed by the watch body100. For example, where the watch body100has the required capabilities, the watch body100launches an app, opens a website, starts a timer, displays a message, provides an alert, communicates with another device, and/or performs other functions. Referring now toFIGS.8and9, a watch can perform different actions when each of a variety of band combinations is detected. For example, multiple band portions can be provided across different sides of the watch body, and each band portion can be independently detected. For example, as shown inFIG.8, a first band portion110A and a second band portion110B can each be attached to a corresponding one of different (e.g., opposing) sides of the watch body100(e.g., at a given channel114). The first band portion110A can have a characteristic that is the same as or different from a characteristic of the second band portion110B. Such characteristics can include a type, model, color, material, size, output, or other characteristic of the corresponding band portion. The characteristic can be detectable or undetectable to a user. As shown inFIG.9, a first detector170A on a first side of the watch body100can detect a first identification element190A of the first band portion110A, and a second detector170B on a second side of the watch body100can detect a second identification element190B of the second band portion110B. Based on and in response to these detections, the watch body can perform an action. An example of such an action includes providing an output106on the display104of the watch body100. 
The output106can optionally include a visual feature with a characteristic that corresponds to a characteristic of the first band portion110A and a characteristic that corresponds to a characteristic of the second band portion110B. Further examples include providing other outputs to a user, initiating a function, terminating a function, communicating with another device, and the like. Referring now toFIG.10, multiple detectors can operate in concert to detect identification elements of a band. The multiple detectors can cooperatively provide frequent detection of a band along with accurate and effective detection of the identity of the band. For example, as shown inFIG.10, a band110can include a first identification element190A and a second identification element190B. When the band110is attached to the watch body100(e.g., by inserting the lug112into the channel114), the first identification element190A can be aligned with a first detector170A of the watch body100, and the second identification element190B can be aligned with a second detector170B of the watch body100. It will be recognized that some detection mechanisms can consume more power or have other effects that would preferably be minimized. For example, the second detector170B can, when operated, consume more power than the first detector170A. In some embodiments, the first detector170A detects the band110(e.g., by the first identification element190A) in a manner that is different than a manner in which the second detector170B detects the second identification element190B to accommodate the differences in operation. For example, the first detector170A can detect the band110on a more frequent basis than the second detector170B detects the second identification element190B. For example, the first detector170A can be “always on” or otherwise be ready to detect the band110continuously or periodically. 
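The power-saving arrangement of a frequently polled, low-power presence detector gating a higher-power identity detector can be sketched briefly. The class, probe callables, and band name below are hypothetical illustrations of the two-stage scheme.

```python
# Hypothetical sketch of two-stage detection: a cheap, frequently polled
# first detector gates activation of a higher-power second detector.

class TwoStageDetector:
    def __init__(self, presence_probe, identity_probe):
        self._presence = presence_probe   # low-power, "always on" detector
        self._identity = identity_probe   # higher-power, used on demand
        self.identity_reads = 0           # how often the costly probe ran

    def poll(self):
        """Run on every polling cycle; returns the band identity or None."""
        if not self._presence():          # first detector: is a band present?
            return None                   # keep the second detector off
        self.identity_reads += 1          # second detector activated
        return self._identity()

# With a band present, the identity probe runs and reports the band.
detector = TwoStageDetector(lambda: True, lambda: "sport band")
print(detector.poll())            # prints "sport band"

# With no band present, the costly identity probe never runs.
idle = TwoStageDetector(lambda: False, lambda: "sport band")
idle.poll()
print(idle.identity_reads)        # prints 0
```

Counting `identity_reads` makes the power saving visible: the expensive probe runs only on cycles where the cheap probe has already confirmed presence.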
Upon detection of the band110by the first detector170A, the watch can determine that the band110is present and infer that the second identification element190B may also be present and available for detection. Based on the first detection, the second detector170B can be activated to detect the second identification element190B by any mechanisms described herein. By activating the second detector170B only after detecting the presence of the band110with the first detector170A, the operation of the second detector170B can be limited to when it is needed. Additionally or alternatively, the second detector170B can be deactivated upon detection that the band110is not present. Accordingly, power consumption by the second detector170B can be reduced without losing an ability to detect the band110when present. Referring now toFIGS.11-13, a detector can optically sense the identification element on a lug or another portion of a band. As discussed above, a light source can be provided to facilitate optical sensing by the detector. For example, the detector can include or be accompanied by a light source that illuminates the identification element, and the detector can optically detect the light reflected off of the identification element. As shown inFIG.11, a lug112or another portion of a band110can include an identification element190along a surface thereof. The identification element190can include at least one reference row and at least one code row providing patterns with features that, when compared to the reference row, can indicate an identity or other characteristic of the band110. A reference row can provide a basis for comparison with respect to other rows. A reference row130can be provided with a repeating pattern of a shape132. 
As used herein, a row providing a repeating pattern of a given shape is an arrangement of multiple instances of the given shape in a pattern, wherein the instances of the shape occur with at least one visible or otherwise detectable feature in common across the entire pattern. For example, the instances of the repeated shape can all have the same shape, size, dimensions, boundaries, color, periodicity, frequency, orientation, spacing, alignment, and the like. A code row can provide a feature that, when compared to the reference row, can be used to identify the band. A code row140can be provided with a repeating pattern of a shape142. The shape142repeated along the pattern of the code row140can be distinct from the shape132that is repeated along the pattern of the reference row130. For example, a length of the shape142of the code row140can be different (e.g., shorter or longer) than a length of the shape132of the reference row130. The repeating pattern of the shape142of the code row140can optionally be a positive integer multiple (e.g., harmonic) of a frequency of the repeating pattern of the shape132of the reference row130. Accordingly, an integer number (e.g., 1, 2, 3, 4, etc.) of shapes142of the code row140can fit within a single length of the shape132of the reference row130. This allows a view172captured by the detector to detect and compare the integer number of shapes142to a single shape132of the reference row130. It will be understood that any number of code rows can be provided. For example, an additional (e.g., second) code row144can be provided with a repeating pattern of a shape146. The shape146repeated along the pattern of the code row144can be distinct from the shape132of the reference row130and/or the shape142of the code row140. For example, a length of the shape146of the code row144can be different (e.g., shorter or longer) than a length of the shape132of the reference row130and/or the shape142of the code row140. 
The repeating pattern of the shape146of the code row144can optionally be a positive integer multiple (e.g., harmonic) of a frequency of the repeating pattern of the shape132of the reference row130. The positive integer multiple can be different than that of the shape142of the code row140. Accordingly, an integer number (e.g., 1, 2, 3, 4, etc.) of shapes146of the code row144can fit within a single length of the shape132of the reference row130. This allows a view172captured by the detector to detect and compare the integer number of shapes146to a single shape132of the reference row130. It will be understood that any number of reference rows can be provided. For example, an additional (e.g., second) reference row134can be provided with a repeating pattern of a shape136. The reference row134can be on an opposite side of the identification element190from the other (e.g., first) reference row130. The shape136repeated along the pattern of the reference row134can be the same as the shape132of the reference row130. For example, the shape132and the shape136can all have the same shape, size, dimensions, boundaries, color, periodicity, frequency, orientation, spacing, alignment, and the like. Accordingly, the code rows140and144can be compared to either one or both of the reference rows130and134. As shown inFIG.12, a detector (not shown) can capture a view172of at least a portion of the identification element190. Each of the shapes, or portions thereof, within the view172can be captured for analysis. For example, the shape(s) of the code row(s) can be compared to the shape(s) of the reference row(s). The shapes can have one or more features that are used as a basis for comparison. For example, such a feature can include a relative phase, amplitude, color, reflectivity, diffraction grating, and/or texture of the second repeating pattern with respect to the first repeating pattern. 
The features of the shapes from rows captured in the view172can be compared to determine similarities and/or differences therebetween. In some embodiments, features to be compared can include a relative phase of at least two different repeating patterns. For example, the repeating patterns can have a detectable period. While the patterns can be different, a point within a given period (e.g., peak, trough, etc.) can be compared to a comparable point in a different pattern. For example, the phase of a pattern in a code row can be shifted (e.g., horizontally) relative to the phase of a reference row. The phase of each and/or a relative phase between pairs of patterns can be detected, quantified, and/or compared. The phases of one or more code rows and/or one or more reference rows can be combined or otherwise analyzed, for example by summation. Further analysis can include, for example, discrete Fourier transform. The output can be used to determine an identity of the band among others and/or distinguish the band from other bands that have different identification elements (e.g., with different features). It will be understood that a comparison of features can be based on any one or more of a variety of features. For example, such features across different patterns can include a relative phase, amplitude, color, reflectivity, diffraction grating and/or texture of the repeating patterns. Where different types of features are used, the features of the same type from different repeating patterns can be compared, and the different types of features can provide additional dimensions that can be used to distinguish the features of the rows. The arrangement of the rows and the repeating patterns therein can facilitate the operation of the detector to capture a view with sufficient information to make an identification, even if less than an entirety of the identification element is captured within the view. 
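The relative-phase analysis described above, using a discrete Fourier transform of sampled row profiles, might be sketched as follows. This assumes each row has been sampled into a 1D intensity profile; the function names are illustrative, and only the standard library is used.

```python
import cmath
import math

def dft_bin(samples, k):
    """Single-bin discrete Fourier transform: complex amplitude of the
    component with k cycles over the sampled window."""
    n = len(samples)
    return sum(s * cmath.exp(-2j * math.pi * k * i / n)
               for i, s in enumerate(samples)) / n

def relative_phase(reference, code, ref_cycles, code_cycles):
    """Phase of the code row's pattern relative to the reference row's
    pattern, in radians, folded into [-pi, pi)."""
    ref_phase = cmath.phase(dft_bin(reference, ref_cycles))
    code_phase = cmath.phase(dft_bin(code, code_cycles))
    delta = code_phase - ref_phase
    return (delta + math.pi) % (2 * math.pi) - math.pi
```

A code row whose pattern is shifted horizontally by a quarter of its own period would show a relative phase of about pi/2 against an unshifted reference row, giving a quantity that can be compared across bands with different identification elements.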
For example, as shown inFIG.12, a length L of the view172can extend to capture at least one shape132of the reference row130, a corresponding (e.g., integer) number of shapes142of a first code row140, and a corresponding (e.g., integer) number of shapes146of a second code row144, as well as any additional rows. Because the shapes of each row repeat according to a pattern, the view172need not capture an entirety of any row. The length L can capture enough of one or more shapes to determine the characteristics of all of the shapes across an entire row. As such, the horizontal (e.g., along the length L) alignment of the view172(e.g., of the lug relative to the watch body or the identification relative to the detector) need not be limited to only one horizontal region of the identification element. This allows the detection to be made across a wide variety of horizontal alignment arrangements. Because the shapes of each row repeat according to a pattern, the view172need not capture an entirety of any one shape. For example, where a portion of one shape and a portion of its neighbor in the pattern are captured within the view172, the characteristics of either shape can be inferred by the combination of portions within the view172. By further example, as shown inFIG.13, a height H of the view172can extend to capture portions of multiple rows. The height H of the view172can extend across an entire height of all of the code rows. The height H of the view172can extend across an entire height of at least one of the reference rows or at least a portion of both reference rows. As shown inFIG.12, the shapes of the different reference rows130and134can be the same, so that the view172need not capture an entire height of either one of the reference rows. 
For example, where a portion of a shape132from the first reference row130and a portion of a shape136from the second reference row134are captured within the view172, the characteristics of either shape can be inferred by the combination of portions within the view172. As such, the vertical (e.g., along the height H) alignment of the view172(e.g., of the lug relative to the watch body or the identification relative to the detector) need not be limited to only one vertical region of the identification element. This allows the detection to be made across a wide variety of vertical alignment arrangements. Additionally or alternatively, the identification element190can include another symbol, such as a barcode, including a machine-readable representation of information in the form of one or more patterns. The symbol may be formed as patterns of dark (e.g., black) and light (e.g., white) bars, circles, dots or other shapes. Other patterns are contemplated, such as patterns of dots, concentric circles and the like. Other examples of barcodes include Universal Product Codes (UPCs), Code 39 barcodes, Code 128 barcodes, PDF417 barcodes, EZcode barcodes, DataMatrix barcodes, QR Code barcodes, or barcodes that utilize any other type of barcode symbology. A 1D sensor or a 2D sensor can be used to capture images of adequate resolution (e.g., pixels) to detect the identification element190(e.g., barcode). The depth of focus of the sensor can be arranged so that the barcode is in focus when the band is swiped past the detector170. Additionally or alternatively, the detector170can be configured to perform barcode scanning. In particular, the detector170can capture an image of the identification element190and use digital image processing techniques to decode the barcode. 
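As a hedged illustration of the image processing involved, a thresholded 1D intensity scanline can be reduced to runs of dark and light bar widths, which is the first step in decoding most linear symbologies. This sketch is generic and does not implement any particular barcode standard; the threshold value and function name are assumptions for illustration.

```python
def scanline_runs(intensity, threshold=0.5):
    """Threshold a 1D intensity scanline and return (is_dark, width) runs,
    the raw material for decoding a 1D bar symbology. Dark pixels reflect
    less light, so values below the threshold are treated as bars."""
    bits = [v < threshold for v in intensity]
    runs = []
    start = 0
    for i in range(1, len(bits) + 1):
        # Close out a run at the end of the scanline or when the value flips.
        if i == len(bits) or bits[i] != bits[start]:
            runs.append((bits[start], i - start))
            start = i
    return runs
```

A decoder for a specific symbology would then map the sequence of run widths against that symbology's bar/space width tables.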
During a detection operation, the watch body100may prompt a user to line up the detector170with the identification element190in a particular manner, such as aligning the identification element190in the center of an image captured by the detector170and displayed on the display104. Referring toFIGS.14and15, a band can include an identification element that facilitates wireless communication from within an outer periphery of a lug of the band. The identification element can be housed within a portion of the lug that also helps secure the lug to the housing of the watch body. As shown inFIG.14, the lug112can include a locking element116within an opening of the body210. The locking element116may be positioned at any point along the body210and in any orientation. The locking mechanism can slideably move within the opening of the body210to engage a portion of the watch body. For example, when the lug112is received within the channel of the watch body, the locking element116can actuate and/or otherwise engage a portion of the watch body to secure the lug112within the channel. The locking element116can be released by a user to allow the lug112to be removed from the channel. The locking element116of the lug112can include one or more elements that can controllably protrude from the body210to engage the watch body. As further shown inFIG.14, the lug112can include one or more bumpers118that extend or protrude from a body194of the lug112. The bumpers118can be positioned at leading edges of the locking element116and distributed along the body194. The bumpers118can be positioned on one side or opposing sides of the body194. The bumpers118may include one or more alignment pads that act as a guide for the lug112when the lug112slides relative to a channel in the housing of the watch body. The bumpers118may have a rounded top surface that follows or substantially follows the contour of the body194while still protruding from the surface of the body194. 
The bumpers118may have a planar or substantially planar top surface. Although the bumpers118are shown in a rounded oblong or lozenge shape, the bumpers118may be curved, proud, flat, angled, have a raised edge and a flat interior or any combination thereof. The bumpers118may be positioned on various parts of the body194of the lug112. For example, a top surface of the body194of the lug112may include one or more bumpers118and the bottom surface of the body194of the lug112may also include the same or additional bumpers118. The bumpers118may include a casing made of rubber, plastic, nylon, or other such material. The material may be a material that acts to increase friction between the lug112and the channel of the housing of the watch body. The body194of the lug112may include one or more recesses in which the bumpers118may be placed. Additionally or alternatively, the bumpers118may be placed directly on top, bottom and/or side surfaces of the body194of the lug112. Further, the bumpers118may be disposed in one or more openings that extend entirely through an axis of the body194of the lug112. The bumpers118, or at least a portion of each bumper118, extends or protrudes from one or more surfaces of the body194of the lug112. In such embodiments, the portion of the bumper118that extends beyond the surface of the body194of the lug112is used to: (1) increase friction between the lug112and the channel of the housing of the watch body into which the lug112is to be placed; and (2) maintain or substantially maintain spacing between one or more surfaces of the lug112and a surface of a channel of the housing of the watch body into which the lug112is to be placed. Accordingly, undesired movement, rattling and/or noise caused by any movement of the lug112may be reduced when the lug112is contained within the channel. As shown inFIG.15, a bumper118can contain an identification element190that facilitates wireless communication with a detector of a watch body. 
For example, an identification element190can include an antenna element162and an identification tag166(e.g., an NFC tag). The antenna element162can be a wound coil, an etched PCB, or another structure for receiving and/or emitting radiation. The identification tag166can be operably connected to the antenna element162. A magnetic (e.g., ferrite) core164can be provided (e.g., within the antenna element162) to facilitate detection of the identification element190and/or enhance operation of the antenna element162. The components of the identification element190, including the antenna element162, the magnetic (e.g., ferrite) core164, and/or the identification tag166can be housed within a casing168of the bumper118. For example, the casing168can be molded, potted, or otherwise formed about the other components of the bumper118. The casing168can seal the components of the identification element190from an external environment. Additionally, the casing168can define an outer periphery of the bumper118, including the portions thereof that extend beyond the body of the lug and engage the watch body. While only one wound coil is shown for the antenna element162inFIG.15, it will be understood that a variety of coil arrangements can be provided. For example, the antenna element162may include two or more coils (e.g., a pair of coils) that are each wound around a respective core structure (e.g., a pair of corresponding core structures or support structures) that are mounted on, or formed as protrusions from, a surface. Optionally, a single wire can form all of the multiple (e.g., two) coils of the antenna element162. 
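An NFC-style identification tag of the kind described above typically forms a resonant LC loop with its antenna coil, tuned to the reader's carrier frequency (13.56 MHz for NFC). As a hedged sketch of that standard tuning relationship, with no actual component values from the device:

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC loop: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

def tuning_capacitance_f(inductance_h: float, target_hz: float) -> float:
    """Capacitance needed to tune a coil of given inductance to target_hz,
    from the same relation solved for C."""
    return 1.0 / (inductance_h * (2.0 * math.pi * target_hz) ** 2)
```

A ferrite core such as the core164increases the coil's effective inductance and flux capture, which is one reason it can enhance operation of a small antenna element.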
In one or more implementations, a multi-coil antenna module can communicate with a (e.g., mirrored) multi-coil antenna module in the watch housing to form one or more magnetic flux loops through the multi-coil antennas that can facilitate providing, exchanging, and/or receiving identification, power, and/or other communications between the processing circuitry of the wearable device and the band. The watch body can be provided with low-power impedance detection circuitry to detect the presence of the identification element190near a detector (e.g., NFC radio). One or more additional detectors can be positioned at locations to be aligned with the identification element when the lug is within the channel of the watch body. Detection can be contactless (e.g., non-conductive) so that the components can be protected from Galvanic corrosion that may occur in a contact-based arrangement. The detection can also be autonomous, such that user intervention or explicit operations are not required. Additionally, the detection can be performed without requiring the band to provide its own power source. It will be recognized that a variety of other configurations are contemplated to provide wireless communication for detection of the identification element190of the band110. Accordingly, watch bands described herein can facilitate a watch's ability to perform one or more operations based on the detected characteristic and configuration of the watch band. Characteristics of the watch band can change when placed in different configurations, and each of these characteristics can be correlated with each of the various configurations. The characteristics can be measured to detect which of the various configurations the watch band is in. Various examples of aspects of the disclosure are described below as clauses for convenience. These are provided as examples, and do not limit the subject technology. 
Clause A: a watch comprising: a watch body comprising: a processor; and an optical detector; and a band that is attachable to the watch body for securing the watch to a user and comprising an identification element that is detectable by the optical detector, the identification element comprising: a reference row comprising multiple first shapes in a first repeating pattern; and a code row comprising multiple second shapes in a second repeating pattern. Clause B: a watch comprising: a watch body comprising: a processor; and an antenna; and a band that is attachable to the watch body for securing the watch to a user and comprising an identification element, wherein: the antenna is operable to emit radiation with a frequency; and the identification element comprises a reflective surface that is configured to reflect at least a portion of the emission. Clause C: a watch comprising: a watch body comprising: a housing; a processor; and a detector; and a band that is attachable to the watch body for securing the watch to a user and comprising: a lug for being received into a channel of the housing; a locking mechanism for engaging the channel; multiple bumpers protruding from the lug to abut the housing when the lug is received within the channel, each of the multiple bumpers being positioned on opposing sides of the locking mechanism; and an identification element housed within one of the bumpers, wherein the detector is configured to wirelessly communicate with the identification element when the lug is received within the channel. One or more of the above clauses can include one or more of the features described below. It is noted that any of the following clauses may be combined in any combination with each other, and placed into a respective independent clause, e.g., clause A, B, or C. 
Clause 1: the identification element corresponds to a characteristic of the band; and the processor is configured to determine whether to execute an action based on a comparison of a feature of the second repeating pattern with respect to the first repeating pattern. Clause 2: the feature comprises a relative phase, amplitude, color, reflectivity, or texture of the second repeating pattern with respect to the first repeating pattern. Clause 3: the second repeating pattern has a frequency that is a positive integer multiple of a frequency of the first repeating pattern. Clause 4: the reference row is a first reference row; and the identification element further comprises a second reference row on a second side of the identification element, opposite the first side, the second reference row comprising additional first shapes in a third repeating pattern. Clause 5: the optical detector is configured to capture a view, wherein when less than an entire height of the first reference row is within the view, at least a portion of a height of the second reference row is within the view. Clause 6: the code row is a first code row; and the identification element further comprises a second code row comprising multiple third shapes in a third repeating pattern. Clause 7: the second repeating pattern has a first frequency that is a positive integer multiple of a frequency of the first repeating pattern; and the third repeating pattern has a second frequency, different than the first frequency, that is a positive integer multiple of a frequency of the first repeating pattern. Clause 8: the identification element corresponds to a characteristic of the band; and the processor is configured to determine whether to execute an action based on the detected identification element. Clause 9: the watch body further comprises a display; the characteristic of the band is a color of the band; and the action is changing a feature on the display to include the color. 
Clause 10: the watch body further comprises: a housing containing the processor and the antenna; a channel for receiving a portion of the band; and a window positioned at the channel and configured to transmit the radiation between the antenna and the identification element when the band is received within the channel. Clause 11: the identification element is configured to resonate at the frequency of the radiation. Clause 12: the identification element comprises: a coil; a magnetic core within the coil; and a tag. Clause 13: the detector is a first detector; the identification element is a first identification element; the watch body further comprises a second detector; and the band further comprises a second identification element housed within another one of the bumpers, wherein the second detector is configured to detect the second identification element when the lug is received within the channel. As described above, one aspect of the present technology may include the gathering and use of data available from various sources. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter ID's, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information. The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals. 
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country. 
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In another example, users can select not to provide mood-associated data for targeted content delivery services. In yet another example, users can select to limit the length of time mood-associated data is maintained or entirely prohibit the development of a baseline mood profile. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app. Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. 
De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods. Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery services, or publicly available information. To illustrate the interchangeability of hardware and software, items such as the various illustrative blocks, modules, components, methods, operations, instructions, and algorithms have been described generally in terms of their functionality. Whether such functionality is implemented as hardware, software or a combination of hardware and software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. A reference to an element in the singular is not intended to mean one and only one unless specifically so stated, but rather one or more. For example, “a” module may refer to one or more modules. 
An element preceded by “a,” “an,” “the,” or “said” does not, without further constraints, preclude the existence of additional same elements. Headings and subheadings, if any, are used for convenience only and do not limit the invention. The word exemplary is used to mean serving as an example or illustration. To the extent that the term include, have, or the like is used, such term is intended to be inclusive in a manner similar to the term comprise as comprise is interpreted when employed as a transitional word in a claim. Relational terms such as first and second and the like may be used to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases. A phrase “at least one of” preceding a series of items, with the terms “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list. 
The phrase “at least one of” does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, each of the phrases “at least one of A, B, and C” or “at least one of A, B, or C” refers to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C. It is understood that the specific order or hierarchy of steps, operations, or processes disclosed is an illustration of exemplary approaches. Unless explicitly stated otherwise, it is understood that the specific order or hierarchy of steps, operations, or processes may be performed in different order. Some of the steps, operations, or processes may be performed simultaneously. The accompanying method claims, if any, present elements of the various steps, operations or processes in a sample order, and are not meant to be limited to the specific order or hierarchy presented. These may be performed in serial, linearly, in parallel or in different order. It should be understood that the described instructions, operations, and systems can generally be integrated together in a single software/hardware product or packaged into multiple software/hardware products. In one aspect, a term coupled or the like may refer to being directly coupled. In another aspect, a term coupled or the like may refer to being indirectly coupled. Terms such as top, bottom, front, rear, side, horizontal, vertical, and the like refer to an arbitrary frame of reference, rather than to the ordinary gravitational frame of reference. Thus, such a term may extend upwardly, downwardly, diagonally, or horizontally in a gravitational frame of reference. The disclosure is provided to enable any person skilled in the art to practice the various aspects described herein. 
In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology. The disclosure provides various examples of the subject technology, and the subject technology is not limited to these examples. Various modifications to these aspects will be readily apparent to those skilled in the art, and the principles described herein may be applied to other aspects. All structural and functional equivalents to the elements of the various aspects described throughout the disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”. The title, background, brief description of the drawings, abstract, and drawings are hereby incorporated into the disclosure and are provided as illustrative examples of the disclosure, not as restrictive descriptions. It is submitted with the understanding that they will not be used to limit the scope or meaning of the claims. In addition, in the detailed description, it can be seen that the description provides illustrative examples and the various features are grouped together in various implementations for the purpose of streamlining the disclosure. The method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. 
Rather, as the claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The claims are hereby incorporated into the detailed description, with each claim standing on its own as a separately claimed subject matter. The claims are not intended to be limited to the aspects described herein, but are to be accorded the full scope consistent with the language of the claims and to encompass all legal equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirements of the applicable patent law, nor should they be interpreted in such a way.
While the disclosure is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit aspects of the disclosure to the particular illustrative embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure. DESCRIPTION The following description should be read with reference to the drawings wherein like reference numerals indicate like elements. The drawings, which are not necessarily to scale, are not intended to limit the scope of the disclosure. In some of the figures, elements not believed necessary to an understanding of relationships among illustrated components may have been omitted for clarity. All numbers are herein assumed to be modified by the term “about”, unless the content clearly dictates otherwise. The recitation of numerical ranges by endpoints includes all numbers subsumed within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.80, 4, and 5). As used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include the plural referents unless the content clearly dictates otherwise. As used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise. It is noted that references in the specification to “an embodiment”, “some embodiments”, “other embodiments”, etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment.
Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is contemplated that the feature, structure, or characteristic may be applied to other embodiments whether or not explicitly described unless clearly stated to the contrary. FIG. 1 is a schematic block diagram of an illustrative control system 10. The control system 10 may represent any number of different control systems in which one or more controllers are used to regulate operation of one or more pieces of equipment. The control system 10 may generally represent a process control system that regulates a variety of different operations within any of a variety of different industrial or other processes. A refinery is an example of an industrial process. The control system 10 may generally represent a building control system that regulates a variety of different systems within a building. For example, the control system 10 may generally represent portions of a Heating, Ventilating and Air Conditioning (HVAC) system. The control system 10 may generally represent portions of a lighting system within a building or a security system within a building. These are just examples. The illustrative control system 10 includes a number of controllers 12 that are individually labeled as 12a, 12b and 12c. While a total of three controllers 12 are shown, it will be appreciated that this is merely illustrative, as the control system 10 may have any number of controllers 12 and may have a substantially greater number of controllers 12. In some instances, the controllers 12 may be part of a hierarchal control system that includes layers of control, with controllers at each control layer. The controllers 12 may be considered as being at a lowest or regulatory level in which each of the controllers 12 regulates operation of a corresponding piece of controlled equipment 14. The controlled equipment 14 is individually labeled as 14a, 14b and 14c.
As shown, each controller 12 is operably coupled with a corresponding single piece of controlled equipment 14. In some cases, a single controller 12 may control two or more distinct pieces of controlled equipment 14. While a total of three pieces of controlled equipment 14 are shown in FIG. 1, it will be appreciated that this is merely illustrative, as the control system 10 may have any number of pieces of controlled equipment 14 and may have a substantially greater number of pieces of controlled equipment 14. The controlled equipment 14 may represent any of a variety of different controllable components. In an HVAC system, for example, each piece of the controlled equipment 14 may represent an actuatable HVAC component such as a hot water valve, an air damper, a Variable Air Volume (VAV) box or other Air Handling Units (AHUs). The control system 10 may be considered as including sensors 16, which are individually labeled as 16a, 16b and 16c. While a total of three sensors 16 are shown, it will be appreciated that this is merely illustrative, as the control system 10 may have any number of sensors 16. Each sensor 16 may be operably coupled with one or more of the controllers 12, and may provide feedback to the controller(s) 12 that permits the controller(s) 12 to more accurately regulate the corresponding piece(s) of controlled equipment 14. If the piece of controlled equipment 14a is, for example, a hot water valve providing hot water on demand to a radiator, the sensor 16a may be a temperature sensor that reports a current room temperature to the controller 12a that is operably coupled with the piece of controlled equipment 14a. If the current room temperature is below a temperature setpoint for that room, the controller 12a may command the piece of controlled equipment 14a (in this case, a hot water valve) to open, or to open further if already open.
When the current room temperature reaches or approaches the temperature setpoint for that room, the controller 12a may command the piece of controlled equipment 14a (in this case, a hot water valve) to at least partially close. This is just an example. In some cases, it may be appropriate to think about each piece of controlled equipment 14 as representing a single actuatable device that can be opened or closed, or turned up or turned down, in response to a command to do so from the corresponding controller 12, with the corresponding sensor 16 providing feedback to the controller 12 that enables the corresponding controller 12 to better regulate operation of the piece of controlled equipment 14. As can be seen, the delay between when the hot water valve is opened and when the room temperature changes may be dependent on the size of the room, the heat transfer efficiency of the radiator, the distance the sensor is from the radiator, as well as many other factors that are specific to the particular installation. Other factors, such as how much the water valve should be opened and/or closed under different circumstances, will often depend on the particular installation. These are just examples. As can be seen, in general, a controller that is generically tuned in the factory will often not be optimally tuned for a particular installation in the field. In some instances, each of the controllers 12 may be operably coupled with a network 18. The network 18 may represent an internal network within a building or other facility. The network 18 may represent an internal network within a building, a factory or a refinery, for example. While the pieces of controlled equipment 14 are shown as being coupled directly to the corresponding controller 12, and are not shown as being coupled directly to the network 18, in some cases both the controllers 12 and the pieces of controllable equipment 14 may be directly coupled to the network 18.
In this case, each controller 12 may communicate with its corresponding piece of controllable equipment 14 through the network 18. In some cases, the sensors 16 may also be directly coupled to the network 18, rather than to a corresponding controller 12. In some instances, the control system 10 may communicate with a remote device 20 via a network 22. The network 22 may be considered as being an external network, and may for example rely on the Internet as being at least part of the network 22. In some cases, the network 22 may have a cloud-based component, represented by the cloud 24. The remote device 20 may be a computer that is remote from the facility in which the control system 10 is located. The remote device 20 may be a server such as a cloud-based server. In some instances, as will be discussed, the remote device 20 may be configured to receive data from the controllers 12 and be able to help fine tune operation of the controllers 12. FIG. 2 is a schematic block diagram of an illustrative control system 30 that provides an example of a hierarchal nature of some control systems. The illustrative control system 30, which may be considered as being an example of the control system 10, and vice versa, includes a controlled technology level 32. The pieces of controlled equipment 14 shown in the control system 10 may be considered as being at the controlled technology level 32. The next level up from the controlled technology level 32 is a regulatory control level 34. The controllers 12 shown in the control system 10 may be considered as being within the regulatory control level 34. Above the regulatory control level 34 is a control level 36. The control level 36 may be considered as including one or more controllers that each control a number of controllers that are at the regulatory control level 34.
In one example, a controller at the regulatory control level 34 may control an individual hot water valve, while a controller at the control level 36 may oversee operation of each of the controllers that themselves control hot water valves throughout a facility. In some cases, the controllers at the regulatory control level 34 may be considered as being edge controllers, as seen by an edge controller 38. The edge controller 38 controls operation of the equipment at the controlled technology level 32 for which the edge controller 38 is responsible. The edge controller 38 may communicate with a cloud-based server 40. In some cases, and as will be discussed, the cloud-based server 40 may include a reinforcement learning block 42 that may help to fine tune the edge controller 38. In some cases, the edge controller 38 and/or a controller at the control level 36 may include a reinforcement learning block 42 to help fine tune the edge controller 38 instead of or in addition to the cloud-based server 40. FIG. 3 is a schematic block diagram showing an illustrative controller 50 for controlling at least part of a process. The controller 50 may be considered as an example of the controllers 12 shown in the control system 10. The controller 50 may be considered as an example of the edge controller 38 shown in the control system 30. The controller 50 includes a memory 52 for storing a policy 54 of the controller 50. As will be described, the policy 54 may be used in improving operation of the controller 50 by fine tuning the control parameters by which the controller 50 operates. As an example, if the controller 50 is a Proportional Integral Derivative (PID) controller, the parameters that can be adjusted may include a Proportional (P) gain parameter, an Integral (I) gain parameter and a Derivative (D) gain parameter. A processor 56 is operatively coupled to the memory 52 such that the processor 56 is able to access and update the policy 54. The processor 56 is configured to perform a plurality of iterations.
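The PID control law named above, with its Proportional, Integral and Derivative gain parameters, can be sketched as follows. This is an illustrative sketch only; the first-order room model, the numeric gain values and the integral clamp (a common anti-windup measure) are assumptions for the example, not part of the disclosure.

```python
# Illustrative PID controller; kp, ki, kd correspond to the P, I and D
# gain parameters named in the text.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        # Clamp the integral term as a simple anti-windup measure (assumed).
        self.integral = max(-10.0, min(10.0, self.integral + error * self.dt))
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Control output: weighted sum of the three terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order "room" model toward a 21-degree setpoint.
pid = PID(kp=0.5, ki=0.1, kd=0.05, dt=1.0)
temp = 15.0
for _ in range(200):
    valve = max(0.0, min(1.0, pid.update(21.0, temp)))  # valve position in [0, 1]
    temp += 0.1 * (15.0 + 10.0 * valve - temp)          # room settles toward 15 + 10*valve
```

With these (invented) gains the integral term removes the steady-state error, so the simulated temperature settles at the setpoint; the three gains are exactly the tuning values the disclosure later treats as the controller tuning vector.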
During each iteration, the processor 56 updates the policy 54 of the controller 50 and controls at least part of the process using the updated policy 54 for a period of time. In some cases, the processor 56 may be configured to determine the updated policy 54 to use during each iteration. The processor 56 is configured to associate the updated policy 54 with a performance level of the controller 50 in controlling the at least part of the process. During each iteration, the updated policy 54 is determined using the associations generated during one or more previous iterations between the previous policies 54 and the corresponding performance levels of the controller 50 in controlling the at least part of the process. The updated policy is optimized to have the highest likelihood of producing a positive change in the performance level of the controller 50 in controlling the at least part of the process, rather than optimized to have the highest likelihood of producing the largest positive magnitude of change in the performance level relative to the immediately previous iteration(s). In some instances, the processor 56 may be configured to communicate one or more parameters indicative of the performance level of the controller 50 in controlling the at least part of the process to the remote device 20 and to subsequently receive the updated policy 54 from the remote device. In some cases, automated tuning may improve the performance of the controller 50. Reinforcement Learning (RL) may be used to help automatically tune the controller. One challenge with RL is that regulatory level controllers, such as the controllers 12, the edge controller 38 and the controller 50, may lack the processing power necessary to perform RL. Generally, RL is a form of artificial intelligence that is concerned with optimizing the behavior of an RL agent, that is, with maximizing the return for the RL agent.
RL can describe many real-world decision-making problems, including optimization of company business profits, online auctions, election campaigns, computer or board game strategies, air combat problems, robotics, etc., and has been successfully applied in many of these areas. In the RL problem formulation, the RL agent interacts with an uncertain environment that changes its state over time as a result of the actions of the RL agent as well as intrinsic system dynamics. The RL agent usually operates periodically in the discrete time domain. At every discrete time instant, the agent may choose an action a based on the state of the environment x and its policy π. The agent receives a reward r(a, x) which depends on the action chosen and the current state of the environment. Subsequently, the next environment state y will be partly affected by the actions taken previously. The optimal behavior of the agent should account not only for the immediate rewards but should also consider the future impacts of the actions on the state of the environment. The optimal agent's behavior should therefore involve the capability of planning. Accordingly, RL theory is often concerned with finding the optimal agent's policy when no model of the environment is available. The algorithm uses the previous observations, environment states, actions and rewards as the input data. It does not rely on other information, i.e. it is purely empirical. This fact, that the optimization does not rely on various assumptions, makes RL a promising method for solving the regulatory control problem. RL calculates an approximation of the optimal policy. The policy is a function that maps the environment states to the agent's actions. It can often be represented by a table, in which the agent may look up the optimal action to choose based on the current state. In a contemplated regulatory control regime, the agent chooses an action, e.g. the valve position for the next few seconds.
This time can be called an evaluation period. The reward received for this action may be a combination of the temperature control accuracy and the valve position change over the evaluation period. Ideal control achieves good control accuracy with minimum actuator moves. In some instances, on the regulatory control level, the energetic efficiency of the building will not be directly considered, as this problem will be solved on higher levels of the control hierarchy. RL can be implemented using value functions, e.g. an advantage function. Another popular value function is the state-action value function, called the Q-function. The results should be identical regardless of whether the advantage or the Q-function is used. The advantage function is more convenient for the present discussion. The advantage function is defined using the state-value or cost-to-go function Vπ(x). In this example, the state-value Vπ(x) is defined as the expected (i.e. statistically expected) agent's return when starting with the environment at state x and pursuing a given policy a=π(x). The agent's policy is a function, possibly randomized, mapping the states of the environment to the agent's actions a. Then, the advantage Aπ(a, x) of an action a at a state x with respect to the baseline policy π is the difference between two costs-to-go: (1) the return expected when using the specified action a at the initial state x before switching to the baseline policy π, minus (2) the return expected when following the baseline policy from x. Formally: Aπ(a, x) = r(a, x) + E{Vπ(y) | a, x} − Vπ(x). The advantage is the expected return difference caused by a one-step variation in a given policy. The instantaneous reward received at state x is denoted as r(a, x). Per the above definitions, Vπ(x) is the return when following the baseline policy π from x, whereas r(a, x) + Vπ(y) would be the return when applying action a at state x and causing the next environment state to be y.
Because the next state y is a random variable due to a non-deterministic environment, it is necessary to take the conditional expectation E{Vπ(y) | a, x} instead of simply Vπ(y). The advantage function has the following properties which make this function useful in finding the optimal policies: if Aπ(a, x) ≤ 0 for all a and x, then π is the optimal policy; if Aπ(a, x) > 0, then applying a at x defines an improved policy with a better return. The optimal policy can be found by improving any initial policy iteratively, gradually replacing all actions with ones which have a positive (+ sign) advantage with respect to the current policy. The process terminates when the set of such actions is empty. Then the policy is optimal (i.e. no policy can gain a better return on average). This policy improvement step is the core of the policy iterations method. In what is known as a greedy RL approach, the RL agent attempts to maximize the magnitude of the positive change relative to the previous iteration. In the greedy approach: 1. The advantage function for a policy π is estimated. 2. A new policy is defined by replacing the previous actions with πnew(x) = argmaxa Aπ(a, x), i.e. the action with the maximum magnitude advantage, i.e. making the maximum positive improvement. 3. Steps 1 and 2 are repeated with the updated policy if the policy changed in step 2. Rather than using the greedy approach, it has been found that a better approach is to use a non-greedy approach that attempts to optimize for the highest likelihood of producing a positive change in advantage, rather than the highest likelihood of producing the largest positive magnitude of change in advantage. Such non-greedy methods change the convergence process, but the ultimate optimized policy remains basically the same. The advantage function may be estimated from the data using approximation techniques to fit the observed data {[xi, yi, ai, ri], i = 1, 2 . . . }. These techniques can involve least squares optimization.
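The greedy policy-improvement step and the non-greedy alternative just described can be contrasted with a small sketch. The empirical advantage samples below are invented to show how a single outlier inflates the mean of a bad action without changing its sign average.

```python
import math

# Invented empirical advantage samples for two candidate actions at one
# state: action 0.0 yields consistent small gains, action 1.0 yields
# losses plus one large outlier that inflates its mean.
samples = {
    0.0: [0.9, 1.1, 1.0, 0.8],
    1.0: [-2.0, -1.5, 30.0, -2.5],
}

def greedy_pick(samples):
    # Step 2 of the greedy approach: maximize the average advantage value.
    return max(samples, key=lambda a: sum(samples[a]) / len(samples[a]))

def sign_pick(samples):
    # Non-greedy variant: maximize the average advantage sign instead.
    return max(samples, key=lambda a: sum(math.copysign(1.0, v)
                                          for v in samples[a]) / len(samples[a]))

greedy = greedy_pick(samples)  # picks 1.0: the outlier lifts its mean to 6.0 (vs 0.95)
signed = sign_pick(samples)    # picks 0.0: its sign average is 1.0 (vs -0.5)
```

The mean of the outlier-laden action is 6.0 against 0.95 for the consistent action, so the greedy rule is fooled, while the sign average (−0.5 against 1.0) is not.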
The data are obtained by trying various actions at various states. This advantage estimation (or Q-function estimation) is the key element of many RL algorithms. The optimal policy is found when the advantage function becomes known. In reality, it can only be approximated based on the finite data set that is available. Hence, reinforcement learning is a process of converging to the optimal policy but generally not achieving it in a finite time. When the environment state is not completely known to the RL agent, the whole relationship between actions, states and rewards may get obscured and the learning process may not converge, or its rate of convergence may be compromised. This makes RL application for regulatory control often difficult and possibly unreliable. In the regulatory control case, not all environment states are available. As a general rule, so-called transient states will be unknown. This can be illustrated using an example: the controller opens a hot water valve more. This action does not start increasing the controlled temperature immediately. At first, the heat increases the temperature of the heat exchanger, then the heat exchanger increases the temperature of the air around the heat exchanger, which is mixed with the air in the room, which will finally increase the temperature of the sensor body. Only then will the algorithm detect the change. There will be a delay. Only after a time (maybe several minutes) can the change of the controlled temperature trend be clearly noticed by the algorithm. The controlled temperature is the state that is at the end of the causal chain. Suppose what happens if the RL agent opens the valve but, instead of waiting a sufficient time to notice the temperature trend change, it tries a new action too soon: it closes the valve this time. At this moment, the heat released by the previous opening action will arrive at the sensor.
The agent will now conclude that closing the hot water valve (the current action) makes the air temperature increase (which is in fact an effect of the previous action). Unfortunately, the conclusion is grossly incorrect and will have a catastrophic impact on the controller performance. The trouble is that the environment state also contains intermediate states xi not included in the x known to the RL agent. Accordingly, short evaluation periods may not be optimal. Rather, it may be better to sacrifice the speed of learning in favor of robustness by using a sufficiently long evaluation period, e.g. several minutes instead of one second (one second may be a typical sampling period used in the BMS regulatory control layer). The extended period will effectively eliminate most problems stemming from unknown intermediate states. Any states which settle down in less than a minute will then not cause a problem. The knowledge of the controlled states will then be sufficient. The disadvantage of extended evaluation periods is that the process will be uncontrolled for more than one minute, i.e. the agent will set the valve position and will not be allowed to change it for the next few minutes. This will be regarded as unacceptable for many regulatory control loops. The control will be unresponsive. The extended evaluation period idea can still be used if the agent's action is interpreted not as choosing a valve position but as choosing a control law. Testing an action then means running the controller with fixed parameters over the evaluation period. The process may then always be controlled using a short sampling period; just the controller parameters will be updated only occasionally. Running a fixed controller for a sufficiently long period effectively eliminates problems with unknown intermediate states, provided the controller is stabilizing the process and thus attenuating the effect of the intermediate states. The situation changes when the controller causes loop instability.
Then the effects do not vanish over the testing period, even if the period were arbitrarily long. For the above reason, an extended evaluation period cannot be viewed as the ultimate solution to the problem. Many potential problems in RL applications caused by the unknown intermediate states could be eliminated by two choices: 1. Choosing a sufficiently long evaluation period to eliminate the effects of those states. 2. Defining the agent's action as the control law (e.g. PID gains) selection, not the actuator position, to avoid irresponsiveness during the evaluation period. As noted, the above two choices create a new problem: the advantage function estimate will be grossly affected by feedback loop instability, which will be amplifying the intermediate effects instead of attenuating them. The longer the evaluation period, the more the y state will depend on xi. Moreover, the unstable control is likely to hit some nonlinearity or saturation throughout the evaluation period: e.g. the valve will be either fully open or fully closed. These effects make the data obtained from such an evaluation period contradictory, non-repeatable and often difficult to model. The situation is that: 1. Those RL agent's actions which are close to optimal will produce valid data. 2. Incorrect actions will produce low quality data which will cause problems in the algorithm. This situation resembles the role of outliers known in various problems in mathematical statistics, e.g. regression analysis. It is known that least squares estimators provide consistent parameter estimates for many statistical models. On the other hand, it is known that least squares estimators are very inefficient when the probability distribution of errors is not normal, especially when large errors are more likely to occur. A handful of outliers may make the least squares estimates inaccurate. A solution to the outlier problem is to minimize a function other than the sum of squares.
The sum of Tukey's biweight (also known as bisquare) functions is a known method. Biweight behaves like the squared error function at first, but for larger errors, the function becomes constant. In this way, the sensitivity to outliers is limited. Biweight is just one example of a wider class of robust estimators developed in robust statistics. For the RL regulatory control problem solved here, any regulatory loop instability behaves somewhat like outliers: it produces bad data to be used for the advantage function estimation, which causes the advantage estimate to be inaccurate. An example solution in accordance with this disclosure uses a modified policy update that is based on the advantage function sign (positive, negative), ignoring its absolute value. A proposed method updates the policy taking the action which has the highest probability of bringing a positive advantage over the baseline policy, instead of the action which brings the largest positive magnitude in advantage. This may be implemented by maximizing the sign of the advantage instead of its value: anew = argmaxa sign Aπ(a, x). Or possibly a soft continuous version σ(Aπ) of the sign function may be used to avoid problems with discontinuity: σ(Aπ) = −1 for Aπ ≤ −A0; σ(Aπ) = +1 for Aπ ≥ +A0; and σ(Aπ) = Aπ/A0 for −A0 < Aπ < A0. This choice still secures the convergence to the optimal policy, although the convergence rate may be slower compared to the greedy approach in ideal conditions (without outliers). At the same time, this choice is less sensitive to outliers, i.e. to effects of the unknown process states. Because it does not use the value of the advantage function but just its sign, the illustrative non-greedy method effectively classifies the actions into two categories: the actions that make the return better versus the actions that make the return worse (at an environment state). Then any of the former actions may be adopted by the next policy iteration. The optimization may prefer the actions that improve the policy with high probability.
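The soft continuous version of the sign function just described can be written down directly. A0 is the transition half-width; its default value here is an arbitrary choice for the sketch.

```python
def soft_sign(advantage, a0=1.0):
    """Continuous version of the sign function sigma from the text:
    -1 below -A0, +1 above +A0, and linear in between. The default
    transition half-width a0 is an arbitrary choice for this sketch."""
    if advantage <= -a0:
        return -1.0
    if advantage >= a0:
        return 1.0
    return advantage / a0
```

A policy update may then maximize the average of soft_sign over the empirical advantage samples, instead of the raw average, keeping the robustness of the sign rule while avoiding the discontinuity at zero.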
This improves the robustness of the approach even further. As an example, the RL problem may be simplified by considering a finite horizon control. The agent starts with the environment at a state x and terminates at the next state y. At this state, the return is terminated and no future rewards are considered. The advantage function can be written without considering Vπ(x) explicitly as: Aπ(a, x) = r(a, x) + E{r(a, y) | a, x} − r(π(x), x) − E{r(π(y), y) | π(x), x}. This simplifies the problem: the advantage function estimate can be consistently approximated by simply averaging N samples instead of considering the statistical expectation. First, define the empirical costs-to-go: Va(x) = r(a, x) + r(π(y), y) and Vπ(x) = r(π(x), x) + r(π(y), y). Then the empirical advantage sample is the difference between those two costs, and averaging N such samples gives ANπ(a, x) = (1/N) Σk=1..N (Va(x) − Vπ(x)). The average is an empirical advantage datum obtained by testing an action N times and observing the costs. Consider that the actual advantage function at the current initial state x is Aπ(a, x) = 1 − 16a^2. From here, the optimal action is clearly zero. Suppose the empirical advantage converges to the actual advantage for N→∞ but the rate of convergence is much slower for suboptimal actions. This represents a mechanism similar to the regulatory control instability: it is much harder to determine the actual advantage or actual disadvantage for the suboptimal destabilizing controllers, because these will be very sensitive to the intermediate states as well as to the process nonlinearities and other complex effects. The purpose of this example is to visualize the difference between expected advantage maximization versus expected advantage sign maximization. This can be seen in FIGS. 4A through 4D, which show an example convergence of an expected advantage maximization approach (i.e.
greedy approach) over an increasing number of samples N. FIG. 4A visualizes an example in which N=5. FIG. 4B visualizes an example in which N=10. FIG. 4C visualizes an example in which N=100. FIG. 4D visualizes an example in which N=1000. It may be noted that the average advantage converges rapidly for actions close to zero, i.e. close to the optimal action. However, the data further from zero are significantly affected by the outliers. In the example shown, the maximum averaged advantage value is for action a=1, even after 1,000 samples. Even after 1,000 tests, the optimal action for a state cannot be reliably determined. FIGS. 5A through 5D are graphs showing an example convergence of an expected advantage sign maximization approach (i.e. non-greedy approach) over the same increasing number of samples. FIG. 5A visualizes an example in which N=5. FIG. 5B visualizes an example in which N=10. FIG. 5C visualizes an example in which N=100. FIG. 5D visualizes an example in which N=1000. As can be seen, the highest average advantage sign is perceived for actions close to zero even after as few as 10 samples. After 100 samples, the actions close to zero are clearly dominating, meaning that an approximately correct answer is obtained at least 100 times faster compared to the greedy approach. It may be noted that the average sign is close to zero for actions very far from zero. This is because such actions gave inconsistent results, with positive or negative advantage with almost equal probability. It should be noted that FIGS. 4A-4D and 5A-5D are based on the same data, i.e. the same outliers. It is the robustness of the non-greedy sign-based approach that makes the latter results better. It cannot be concluded that the proposed non-greedy approach converges 100 times faster in general, because this example is artificial in the sense that the outliers were emphasized. However, it is a valid conclusion that maximizing the average advantage “sign” is significantly more robust in the presence of outliers.
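A rough reproduction of this comparison in code, using the toy advantage A(a, x) = 1 − 16a^2 with an assumed heavy-tailed noise model (the actual noise model behind FIGS. 4A-4D and 5A-5D is not given in the text), looks like this:

```python
import random

random.seed(0)

def true_advantage(a):
    # Toy advantage function from the text; the optimum is at a = 0.
    return 1.0 - 16.0 * a * a

def sample_advantage(a):
    # Assumed noise model: actions far from the optimum are noisier and
    # occasionally produce large outliers, mimicking destabilizing tunings.
    value = true_advantage(a) + random.gauss(0.0, 0.1 + 10.0 * abs(a))
    if abs(a) > 0.5 and random.random() < 0.1:
        value += 100.0  # rare large outlier
    return value

actions = [i / 10.0 for i in range(-10, 11)]
N = 1000
data = {a: [sample_advantage(a) for _ in range(N)] for a in actions}

# Greedy choice: maximize the average advantage value.
greedy = max(actions, key=lambda a: sum(data[a]) / N)
# Non-greedy choice: maximize the average advantage sign.
signed = max(actions, key=lambda a: sum(1.0 if v > 0 else -1.0
                                        for v in data[a]) / N)
```

Under this assumed noise model the greedy pick is dragged toward an outlier-prone action far from zero, while the sign-based pick stays at the optimum, mirroring the qualitative behavior described for the figures.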
It may be noted how the simplified example differs from the typical regulatory control example. The regulatory control problem is not a finite horizon problem. The advantage function will be estimated by a least-squares fitting algorithm instead of by simple reward averages. The illustrative sign-based approach improves the RL robustness and generally provides a faster convergence rate. Implementing such an approach on embedded computer hardware commonly used in a regulatory control layer may be difficult, depending on the processing power available at the regulatory control layer. In some cases, some or all of the algorithm may be performed on more powerful hardware such as on a server or in the cloud. While RL could be implemented by sending the process data to the server every sampling period, including the current controlled variable, set-point and the manipulated variable (e.g. valve position), this can represent a significant amount of data, such as about 1 Mbyte per day per controller supposing single precision arithmetic and a 1 second sampling period. Accordingly, and in some cases, the advantage function estimator does not use the raw data, but instead uses the initial and the terminal states x, y, the action a used over that evaluation period and the reward r(a, x). If the states are approximated with the control error and the action represents PID gains, this would represent only about 33.75 kbyte per day per controller supposing single precision arithmetic and a 1 minute evaluation period, which represents about a 30× data reduction. In an example implementation, the regulatory control edge device runs multiple PI and PID controller algorithms or similar fixed structure controllers, each parameterized with a finite number of values. In the case of PID, the controller gain, integration and derivative time constants may represent the controller tuning vector of the control policy.
At any time, the edge device may hold a tuning vector currently representing the best-known values, which can be denoted a*. To achieve the autonomous optimization of the tuning vector, the edge device applies random perturbations to these currently best-known tuning values. The magnitude of the perturbations may be optimized, but more often a reasonably small randomized perturbation of ±10% may suffice. Such perturbations may be numbered by an index i. In terms of RL, each such perturbation represents an action of the agent. Each perturbation is applied for a sufficiently long evaluation period to minimize the effects of the intermediate states. At the evaluation period start, the initial state x_i of the process is recorded. This x_i involves only the observable states; the unknown states are ignored. In regulatory control, x_i is often the initial control error, sometimes the control error and its derivative. During the evaluation period, the edge device integrates the instantaneous rewards to evaluate the tuning performance associated with the period: r_i. At the evaluation period end, the process terminal state y_i is recorded and the three items are sent to the hardware running the RL algorithm along with the actual tuning α_i as a single record. Thus, the record #i may include the following items:
1. Tuning values α_i
2. Initial process state x_i
3. Aggregated loss r_i
4. Terminal process state y_i
The reward aggregation for a typical regulatory control problem will include the summation of terms related to the control error and actuator activity. Usually the following two terms may be used:
r_i(t+1) = r_i(t) − (y_cv(t) − y_sp(t))^2 − ρ·(u_mv(t) − u_mv(t−1))^2,
where y_cv(t) and y_sp(t) are the controlled variable and its set-point, respectively, and u_mv(t) is the manipulated variable (controller output) at time t. The non-negative ρ is a tuning parameter used to define the optimal speed of response.
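The per-period record and the reward aggregation above can be sketched as follows. This is a minimal illustration; the names (EvalRecord, aggregate_reward) are assumptions, not part of any real controller API, and the scalar control-error state is a simplification.

```python
from dataclasses import dataclass

@dataclass
class EvalRecord:
    tuning: list[float]   # perturbed tuning values alpha_i (e.g. PID gains)
    x0: float             # initial process state (control error) at period start
    reward: float         # aggregated loss r_i over the evaluation period
    y: float              # terminal process state at period end

def aggregate_reward(cv, sp, mv, rho=0.1):
    """Sum of -(cv - sp)^2 - rho*(mv(t) - mv(t-1))^2 over the evaluation period."""
    r = 0.0
    for t in range(1, len(cv)):
        r -= (cv[t] - sp[t]) ** 2                 # control error penalty
        r -= rho * (mv[t] - mv[t - 1]) ** 2       # actuator activity penalty
    return r
```

Only one such record, rather than every raw sample, is shipped per evaluation period, which is the source of the roughly 30× data reduction mentioned above.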
The hardware running the RL algorithm aggregates the records [x_i, y_i, α_i, r_i] and uses them to calculate the cost-to-go function V^0(x), which represents the expected return as a function of the process state averaged over the tuning values tested so far. Such a cost-to-go represents a baseline performance of the edge device controller when using the current tuning values a* including their random perturbations. If nothing were to change, this would be the performance of the controller. It can be described as "historical performance." The estimation of the V^0(x) cost-to-go function is a standard problem known in RL. A reasonable approach is Least-Squares Temporal Difference Learning. It is known that the V^0(x) function is a multivariable quadratic function in case a) the controlled process is linear and b) the reward function is a quadratic function of the process state and the controller output. Such approximations are often reasonable for PID regulatory controllers. If that is the case, the V^0(x) estimation algorithm resembles a quadratic polynomial regression. After having estimated V^0(x), the proposed algorithm calculates the advantage values achieved by the tested tuning values α_i at all initial process states x_i. Each test record yields one such advantage value:
A_i^0 = r_i + V^0(y_i) − V^0(x_i)
Positive A_i^0 values indicate evaluation periods during which the edge device performed above average, and vice versa. The algorithm uses such data to classify the actions (tuning vectors) into two classes: above average (or average at worst), A_i^0 ≥ 0, and below average, A_i^0 < 0. This classification is in fact a model of the A_i^0 sign. The tuning values which performed below average can now be rejected and eliminated from the data. In the next iteration, an improved cost-to-go V^1(x) can be calculated without accounting for the rejected evaluation periods.
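A minimal sketch of the cost-to-go fit and the advantage calculation, assuming a scalar state (the control error) and quadratic features per the linear-process/quadratic-reward remark above. The fit is a basic Least-Squares TD solve; a discount factor gamma < 1 is included for numerical well-posedness of this toy (the formula in the text corresponds to gamma = 1). Function names are illustrative.

```python
import numpy as np

def phi(x):
    return np.array([1.0, x, x * x])   # quadratic features of the scalar state

def lstd_quadratic(records, gamma=0.9):
    """LSTD: solve A w = b with A = sum phi(x)(phi(x) - gamma*phi(y))^T, b = sum phi(x)*r."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for x, r, y in records:            # records of (initial state, reward, terminal state)
        A += np.outer(phi(x), phi(x) - gamma * phi(y))
        b += phi(x) * r
    w = np.linalg.solve(A, b)
    return lambda x: float(phi(x) @ w)

def advantages(records, V, gamma=0.9):
    """A_i = r_i + gamma*V(y_i) - V(x_i); positive values mark above-average periods."""
    return [r + gamma * V(y) - V(x) for x, r, y in records]
```

Records with negative advantage would then be excluded from the next cost-to-go fit, as described above.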
Further improvement is achieved by classifying the perturbations into below versus above average with respect to V^1(x), using the refined advantage values A_i^1. This process finally converges to an A_i^n after n iterations, presumably approximating the advantage function of the optimal policy, i.e. A_i^n ≥ 0. It can be noted that while the advantage values are calculated even for eliminated periods at every iteration, the elimination concerns only the cost-to-go calculations. The optimal controller tuning is finally defined as the action classified as being not below average with the highest possible probability:
a*(x) = argmax_a sign A^n(a, x)
This method would produce a controller tuning which depends on the process state. However, simple controllers like PID are more frequently described by tuning values which are constant, independent of the process state. This can be overcome by eliminating the state x, e.g. by averaging over it:
a* = argmax_a sign (1/N) Σ_{x_i} A^n(a, x_i).
In this way, the tuning vector which performs optimally on average is preferred instead of a state-dependent optimal tuning. Sometimes, the tuning dependency on the state may be desirable. Finally, the above calculated a*, representing an improved controller tuning vector, is sent back to the edge device. There, it replaces the current values and the edge device starts applying it, including the randomized perturbations. This process may be repeated going forward. In this way, the controller tuning is permanently adapting to the changing environment. Advantage function (or other value function, e.g. Q-function) based reinforcement learning is a standard machine learning method. All standard RL algorithms assume that complete state observation is available, and that the state transition depends on the current state and the action (Markovian assumption). A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP) that incorporates an incomplete state observation model.
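The iterative refinement loop can be sketched generically. This is an illustrative skeleton only: `fit_v` and `advantage` stand in for the cost-to-go fit and the advantage formula above, and the convergence test (a stable classification) is an assumption. Note that advantages are evaluated for all records at every iteration, while only surviving records feed the next fit, matching the text.

```python
def refine(records, fit_v, advantage, max_iter=10):
    """records: list of (x, a, r, y); fit_v: records -> V; advantage: (record, V) -> float."""
    surviving = list(records)
    for _ in range(max_iter):
        V = fit_v(surviving)                           # cost-to-go from surviving periods only
        adv = [advantage(rec, V) for rec in records]   # but advantages for EVERY record
        keep = [rec for rec, a in zip(records, adv) if a >= 0]
        if keep == surviving:                          # classification stable: converged
            break
        surviving = keep                               # drop below-average periods from next fit
    return surviving, V
```

With a toy scalar baseline (V = mean reward of surviving records, advantage = reward minus baseline), the loop quickly isolates the above-average tunings.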
It turns out that a POMDP can be treated as a standard MDP by using the belief state in place of the unknown state. The problem is that RL formulated for the belief state is complicated even for simple problems. For this reason, specific algorithms and approximations have been developed for POMDP learning. The present disclosure can be viewed as a simple heuristic solution to this complicated problem. The disclosed approach does not address the unknown states problem directly. Rather, it proposes to extend the evaluation period, i.e. the time an action is applied. Over an extended period, the unknown initial condition may typically become negligible. However, this works only with stable controllers. Unstable controllers that run for an extended time amplify the unknown initial condition. The disclosure addresses this by modifying the approach so that the likelihood that the new action is better (has a positive advantage) is maximized, as opposed to the standard maximization of the advantage value. This makes the method more robust. Unstable controllers do not yield consistent advantage results. The advantage values observed by running unstable controllers will have a large variance. However, their advantage values will not be consistently positive. FIG. 6 is a flow diagram showing an illustrative method 100 of tuning a controller (such as the controllers 12, the edge controller 38 and the controller 50) that is configured to control at least part of a process. The method 100 includes performing several steps during each of a plurality of iterations, as indicated at block 102. During each iteration, a policy of the controller is updated, as indicated at block 104. In some cases, the controller is a regulatory controller and the updated policy may include tuning parameters. The tuning parameters may, for example, include one or more of a Proportional (P) gain, an Integral (I) gain and a Derivative (D) gain. These are just examples.
In some cases, the controller may be configured to control an HVAC actuator such as but not limited to a water valve or an air damper. The controller may be configured to control at least part of an industrial process such as but not limited to a refinery process. The at least part of the process is controlled using the controller with the updated policy, as indicated at block 106. The updated policy is associated with a performance level of the controller in controlling the at least part of the process, as indicated at block 108. As indicated at block 110, and for each iteration, the updated policy is determined using the associations generated during one or more previous iterations between the policies and the corresponding performance levels of the controller in controlling the at least part of the process, such that the updated policy is optimized to have a highest likelihood of producing a positive change in the performance level of the controller in controlling the at least part of the process rather than optimized to have a highest likelihood of producing a largest positive magnitude of change in the performance level of the controller in controlling the at least part of the process relative to the previous iteration. In some cases, and for each iteration, the updated policy may be determined using reinforcement learning based on an advantage function, wherein the updated policy is based on a sign of the advantage function and not an absolute value of the advantage function. During each of the plurality of iterations, the at least part of the process is controlled using the controller with the updated policy for at least a period of time, wherein the period of time is sufficient to allow a measurable response to control actions taken by the controller in accordance with the updated policy.
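The selection rule of block 110 can be made concrete with a small sketch (illustrative names, not from the patent): among candidate policies, pick the one whose observed advantages were positive most often, i.e. the highest likelihood of improvement, rather than the largest mean improvement.

```python
def select_policy(history):
    """history: dict mapping policy id -> list of observed advantage values."""
    def positive_fraction(advs):
        return sum(1 for a in advs if a > 0) / len(advs)
    # Highest likelihood of a positive change, NOT highest mean change.
    return max(history, key=lambda p: positive_fraction(history[p]))
```

In the usage below, policy "A" has the larger mean advantage (1.75 vs 0.125, inflated by one outlier), yet "B" is selected because it improved performance in 3 of 4 trials.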
In some cases, the controller is an edge controller operatively coupled to a remote server, and the updated policy is determined by the remote server and communicated down to the controller before the controller controls the at least part of the process using the updated policy. FIG. 7 is a flow diagram showing an illustrative method 120 of tuning a regulatory controller that is configured to regulate at least part of a process. The method 120 includes performing several steps during each of a plurality of iterations, as indicated at block 122. During each iteration, one or more tuning parameters of the regulatory controller are updated, as indicated at block 124. At least part of the process is regulated using the one or more updated tuning parameters, as indicated at block 126. A performance of how well the regulatory controller controlled the at least part of the process is monitored, as indicated at block 128. For each iteration, and as indicated at block 130, the one or more updated tuning parameters are determined based at least in part on the performance of how well the regulatory controller performed in controlling the at least part of the process during one or more previous iterations, such that the updated one or more tuning parameters are optimized to have a highest likelihood of producing a positive change in the performance of how well the regulatory controller controlled the at least part of the process rather than optimized to have a highest likelihood of producing a largest positive magnitude of change in the performance of how well the regulatory controller controlled the at least part of the process relative to the immediately previous iteration. In some instances, and for each iteration, the updated one or more tuning parameters may be determined using reinforcement learning based on an advantage function, wherein the updated one or more tuning parameters are based on a sign of the advantage function and not an absolute value of the advantage function.
Controlling the at least part of the process using the regulatory controller with the updated one or more tuning parameters may be performed for at least a period of time, wherein the period of time is sufficient to allow a measurable response to control actions taken by the regulatory controller in accordance with the updated one or more tuning parameters. The one or more tuning parameters may include one or more of a Proportional (P) gain, an Integral (I) gain and a Derivative (D) gain. The regulatory controller may be configured to control an HVAC actuator of an HVAC system. In some cases, the regulatory controller may be an edge controller operatively coupled to a remote server, and wherein the updated one or more tuning parameters are determined by the remote server and communicated down to the regulatory controller before the regulatory controller controls the at least part of the process using the updated one or more tuning parameters. Those skilled in the art will recognize that the present disclosure may be manifested in a variety of forms other than the specific embodiments described and contemplated herein. Accordingly, departure in form and detail may be made without departing from the scope and spirit of the present disclosure as described in the appended claims. | 45,538 |
11860590 | DETAILED DESCRIPTION OF THE EMBODIMENTS The present invention will be described in detail with reference to the flow diagram of the present invention. The model-free optimization method of process parameters of injection molding provided by the present invention uses an iterative gradient estimation method to calculate the gradient direction of the current point as a parameter adjustment direction, thus optimizing the parameters constantly; during the parameter optimization process, the adaptive moment estimation algorithm is used to achieve the adaptive adjustment of each parameter step, and the parameter obtained after each adjustment is subjected to a test to obtain the optimization target value corresponding to that parameter; this iterative optimization continues until the target is met. The present invention will be further described in detail in combination with the examples below. As shown in FIG. 1, a model-free optimization method of process parameters of injection molding includes the steps of: (1) Method initiation: determining an optimized quality index Q (e.g., product weight, warping, and the like), a quality index target value Q_target and a permissible error δ of the optimization objective; and choosing parameters X = (x_1, x_2, …, x_n)^T to be optimized and their feasible ranges, as shown in Formula (1):
min_X J(X) = |Q(X) − Q_target|
s.t. L_i ≤ x_i ≤ U_i, i = 1, 2, …, n
X = (x_1, x_2, …, x_n)^T    (1)
J(X) denotes the difference between the current quality index value and the target value. The quality index Q(X) can be regarded as an implicit function of the process parameters X, and X consists of n parameters x_1, x_2, …, x_n, such as packing pressure and melt temperature; U_i and L_i denote the upper and lower bound values of parameter x_i.
Different parameters can differ by orders of magnitude; for example, there is a difference of two orders of magnitude between melt temperature and injection time. Usually, parameters need to be normalized by Formula (2) during the parameter optimization process, and the normalization ensures that each parameter can change to the same extent:
x_i^norm = (x_i − L_i) / (U_i − L_i)    (2)
(2) Initial parameter selection and initial gradient calculation. Firstly, an initial point X^0 is selected within the feasible range, and each parameter is perturbed in turn; that is, x_i^j = x_i^0 + δ_i for i = j, where i denotes the i-th parameter, j denotes the j-th set of data, x_i^j denotes the i-th parameter in the j-th set of parameter samples, x_i^0 is the i-th parameter in the current parameter sample, and δ_i is the perturbing quantity; the perturbation size and direction are not strictly limited. This yields n+1 sets of process parameters and the corresponding quality values after perturbation, thereby obtaining the corresponding optimization target values J (the difference between the corresponding quality value and the quality target value):
X^0 = [x_1^0, x_2^0, …, x_n^0]^T, J(X^0) = J_0
X^1 = [x_1^1, x_2^1, …, x_n^1]^T = [x_1^0 + δ_1, x_2^0, …, x_n^0]^T, J(X^1) = J_1
⋮
X^n = [x_1^n, x_2^n, …, x_n^n]^T = [x_1^0, x_2^0, …, x_n^0 + δ_n]^T, J(X^n) = J_n
J(X) is expanded according to its Taylor series at X^0:
J(X^1) = J(X^0) + (x_1^1 − x_1^0) ∂J/∂x_1 + … + (x_n^1 − x_n^0) ∂J/∂x_n + R_1(X)    (3)
⋮
J(X^n) = J(X^0) + (x_1^n − x_1^0) ∂J/∂x_1 + … + (x_n^n − x_n^0) ∂J/∂x_n + R_n(X)    (4)
A matrix form is written below:
[J(X^1) − J(X^0)]       [x_1^1 − x_1^0  …  x_n^1 − x_n^0]   [∂J/∂x_1]
[       ⋮       ]   =   [       ⋮      ⋱        ⋮       ] · [   ⋮    ]    (5)
[J(X^n) − J(X^0)]       [x_1^n − x_1^0  …  x_n^n − x_n^0]   [∂J/∂x_n]
The gradient value at the initial point X^0 is calculated from the gradient computation matrix (6):
∇J(X^0) = [∂J/∂x_1, …, ∂J/∂x_n]^T = [x_i^j − x_i^0]^(−1) · [J(X^1) − J(X^0), …, J(X^n) − J(X^0)]^T    (6)
(3) Formula (6) is used to calculate the reverse direction of the gradient; that is, −∇J(X^0) is the optimization direction of the parameters. A next iteration point is produced by Formula (7), and the new process parameters so obtained will be closer to the preset target value:
X_update^0 = X^0 − α_0 ∇J(X^0)    (7)
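The perturbation-based gradient estimate of Formulas (3)-(6) can be sketched as below. This is a minimal illustration assuming a black-box quality function `measure` standing in for the physical injection trial; with one-parameter-at-a-time perturbations the difference matrix is diagonal, so the solve reduces to forward finite differences.

```python
import numpy as np

def estimate_gradient(x0, deltas, measure):
    """Perturb each parameter once, then solve the difference system (5)-(6) for grad J."""
    n = len(x0)
    J0 = measure(x0)
    D = np.zeros((n, n))            # row j: perturbed point X^j minus base point X^0
    dJ = np.zeros(n)                # entries J(X^j) - J(X^0)
    for j in range(n):
        xj = x0.copy()
        xj[j] += deltas[j]
        D[j] = xj - x0
        dJ[j] = measure(xj) - J0
    return np.linalg.solve(D, dJ)   # Formula (6): D^(-1) * dJ
```

Each call to `measure` corresponds to one molding trial, so the method needs only n+1 trials per gradient estimate.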
α_0 is the step size of the current parameter adjustment. After obtaining the updated parameter X_update^0, the updated parameter needs to be de-normalized (as shown in Formula (8)), so that the parameter returns to a physical parameter value, which makes it possible for technicians to obtain the quality index value Q(X_update^0) at X_update^0 through experimental operation:
X_denorm = X_norm × (U − L) + L    (8)
(4) Judging whether Q(X_denorm) satisfies the requirement: if |Q(X_denorm) − Q_target| < δ, the obtained optimization objective has satisfied the termination conditions, and the optimization process stops; at this time, X_denorm is the optimal parameter. Otherwise, the termination conditions are not satisfied; continue with step (5). (5) Updating the gradient computation matrix. The newly obtained (X_update^0, J(X_update^0)) is used to substitute the point whose quality value is the worst in the current gradient computation matrix (say (X^0, J(X^0)), without loss of generality), and J(X) is expanded according to its Taylor series at X_update^0 to obtain an updated gradient computation matrix, at this time with X^0 = X_update^0:
∇J(X_update^0) = [∂J/∂x_1, …, ∂J/∂x_n]^T = [x_i^j − x_i^0]^(−1) · [J(X^1) − J(X^0), …, J(X^n) − J(X^0)]^T    (9)
(6) Adaptive step adjustment. After obtaining the gradient information, the existing optimization method is usually to update parameters by stochastic gradient descent. In such a case, the step size α_0 is set as a constant, and the parameter variation ΔX = α_0 · ∇J(X^0) depends only on the gradient of the current single point, but overlooks the accumulation of history gradient information; the accumulation of history gradient information may provide a useful suggestion for the following optimization direction and step, thus allowing the step size to be adjusted more adaptively in the subsequent optimization steps.
Therefore, the adaptive moment estimation algorithm is used in the present invention; after the adaptive adjustment, the optimization step of each parameter gives X_update^0 = X^0 − ΔX = X^0 − η·v̂/(√ŝ + δ), and the method has the following specific steps:
(6.1) Calculation of the first-order exponential moving average of history gradients:
v_0 = 0
v_1 = β_1·v_0 + (1 − β_1)∇J(X^1)
⋮
v_m = β_1·v_{m−1} + (1 − β_1)∇J(X^m)    (10)
β_1 is the first-order exponential decay rate coefficient, generally set to 0.9. Here m is the current update step, and the formula corresponding to the current update step is chosen. The first-order exponential moving average is calculated to accelerate the optimization procedure. For example, if the gradient of a certain parameter component in the preceding several optimization steps is a positive value, the first-order exponential moving average v will record and accumulate the trend, and the step of that parameter is increased in the subsequent optimization step, thus achieving acceleration. The function of the first-order exponential moving average v is similar to the concept of momentum in physics; therefore, it is called the momentum algorithm.
(6.2) Calculation of the second-order exponential moving average of history gradients:
s_0 = 0
s_1 = β_2·s_0 + (1 − β_2)∇J(X^1)⊙∇J(X^1)
⋮
s_m = β_2·s_{m−1} + (1 − β_2)∇J(X^m)⊙∇J(X^m)    (11)
β_2 is the second-order exponential decay rate coefficient, generally set to 0.999; and ⊙ denotes element-wise multiplication. The optimization objective is sensitive to only a portion of the parameters, and not to changes in the others. For an optimization problem with multiple parameters, different optimization step sizes need to be configured for different parameters, to match their influence on the optimization objective. The second-order exponential moving average s evaluates the influence of each parameter by accumulating the square of the gradient, thus improving the adaptability of the optimization method.
If the square of the gradient of the target function on a certain dimension is consistently small, the descent step on that dimension will increase to accelerate convergence; conversely, for parameters with larger fluctuation, the optimization step will decrease, thus reducing fluctuation.
(6.3) Error correction of the exponential moving averages. The initial values of v and s are set to 0, which results in a bias in the exponential moving averages; according to Formula (12), this kind of error caused by the initial value is corrected. During the m-th update, the correction of the first-order exponential moving average of the history gradients is v̂, and the correction of the second-order exponential moving average of the history gradients is ŝ:
v̂ = v_{m−1} / (1 − (β_1)^{m−1})
ŝ = s_{m−1} / (1 − (β_2)^{m−1})    (12)
where m is the current update step; v_{m−1} is the first-order exponential moving average of the history gradients at the m-th update; s_{m−1} is the second-order exponential moving average of the history gradients at the m-th update; v_0 = 0; s_0 = 0; and ∇J(X^{m−1}) is the gradient value obtained after the (m−1)-th update.
(6.4) Obtaining the update formula (13) for the current parameter sample, and returning to step (4):
X^m = X^{m−1} − η·v̂/(√ŝ + δ)    (13)
η is a preset step size coefficient, and δ is a very small positive number (e.g., 10^−8) to avoid a zero denominator. The present invention will be further described in detail in combination with the examples below. EXAMPLE Hereafter, optimization of process parameters of an optical plastic lens is set as an example to describe the specific implementation of the present invention in the process parameter optimization of plastic injection molding. The injection molding machine used in the example is a Zhafir VE400 from China's HAITIAN GROUP, and the material used is polymethyl methacrylate (PMMA) from Sumipex.
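Steps (6.1)-(6.4) amount to an Adam-style update of the normalized parameter vector, which can be sketched compactly. Coefficients follow the text (β_1 = 0.9, β_2 = 0.999, δ = 10^−8); the gradient argument would come from the perturbation estimate of Formula (6). This is an illustrative sketch, not the patented implementation.

```python
import numpy as np

def adam_step(x, grad, state, m, eta=0.01, beta1=0.9, beta2=0.999, delta=1e-8):
    """One adaptive-moment update; state carries (v, s), m is the 1-based step count."""
    v, s = state
    v = beta1 * v + (1 - beta1) * grad                  # Formula (10): first moment
    s = beta2 * s + (1 - beta2) * grad * grad           # Formula (11): second moment
    v_hat = v / (1 - beta1 ** m)                        # Formula (12): bias correction
    s_hat = s / (1 - beta2 ** m)
    x_new = x - eta * v_hat / (np.sqrt(s_hat) + delta)  # Formula (13)
    return x_new, (v, s)
```

With a consistent gradient direction, v̂/√ŝ stays near ±1, so each parameter moves by roughly η per step regardless of gradient scale, which is the per-parameter step adaptation described above.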
Firstly, step (1) was performed for the initialization of the optimization method. Product weight served as the optimization objective, and the standard weight of the product was 8.16 g; that is, the target value of the product weight was 8.16 g. Process parameters to be optimized and their ranges were selected, as shown in Table 1.
TABLE 1. Process parameters to be optimized and their ranges
Parameter         Symbol   Range     Unit
Injection time    x1       0.4-2.0   s
Dwell time        x2       0.5-4.0   s
Dwell pressure    x3       30-60     MPa
Melt temperature  x4       200-230   ° C.
Die temperature   x5       20-50     ° C.
Afterwards, step (2) was performed to choose an initial parameter randomly within the parameter ranges. The initial parameter selected in the example was X^0 = [x_1^0, x_2^0, …, x_5^0]^T = [1.6 s, 3 s, 50 MPa, 200° C., 20° C.]^T, and X^0 was then normalized to X_norm^0 = [0.75, 0.714, 0.667, 0, 0]^T; the product weight corresponding to the initial parameter was 25.3371 g. Each parameter component was perturbed to obtain five further sets of process parameters: X_norm^1 = [0.812, 0.714, 0.667, 0, 0]^T, X_norm^2 = [0.75, 0.742, 0.667, 0, 0]^T, X_norm^3 = [0.75, 0.714, 0.733, 0, 0]^T, X_norm^4 = [0.75, 0.714, 0.667, 0.0667, 0]^T and X_norm^5 = [0.75, 0.714, 0.667, 0, 0.0667]^T. Each was subjected to a test to obtain the corresponding product weight (the product weight values corresponding to points 1-5 on the x-coordinate in FIG. 2) and to calculate the gradient of the initial point according to Formula (6). An optimization step size α = 0.01 was set in step (3) to obtain a new process parameter X_norm^new = [0.749, 0.70, 0.616, 0.011, 0.0047] after one optimization step. Step (4) was used to judge whether the new process parameter X_norm^new satisfied the termination conditions, and the optimization then proceeded to step (5): the new process parameter X_norm^new and its corresponding product weight replaced X_norm^0, and the gradient computation matrix was then updated to obtain the gradient information at X_norm^new.
Step (6) was executed to adaptively adjust the optimization step of each parameter component, returning to step (3). This cycle was repeated until the obtained product weight satisfied the desired error range. FIG. 2 shows the variation trend of experiment number versus product weight in the optimization procedure of the example; it can be seen that after 10 iterations of optimization, the product weight reaches the preset target. Moreover, for the optimization method using adaptive step adjustment, the number of tests required is obviously less than that of the optimization method without adaptive step adjustment (see FIG. 3, corresponding to "a gradient descent method"; in the optimization method without adaptive step adjustment, the step size is set to the constant value α = 0.01). Experimental results indicate that the present invention can achieve the target requirements with a small number of tests through rapid optimization of process parameters. The optimization method of the present invention can be used not only for the process optimization of plastic injection molding, but also for injection molding technology using other injection materials, such as rubber, magnetic powder, metal and the like, or for process optimization based on similar principles.
11860591 | DETAILED DESCRIPTION OF EMBODIMENTS Implementations described herein provide for process recipe ("recipe") creation and matching using machine learning feature models. Manufacturing processes may be disrupted due to a variety of factors such as wear and tear on equipment, process drifts, inconsistent operation, maintenance events and product changes. Process disruptions can result in lots that are out-of-specification or off target. For example, to address pad wear in a chemical mechanical polish process that results in wafer thickness variances, process engineers can make recipe adjustments to ensure proper process targeting. Other variances due to tool age, if not corrected, may also lead to scrapped wafers. Advanced process control (APC) tools, such as Run-to-Run (R2R) controllers, can be used to monitor and reduce process variances. An R2R controller, such as the Applied SmartFactory® Run-to-Run Solution provided by Applied Materials®, can improve process capability (Cpk) and optimize recipe parameters from batch-to-batch (B2B), lot-to-lot (L2L) and/or wafer-to-wafer (W2W) based on knowledge of material context, feedback from process models, incoming variations, metrology data, etc. R2R controllers can be used to improve processes performed during front-end semiconductor wafer manufacturing, semiconductor assembly and testing, display manufacturing, etc. With R2R, manufacturers can make automatic adjustments to processes to maintain a required target value for specific properties, such as wafer thickness and critical dimension. Manufacturers can also use metrology data from each process operation to adjust process recipes on an R2R basis and define customized strategies, such as rework, to be performed in an automated fashion. The solution is designed to support high-mix and low-volume manufacturing operations, which have always posed a challenge in the industry.
Conventionally, a recipe for a processing method can be embodied as a table of recipe settings including a set of inputs or recipe parameters ("parameters") and processes that are manually entered by a user (e.g., process engineer) to achieve a set of target properties (e.g., on-wafer characteristics), also referred to as a set of goals. For example, the inputs can correspond to rows of the table and the processes can correspond to the columns of the table. However, such manual population of parameters and/or processes can lead to recipes that are not optimized in view of multiple desired characteristics. For example, complex interrelationships may exist between desired characteristics, in which modifying the parameters and/or processes of the recipe to achieve a desired characteristic can have potentially unintended consequences on one or more other desired characteristics. Accordingly, by not taking all of these complex interrelationships into account, a sub-optimal recipe can be created. Aspects of the present disclosure address the above noted and other deficiencies by providing for recipe creation and matching using feature models (e.g., machine learning feature models). The recipe creation and matching described herein can be performed using a set of feature models. A feature model can be a type of supervised regression model. One example of a feature model is a multiple-input-single-output (MISO) feature model of the form Y = ƒ(X), where the input X includes multiple parameters organized as a vector and the output Y is a single scalar output. Another example of a feature model is a multiple-input-multiple-output (MIMO) feature model of the form Y = ƒ(X), where the input X includes multiple parameters organized as a vector and the output Y includes multiple outputs organized as a vector.
One type of MIMO feature model is a spatial-output MIMO that further defines spatial coordinates for each output. Feature models can be implemented using any suitable regression algorithm. Examples of regression algorithms include linear regression, Gaussian process regression, partial least squares, gradient boosted trees, random forest, fully connected neural networks, etc. In the context of recipe creation described herein, the set of feature models can include a number of individual feature models each corresponding to a desired feature (e.g., on-wafer feature). For example, each feature model can be a spatial MIMO model in which the input vector includes input recipe parameters, the output vector includes output measurements of features at different locations on the wafer, and the spatial location (e.g., X-Y coordinate) for each output measurement can be included. Illustratively, a user can input a desired requirement for each feature. Using numerical optimization routines based on the feature models, the continuous process space can be searched to obtain a set of optimal recipes in view of the desired requirements. Moreover, the recipe can be used by a matching technique to generate a set of offsets to correct for mismatch between the expected or desired process behavior indicated by the recipe and a current or predicted behavior. Advantages of the present disclosure include, but are not limited to, increased speed-to-value, quicker deployment time, minimized risk during development, robustness to incoming noise to improve recipe stability, improved process capability, minimized scrapped wafers and send-ahead wafers, and reduction or elimination of manual tuning. Accordingly, aspects of the present disclosure can improve device yield and reduce costs. FIG. 1 depicts an illustrative computer system architecture 100, according to aspects of the present disclosure.
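A spatial MIMO feature model of the kind described above can be sketched minimally as follows. This is a hedged illustration: it uses ordinary least squares as the regression algorithm (one of the listed options; real implementations may instead use Gaussian process regression or gradient boosted trees), and the class name and wafer coordinates are assumptions.

```python
import numpy as np

class LinearMimoFeatureModel:
    """Y = f(X): recipe-parameter vector in, one feature measurement per wafer location out."""

    def __init__(self, coords):
        self.coords = coords                         # (x, y) wafer location per output column

    def fit(self, X, Y):
        # X: (n_runs, n_params) recipe parameters; Y: (n_runs, n_locations) measurements
        Xa = np.hstack([X, np.ones((len(X), 1))])    # append intercept column
        self.W, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
        return self

    def predict(self, x):
        return np.append(x, 1.0) @ self.W            # predicted measurement at each location
```

A numerical optimizer could then search the recipe-parameter space by repeatedly calling `predict` and scoring the predicted wafer profile against the user's per-feature requirements.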
Computer system architecture100can include a client device120, a predictive server112(e.g., to generate predictive data, to provide model adaptation, to use a knowledge base, etc.), and a data store140. The predictive server112can be part of a predictive system110. The predictive system110can further include server machines170and180. In some embodiments, computer system architecture100can be included as part of a manufacturing system for processing substrates or wafers. In such embodiments, computer system architecture100can include manufacturing equipment124, metrology equipment128and/or testing equipment (not shown). Manufacturing equipment124can produce products, such as electronic devices, following a recipe or performing runs over a period of time. Manufacturing equipment124can include a process chamber, such as tool/chamber220described with respect toFIG.2. Manufacturing equipment124can perform a process for a wafer (e.g., a substrate, semiconductor, etc.) at the process chamber. Examples of wafer processes include a deposition process to deposit a film on a surface of the wafer, an etch process to form a pattern on the surface of the wafer, a wafer heating process to heat a wafer to a target temperature prior to a deposition process or an etch process, a wafer cooling process to cool a wafer to a target temperature following a deposition process and/or an etch process, etc. Manufacturing equipment124can perform each process according to a process recipe. A process recipe defines a particular set of operations to be performed for the wafer during the process and can include one or more settings associated with each operation. For example, a wafer heating process can include a positional setting for the wafer disposed within the process chamber, a temperature setting for the process chamber, a pressure setting for the process chamber, etc.
In some embodiments, manufacturing equipment124can include one or more sensors126configured to generate process sensor data for an environment within or outside of a process chamber and/or a wafer disposed within the process chamber. Sensor data can include a value of one or more of temperatures (e.g., heater temperature), spacing (SP), pressure, high frequency radio frequency (HFRF), voltage of electrostatic chuck (ESC), electrical current, flow, power, voltage, etc. Sensor data can be associated with or indicative of manufacturing parameters, such as hardware parameters (e.g., settings or components, such as the size and type, of the manufacturing equipment124) or process parameters of the manufacturing equipment124. The sensor data can be provided while the manufacturing equipment124is performing manufacturing processes (e.g., equipment readings when processing products). The sensor data can be different for each wafer processed at manufacturing equipment124. Metrology equipment128can provide metrology data associated with wafers processed by manufacturing equipment124. In some embodiments, metrology data can include data generated for a film on a surface of a wafer before, during, or after a deposition and/or an etch process is performed for that wafer. For example, metrology data can include a value of film property data (e.g., wafer spatial film properties), dimensions (e.g., thickness, height, etc.), dielectric constant, dopant concentration, density, defects, etc. generated for a wafer after completion of a wafer process. In some embodiments, the metrology data can further include data associated with a portion of a wafer that is not subject to a deposition and/or an etch process. For example, a film can be deposited on a top surface of a wafer prior to an etch process that is to etch away a portion of the film and create a target wafer surface pattern.
A wafer heating process can be initiated for the wafer to heat the wafer to a target temperature prior to initiation of the etch process. The client device120can include computing devices such as personal computers (PCs), laptops, mobile phones, smart phones, tablet computers, netbook computers, network connected televisions (“smart TVs”), network-connected media players (e.g., Blu-ray players), set-top boxes, over-the-top (OTT) streaming devices, operator boxes, etc. In some embodiments, computer system architecture100can receive data associated with a process recipe for a process to be performed for a wafer at manufacturing equipment124from client device120. For example, client device120can display a graphical user interface (GUI), where the GUI enables a user (e.g., an engineer, an operator, a developer, etc.) to provide, as input, data associated with one or more process recipe settings for a wafer heating process and/or a wafer cooling process to be performed for a wafer at a process chamber of manufacturing equipment124. Data store140can be a memory (e.g., random access memory), a drive (e.g., a hard drive, a flash drive), a database system, or another type of component or device capable of storing data. Data store140can include multiple storage components (e.g., multiple drives or multiple databases) that can span multiple computing devices (e.g., multiple server computers). In some embodiments, data store140can store sensor data, metrology data, predictive data, and/or contextual data. Sensor data can include historical sensor data (e.g., sensor data generated by sensors126for a previous wafer processed at manufacturing equipment124) and/or current sensor data (e.g., sensor data generated by sensors126for a current wafer being processed at manufacturing equipment124). In some embodiments, current sensor data can be data for which predictive data is generated.
Sensor data can include, but is not limited to, data indicating a temperature of one or more components of manufacturing equipment124(e.g., a temperature of a lid and/or a window of a process chamber, a temperature of a heating element embedded within a wafer support assembly of the process chamber, etc.), data indicating a temperature of a wafer during a wafer process, data indicating a pressure at one or more portions of an environment within manufacturing equipment124(e.g., a pressure of the environment between a lid and/or window of a process chamber and a surface of a wafer, a pressure of the environment between a surface of a wafer and a surface of a wafer support assembly, etc.), data indicating a concentration or flow rate of one or more gases flowed into manufacturing equipment124before, during and/or after a wafer process, and so forth. Data store140can store metrology data, in some embodiments. Metrology data can include historical metrology data (e.g., metrology data generated by metrology equipment128for a previous wafer processed at manufacturing equipment124). Contextual data refers to data associated with a wafer and/or a wafer process performed at manufacturing equipment124. In some embodiments, contextual data can include data associated with the wafer (e.g., an identifier for a wafer, a type of the wafer, etc.). Contextual data can additionally or alternatively include data associated with one or more components of manufacturing equipment124used to process the wafer. For example, contextual data can include an identifier for the one or more components of manufacturing equipment124, one or more physical properties associated with the one or more components (e.g., an emissivity of the one or more components, a molecular weight of the one or more components, etc.), an identifier associated with an operator of manufacturing equipment124, a type of the process performed at manufacturing equipment124, etc.
In additional or alternative embodiments, contextual data can include data associated with a process recipe performed for the wafer at manufacturing equipment124. For example, contextual data can include an identifier of a name for the process recipe, an operation number for an operation of the process recipe, or settings for one or more operations of the process recipe (referred to herein as a process recipe setting). A process recipe setting can include a positional setting for the wafer or one or more components of manufacturing equipment124, such as a setting for a position of a wafer disposed within a process chamber relative to a lid and/or a window of the process chamber, a position of the wafer relative to a wafer support assembly of the process chamber, a position of the wafer support assembly relative to the lid and/or the window of the process chamber, a velocity of a movement of the wafer support assembly (with or without a wafer) toward or away from the lid and/or the window of the process chamber, a velocity of a movement of the wafer toward or away from a surface of the wafer support assembly, etc. A process recipe setting can also include a temperature and/or pressure setting for one or more components of manufacturing equipment124and/or the wafer disposed within manufacturing equipment124. A process recipe setting can also include a gas flow setting for the wafer process, including a setting indicating a target composition and/or concentration of a gas flowed into a process chamber of manufacturing equipment124, a flow rate of the gas flowed into the process chamber, a temperature of the gas flowed into the process chamber, etc. 
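The recipe settings enumerated above (positional, temperature, pressure, and gas-flow settings per operation) can be grouped into a simple record per operation. The field names and values below are illustrative assumptions, not the disclosure's schema.

```python
# A minimal sketch of per-operation recipe settings; field names are
# hypothetical illustrations of the setting categories described above.

from dataclasses import dataclass

@dataclass
class RecipeSetting:
    operation: str
    wafer_position_mm: float      # wafer position relative to lid/window
    chamber_temperature_c: float  # temperature setting
    chamber_pressure_torr: float  # pressure setting
    gas: str                      # gas composition
    gas_flow_sccm: float          # gas flow rate

heat_step = RecipeSetting(
    operation="wafer_heat",
    wafer_position_mm=12.5,
    chamber_temperature_c=450.0,
    chamber_pressure_torr=2.0,
    gas="N2",
    gas_flow_sccm=500.0,
)
```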
Contextual data can include historical contextual data (e.g., contextual data for a prior wafer process performed for a prior wafer at manufacturing equipment124) and/or current contextual data (e.g., contextual data for a wafer process currently performed or to be performed for a current wafer at manufacturing equipment124). Current contextual data can be data for which predictive data is generated, in accordance with embodiments described herein. Historical contextual data and/or current contextual data can be provided to system100via a GUI of client device120, in accordance with previously described embodiments. In some embodiments, data store140can be configured to store data that is not accessible to a user of the manufacturing system. For example, testing data, contextual data, etc. for a wafer support assembly is not accessible to a user (e.g., an operator) of the manufacturing system and/or testing system. In some embodiments, all data stored at data store140can be inaccessible by the user of the system. In other or similar embodiments, a portion of data stored at data store140can be inaccessible by the user while another portion of data stored at data store140can be accessible by the user. In some embodiments, one or more portions of data stored at data store140can be encrypted using an encryption mechanism that is unknown to the user (e.g., data is encrypted using a private encryption key). In other or similar embodiments, data store140can include multiple data stores where data that is inaccessible to the user is stored in one or more first data stores and data that is accessible to the user is stored in one or more second data stores. In some embodiments, predictive system110can include a server machine170and/or a server machine180. 
Server machine170includes a training set generator172that is capable of generating training data sets (e.g., a set of data inputs and a set of target outputs) to train, validate, and/or test a machine learning model190(e.g., feature model). For example, training set generator172can generate training sets to train, validate, and/or test the machine learning model190to predict process recipe settings for a process to be performed for a wafer at manufacturing equipment124, in accordance with embodiments provided herein. In some embodiments, training set generator172can generate training sets for machine learning model190based on historical sensor, metrology, and/or contextual data associated with one or more prior wafer processes performed at manufacturing equipment124. In additional or alternative embodiments, training set generator172can generate training sets for machine learning model190based on predictive or simulated sensor, metrology, and/or contextual data generated by a digital replica model (e.g., digital twin) of manufacturing equipment124. A digital replica model (also referred to as a digital replica herein) can be an algorithmic model that simulates manufacturing equipment124, in some embodiments. In some embodiments, digital representation server160can be a digital replica of manufacturing equipment124. Digital representation server160can use supervised machine learning, semi-supervised learning, unsupervised machine learning, or any combination thereof to generate a virtual representation of the physical elements and/or the dynamics of how manufacturing equipment124operates. Digital representation server160can be updated via reinforcement learning using periodic updates from sensors126and/or data associated with generating and maintaining the digital replica data of manufacturing equipment124, such as sensor data, performance data (e.g., data associated with an efficiency, latency, throughput, etc.
of one or more components of manufacturing equipment124), library data, etc. In some embodiments, digital representation server160can include a processing chamber model162that is associated with the physical elements and dynamics of a process chamber of manufacturing equipment124. Digital representation server160can generate simulation data that is used to determine how manufacturing equipment124would perform based on current or simulated parameters. The simulation data can be stored at data store140, in some embodiments. In some embodiments, the simulation data can include one or more process recipe settings associated with a wafer process (e.g., a wafer temperature control process) for a wafer at a process chamber. The simulation data can also include predicted property data and/or predicted metrology data (e.g., virtual metrology data) of the digital replica of manufacturing equipment124(e.g., of products to be produced or that have been produced using current sensor data at data store140). The simulation data can also include an indication of abnormalities (e.g., abnormal products, abnormal components, abnormal manufacturing equipment124, abnormal energy usage, etc.) and one or more causes of the abnormalities. The simulation data can further include an indication of an end of life of a component of manufacturing equipment124. The simulation data can be all encompassing, covering every mechanical and/or electrical aspect of manufacturing equipment124. As described above, training set generator172can generate training data for model190based on predictive or simulated data obtained from digital representation server160. For example, training set generator172can generate one or more sets of process recipe settings and provide the sets of process recipe settings to digital representation server160to simulate a process at a process chamber of manufacturing equipment124using process chamber model162.
In some embodiments, the data output by process chamber model162can include a pressure differential between a first space of the process chamber environment and a second space of the process chamber environment. The first space of the process chamber environment can include a space between a top surface of the wafer and a ceiling (e.g., a lid, a window, etc.) of the process chamber. The second space of the process chamber environment can include a space between a bottom surface of the wafer and a top surface of a wafer support assembly that supports the wafer during the simulated wafer process. In additional or alternative embodiments, the data output by process chamber model162can include data associated with a rate of change of a temperature of the wafer between an initial period of the wafer process and a final period of the wafer process (referred to as a ramping rate). In some embodiments, the training set generator172can partition the training data (e.g., data for a physical process and/or simulated data) into a training set, a validating set, and a testing set. In some embodiments, the predictive system110generates multiple sets of training data. Server machine180can include a training engine182, a validation engine184, a selection engine186, and/or a testing engine188. An engine can refer to hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, processing device, etc.), software (such as instructions run on a processing device, a general purpose computer system, or a dedicated machine), firmware, microcode, or a combination thereof. Training engine182can be capable of training a machine learning model190. The machine learning model190can refer to the model artifact that is created by the training engine182using the training data that includes training inputs and corresponding target outputs (correct answers for respective training inputs). 
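The train/validate/test partition mentioned above can be sketched as a simple slicing of the collected samples. The 60/20/20 split ratio is an assumption for illustration; the disclosure does not specify one.

```python
# Sketch of partitioning training data into training, validating, and
# testing sets. Split fractions are assumed, not from the disclosure.

def partition(samples, train_frac=0.6, valid_frac=0.2):
    """Split samples into (training, validating, testing) subsets."""
    n = len(samples)
    n_train = int(n * train_frac)
    n_valid = int(n * valid_frac)
    training = samples[:n_train]
    validating = samples[n_train:n_train + n_valid]
    testing = samples[n_train + n_valid:]
    return training, validating, testing

data = list(range(10))            # ten stand-in (input, output) samples
train, valid, test = partition(data)
```

In practice the samples would be shuffled before slicing so each subset is representative.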
The training engine182can find patterns in the training data that map the training input to the target output (the answer to be predicted), and provide the machine learning model190that captures these patterns. The machine learning model190can use one or more of support vector machine (SVM), Radial Basis Function (RBF), clustering, supervised machine learning, semi-supervised machine learning, unsupervised machine learning, k-nearest neighbor algorithm (k-NN), linear regression, random forest, neural network (e.g., artificial neural network), etc. The validation engine184can be capable of validating a trained machine learning model190using a corresponding set of features of a validation set from training set generator172. The validation engine184can determine an accuracy of each of the trained machine learning models190based on the corresponding sets of features of the validation set. The validation engine184can discard a trained machine learning model190that has an accuracy that does not meet a threshold accuracy. In some embodiments, the selection engine186can be capable of selecting a trained machine learning model190that has an accuracy that meets a threshold accuracy. In some embodiments, the selection engine186can be capable of selecting the trained machine learning model190that has the highest accuracy of the trained machine learning models190. The testing engine188can be capable of testing a trained machine learning model190using a corresponding set of features of a testing set from training set generator172. For example, a first trained machine learning model190that was trained using a first set of features of the training set can be tested using the first set of features of the testing set. The testing engine188can determine a trained machine learning model190that has the highest accuracy of all of the trained machine learning models based on the testing sets. 
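The validate-then-select flow above (discard models below a threshold accuracy, then keep the most accurate survivor) can be sketched directly. The model names, accuracies, and threshold are made up for illustration.

```python
# Sketch of the validation/selection logic described above: models whose
# validation accuracy misses the threshold are discarded, and the most
# accurate remaining model is selected.

def select_model(model_accuracies, threshold=0.9):
    """Return the name of the best model meeting the accuracy threshold."""
    survivors = {name: acc for name, acc in model_accuracies.items()
                 if acc >= threshold}
    if not survivors:
        return None   # no trained model met the threshold
    return max(survivors, key=survivors.get)

candidates = {"model_a": 0.87, "model_b": 0.93, "model_c": 0.95}
best = select_model(candidates)
```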
Predictive server112includes a predictive component114that is capable of providing one or more process recipe settings that correspond to a spatial profile for a current wafer to be processed at manufacturing equipment124. As described in detail below, in some embodiments, predictive component114is capable of providing data associated with a process recipe for a process to be performed for a wafer as an input to model190and obtaining one or more outputs of model190. In some embodiments, the data associated with the process recipe can include an indication of one or more operations to be performed for the process recipe and a target temperature for the wafer at a final period of the wafer process. The process recipe data can include, in some embodiments, one or more target wafer process settings to be applied during the wafer process. Predictive server112can provide a set of process recipe settings that correspond to the one or more operations and/or the target temperature for the wafer based on the one or more outputs of model190. In response to determining that the set of process recipe settings satisfies a level of confidence criterion, predictive server112can cause the wafer process to be performed for the wafer at the process chamber in accordance with the determined process recipe settings. In some embodiments, predictive server112can transmit an indication of the one or more process recipe settings to client device120as a suggested modification to the one or more target wafer process recipe settings. Client device120can display the suggested modifications to the target wafer process recipe settings via a GUI of client device120. A user (e.g., an operator, an engineer, a developer, etc.) of system100can interact with one or more elements of the GUI of client device120to cause the wafer process to be initiated or not to be initiated for the wafer in accordance with the one or more process recipe settings obtained from an output of model190.
The client device120, manufacturing equipment124, data store140, digital representation server160, predictive server112, server machine170, and server machine180can be coupled to each other via a network130. In some embodiments, network130is a public network that provides client device120with access to predictive server112, data store140, and other publicly available computing devices. In some embodiments, network130is a private network that provides client device120access to manufacturing equipment124, data store140, digital representation server160, predictive server112, and other privately available computing devices. Network130can include one or more wide area networks (WANs), local area networks (LANs), wired networks (e.g., Ethernet network), wireless networks (e.g., an 802.11 network or a Wi-Fi network), cellular networks (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, cloud computing networks, and/or a combination thereof. It should be noted that in some other implementations, the functions of digital representation server160, server machines170and180, as well as predictive server112, can be provided by a fewer number of machines. For example, in some embodiments, digital representation server160, server machine170and/or server machine180can be integrated into a single machine, while in some other or similar embodiments, digital representation server160, server machine170and/or server machine180, as well as predictive server112, can be integrated into a single machine. In general, functions described in one implementation as being performed by digital representation server160, server machine170, server machine180, and/or predictive server112can also be performed on client device120. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together. In embodiments, a “user” can be represented as a single individual.
However, other embodiments of the disclosure encompass a “user” being an entity controlled by a plurality of users and/or an automated source. For example, a set of individual users federated as a group of administrators can be considered a “user.” FIG.2is a diagram of a system200for implementing process recipe creation and matching using feature models (e.g., machine learning feature models), according to aspects of the present disclosure. As shown, the system200includes an unprocessed substrate or wafer210that is received by a tool/chamber220to produce a processed wafer230. More specifically, the tool/chamber220can utilize a set of process recipes (“recipes”) to produce the processed wafer230from the unprocessed wafer210. Although a wafer is shown, any suitable component can be processed in accordance with the embodiments described herein. The system200includes a recipe creation component240. The recipe creation component240models expected process behavior for a set of recipe parameters in view of a set of desired on-wafer characteristics, also referred to as a set of goals or target properties, and generates a recipe242having recipe settings based on the set of goals. The recipe creation component240can model the expected process behavior by creating feature models (e.g., machine learning model190ofFIG.1). The recipe settings can include a set of recipe parameters and a set of processes. For example, the recipe settings can include one or more relevant recipe parameters for achieving the set of goals. The recipe242can be implemented by the tool/chamber220to execute the processing of the wafer210in view of the recipe242. Accordingly, goals can be translated into the recipe242for processing the unprocessed wafer210using the tool/chamber220to obtain a processed wafer230. Further details regarding the recipe creation component240will now be described below with reference toFIG.3.
FIG.3is a diagram of a system300for implementing process recipe creation, according to aspects of the present disclosure. As shown, a set of target properties310and a set of feature models320are received by a numerical optimizer component330. The set of target properties310can be received as input from a user. The set of target properties310can include multiple features and respective targets for each feature, where each target corresponds to an expected or desired value or range of values for its corresponding feature. Examples of targets include “mean,” “less than,” “greater than,” “low as possible,” “profile goal,” etc. A profile goal allows a user to specify a profile across the wafer. In this illustrative example, the set of target properties310specifies a thickness feature having a corresponding target of “mean” and expected or desired value of 1000 Angstroms (Å) (i.e., the goal for thickness is a mean thickness of 1000 Å), a resistivity feature having a corresponding target of “greater than” and a value of 2.03 Ohm-meters (i.e., the goal for resistivity is a resistivity greater than 2.03 Ohm-meters), and a stress feature having a corresponding target of “low as possible” (i.e., as close to zero as possible). The set of feature models320is shown including a number of feature models. In some implementations, the set of feature models320includes a set of regression models. For example, the feature models can include MIMO models (e.g., spatial MIMO models). Each individual feature model targets a particular feature. For example, the feature models shown in the system300include a thickness feature model322, a resistivity feature model324and a stress feature model326. In some examples, a smaller subset of the inputs, or “set of relevant inputs,” can be identified as the primary factors impacting the on-wafer characteristics, with other operations/parameters being pre or post processing operations preparing for specific actions. 
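The three example goals above (a mean thickness of 1000 Å, a resistivity greater than 2.03, and stress as low as possible) can be expressed as simple target records with a pass/fail check. The record structure and the tolerance are illustrative assumptions, not the disclosure's data model.

```python
# Sketch of the set of target properties described above. The "mean"
# tolerance of 1.0 is an assumed value for illustration.

targets = [
    {"feature": "thickness",   "target": "mean",            "value": 1000.0},
    {"feature": "resistivity", "target": "greater_than",    "value": 2.03},
    {"feature": "stress",      "target": "low_as_possible", "value": None},
]

def satisfies(target, predicted):
    """Check a single predicted feature value against one target property."""
    if target["target"] == "mean":
        return abs(predicted - target["value"]) < 1.0  # assumed tolerance
    if target["target"] == "greater_than":
        return predicted > target["value"]
    return True  # "low as possible" is optimized toward zero, not pass/fail

ok = satisfies(targets[1], 2.10)   # resistivity 2.10 exceeds 2.03
```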
Thus, the feature models can be created for each feature based on the set of relevant inputs. Inputs of a feature model320may include manufacturing parameters (e.g., process parameters, hardware parameters). Output of a feature model may be metrology data or performance data. For example, inputs of a feature model may be temperature, pressure, and flow rate, and the output of the feature model may be thickness. Each feature model320may output a corresponding feature (e.g., type of metrology data, type of performance data, etc.). As will be described in further detail herein, the use of individualized feature models for respective features can enable greater control over achieving the desired characteristics. In some implementations, a design of experiment (DoE) technique is used to generate the set of feature models320. DoE techniques can be used to detect wafer sensitivity in view of changing recipe parameters. DoE is the design of any information-gathering exercise where variation is present. DoE analysis is the analysis of data generated from execution of a DoE (i.e., DoE data). In some implementations, DoE data includes recipe parameters, recipe parameter values, and measurements (e.g., wafer measurements). For example, for a DoE analysis in which five recipe parameters may be varied, a DoE can be performed by running multiple experiments where each of the five recipe parameters is varied according to predetermined values for each experiment. Wafers from each experiment may then be measured at various locations and associated with their corresponding recipe parameters. Sensitivity values may be calculated by comparing the variation in recipe parameters to the variation in measurements from each measured location, from each of the experiments. Sensitivity values are then commonly averaged to determine a wafer's average sensitivity to a particular recipe parameter. Sensitivity may be calculated corresponding to averaged radial sensitivity values across a wafer.
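The sensitivity calculation described above can be sketched as comparing the variation in one recipe parameter across DoE runs to the variation in measurements at each site, then averaging across sites. The DoE numbers below are made up for illustration.

```python
# Sketch of DoE sensitivity: per-site measurement range divided by the
# recipe-parameter range, averaged over measured locations on the wafer.

def sensitivity(param_values, site_measurements):
    """Average per-site response per unit change of one recipe parameter."""
    p_range = max(param_values) - min(param_values)
    per_site = []
    for site in site_measurements:          # one list of readings per site
        m_range = max(site) - min(site)
        per_site.append(m_range / p_range)
    return sum(per_site) / len(per_site)    # average across the wafer

# Two DoE runs varying pressure from 10 to 20, thickness measured at 3 sites.
pressure = [10.0, 20.0]
thickness_by_site = [[1000.0, 1020.0], [995.0, 1010.0], [990.0, 1000.0]]
s = sensitivity(pressure, thickness_by_site)
```

Here the wafer's average thickness sensitivity is 1.5 units of thickness per unit of pressure.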
As will be described in further detail below, each feature model of the set of feature models320can be used to generate a recipe based on the set of target properties310by capturing spatial relationships among its corresponding features. To do this, the set of feature models320can be provided to the numerical optimizer component330. The numerical optimizer component330executes numerical search and optimization routines to generate an output in view of the set of feature models320and the set of target properties310. In some implementations, the output of the numerical optimizer component330can include, or can be used to generate, at least one pre-populated recipe340. The pre-populated recipe340has recipe settings, including a set of inputs as rows and a set of processes as columns. Each entry in the recipe340(e.g., x1 through x6) denotes an entry for an input needed at each process. In some implementations, the output includes at least one set of recipe parameters. Each set of recipe parameters can be paired with the desired value for each feature (as indicated by the set of target properties310) and a predicted value for each feature. In some implementations, the predicted value is a mean value. Each set of recipe parameters can be ordered or ranked in view of how well each set of recipe parameters achieves the set of target properties310. In this illustrative example, as mentioned above, one target is that the desired thickness mean is 1000 angstroms (Å), such that it would be optimal to find a combination of recipe parameters resulting in a predicted thickness mean as close to 1000 Å as possible in view of each constraint specified by the feature models. Another target is that the desired resistivity is greater than 2.03, such that it would be optimal to find a combination of recipe parameters resulting in a predicted resistivity greater than 2.03 in view of each constraint specified by the feature models.
The number of sets of recipe parameters output by the numerical optimizer component330can be a default number and/or can be customized by a user. For example, the output can include the top 10 sets of recipe parameters, the top 25 sets of recipe parameters, a single set of recipe parameters, etc. Ideally, a set of recipe parameters will simultaneously satisfy each target property of the set of target properties310. However, it may be the case that the numerical optimizer component330cannot generate recipe solutions that simultaneously satisfy each target property of the set of target properties310. For example, the numerical optimizer component330may find sets of recipe parameters that are predicted to satisfy targets for at least one feature (at least one satisfied feature), but at the expense of the target of at least one other feature (at least one non-satisfied feature). In such cases, the output generated by the numerical optimizer component330can be a Pareto frontier or front including at least one set of Pareto efficient recipe parameters. Generally, a Pareto front is a set of Pareto efficient solutions in which no objective can be improved without sacrificing at least one other objective. That is, the Pareto front includes non-dominated solutions. Additionally or alternatively, the output can include solutions that achieve the target for, say, a non-satisfied feature, and an estimated tradeoff with respect to the satisfied feature(s) can be observed. This can be particularly useful in cases where it may be important to ensure that a feature determined to be non-satisfied by the numerical optimizer component330can be satisfied by the recipe. The numerical optimizer component330operates by inverting the feature models and performing the search in a high dimensional input and output space. For example, assume that each feature model is a spatial MIMO model of the form ƒ(X)=Y, where the input X and the output Y are vectors.
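The non-dominated filtering that produces the Pareto front described above can be sketched directly. Here each candidate recipe is scored by two objectives treated as errors to be minimized; the values are made up for illustration.

```python
# Sketch of extracting the Pareto front: keep only candidates that no other
# candidate dominates (minimization on every objective).

def dominates(a, b):
    """True if a is at least as good as b on every objective and strictly
    better on at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]

# (thickness error, resistivity error) for four candidate recipes.
errors = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0)]
front = pareto_front(errors)
```

Candidate (3.0, 4.0) is dominated by (2.0, 3.0), which is better on both objectives, so it is excluded from the front.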
To invert a feature model, X can be solved for as X = f⁻¹(Y). In many cases, the function f(X) may be complex and not readily invertible. To address this, the numerical optimizer component 330 can implement search routines to pseudo-invert f(X). For example, the numerical optimizer component 330 can search for the X that minimizes the error between f(X) and the desired Y. In addition, the gradient of f(X) may not be known or may be difficult to estimate. This means gradient-based search routines may not be optimal, and, in some implementations, the numerical optimizer component 330 can implement gradient-free searches. Moreover, finding multiple solutions or local minima may be advantageous, as some solutions may be more preferable than others. Examples of search routines that can be used by the numerical optimizer component 330 include swarm-based search routines and/or genetic search routines. Accordingly, the numerical optimizer can search through a continuous space satisfying multiple criteria, as compared to searching through a discrete space and attempting to manually balance multiple desired characteristics. Further details regarding the operations of the numerical optimizer component 330 will be described below with reference to FIGS. 4 and 5. FIG. 4 is a diagram of a system 400 for performing numerical optimization on a single feature model, according to aspects of the present disclosure. As shown, the system 400 includes a set of feature models 410 including one or more feature models each corresponding to a feature, and a set of target properties 420 including one or more targets (e.g., desired characteristics) corresponding to respective ones of the features. Each target is or will be associated with a cost function.
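The pseudo-inversion described above — searching for an input vector that minimizes the error between f(X) and the desired Y without using gradients — can be sketched with a minimal particle swarm. This is an illustrative assumption, not the disclosed optimizer; the swarm size, inertia, and acceleration coefficients are arbitrary choices:

```python
import random

def pseudo_invert(f, y_target, bounds, n_particles=30, iters=200, seed=0):
    """Gradient-free search for an input vector X such that f(X) ~= y_target.

    Each particle carries a position, a velocity, and a personal best;
    the swarm tracks a global best.  The cost is the squared error
    between f(X) and the desired output vector.
    """
    rng = random.Random(seed)
    dim = len(bounds)

    def cost(x):
        y = f(x)
        return sum((yi - ti) ** 2 for yi, ti in zip(y, y_target))

    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

For instance, pseudo-inverting a toy model f(x) = (x0 + x1, x0 − x1) for the target output (3, 1) recovers an input near (2, 1).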
The set of feature models 410 and the set of target properties 420 are received by a numerical optimizer 430 to generate an output 440, such as that described above with reference to FIG. 3. As shown, the numerical optimizer 430 includes a cost function component 432. The cost function component 432 creates a cost function for each feature model and target property, and selects an optimization algorithm to minimize cost with respect to the feature model and the target property. Examples of cost functions of the cost function component 432 include mean, min sigma, min range, etc. For example, cost C can be defined by the equation C = D × (Δ + ∂ × err), where D is the desirability of the feature (only applicable if there are multiple features), Δ is a function of the feature model that captures the difference between what is desired and what the feature model predicts for the inputs, err is the estimated error or confidence in the prediction, and ∂ is an error penalty. The error penalty helps force the optimization to favor solutions with high confidence over low confidence in the event that more than one solution exists. Customized cost functions can be created to improve the optimization process.
For example, a mean cost function to calculate a mean cost C_mean can be defined by C_mean = D × (rMSE(f(X) − Y_target) + ∂ × err), where rMSE is root mean square error and Y_target is a target output for f(X). A minimum sigma cost function to calculate a min sigma cost C_sigma can be defined as C_sigma = D × (σ(f(X)) + ∂ × err), where sigma (σ) refers to standard deviation. A less-than cost function to calculate a less-than cost C_< can be defined by C_< = D × (m + n + ∂ × err), where m = 0 and n = σ(f(X)) if mean(f(X)) < Y_target, and otherwise m = mean(f(X)) and n = 0. Alternatively, mean(f(X)) can be replaced with, e.g., max(f(X)). By doing so, all output targets are forced to be less than the maximum output of f(X). The m term controls the mean across the wafer and the n term controls the sigma (i.e., variability) across the wafer. If the solution for all desired targets is potentially present in the solution space, the cost function outputs can be merged by summing the outputs of each individual cost function at a summation component 434. This new cost function can then be minimized using an optimization component 436 to implement one or more optimization routines. Examples of methods that can be used to minimize the new cost function include particle swarm, Nelder-Mead, genetic search, etc. However, if it is known that each target property of the set of target properties 420 cannot be simultaneously achieved, the cost functions can remain as individual cost functions and a family of genetic search algorithms can be used by an optimization component 438.
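The cost functions above can be transcribed directly from their definitions. The helper names and list conventions below are assumptions for illustration; `pred` stands for the per-location outputs of f(X) across the wafer:

```python
import math

def mean_cost(pred, target, desirability=1.0, err=0.0, err_penalty=1.0):
    """C_mean = D * (rMSE(f(X) - Y_target) + error_penalty * err)."""
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred))
    return desirability * (rmse + err_penalty * err)

def less_than_cost(pred, target, desirability=1.0, err=0.0, err_penalty=1.0):
    """C_< = D * (m + n + error_penalty * err).

    m = 0 and n = sigma when the mean is already below the target
    (only variability is penalized); otherwise m = mean and n = 0
    (the mean itself is penalized until it drops below the target).
    """
    mean = sum(pred) / len(pred)
    sigma = math.sqrt(sum((p - mean) ** 2 for p in pred) / len(pred))
    if mean < target:
        m, n = 0.0, sigma
    else:
        m, n = mean, 0.0
    return desirability * (m + n + err_penalty * err)

def merged_cost(costs):
    """Merge individual cost-function outputs by summation."""
    return sum(costs)
```

The merged form corresponds to the summation component 434; keeping the costs separate corresponds to the multiobjective path through the optimization component 438.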
For example, the family of genetic search algorithms can be Multiobjective Evolutionary Algorithms (MoEA) that generate an output 442. The output can include a Pareto front for the set of target properties 420. Illustratively, the set of feature models 410 can include regression models. Linear regression can be used to find values from the experiment data, β values, that minimize the error between a predicted output and an actual output. Each regression model can be created at a measurement location (e.g., for 49-point metrology, there will be 49 regression models). The β values can be viewed as sensitivity parameters that specify the sensitivity at each measurement location. To find recipe conditions for on-wafer targets, optimization routines can be used to find the recipe inputs (e.g., temperature, power) that minimize the error between the output of the expected performance and the output of the observed performance. Optimization can then be performed based on the regression models to find recipe conditions for on-wafer targets by (1) employing the cost function(s) to define the difference between the predicted value for any input and the desired value and (2) using the optimization routine(s) to find the input conditions that minimize the cost function(s), hence finding the recipe settings that achieve the desired on-wafer targets. Referring back to FIG. 2, the system 200 can further include a recipe matching component 250. The matching component 250 receives a recipe model from the recipe creation component 240 and process feedback from the tool/chamber 220, and generates a set of recipe offsets (“offset”) 252 by performing matching. For example, the process feedback can include a current or predicted performance behavior of the processing performed by the tool/chamber 220. In this illustrative example, the current or predicted performance behavior is current or predicted wafer performance behavior.
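A per-location regression of the kind described — one model per metrology point, with the β values acting as sensitivities — can be sketched for a single scalar recipe input using ordinary least squares. The function name and the single-input simplification are assumptions, not the disclosed implementation:

```python
def fit_betas(recipe_inputs, measurements):
    """Fit y = b0 + b1 * x by ordinary least squares at each measurement
    location.

    `recipe_inputs` is a list of scalar recipe settings (e.g., temperature)
    across experimental runs, and `measurements[k][i]` is the observed value
    at location k for run i.  The b1 values act as per-location
    sensitivity parameters.
    """
    n = len(recipe_inputs)
    x_mean = sum(recipe_inputs) / n
    sxx = sum((x - x_mean) ** 2 for x in recipe_inputs)
    betas = []
    for location in measurements:
        y_mean = sum(location) / n
        sxy = sum((x - x_mean) * (y - y_mean)
                  for x, y in zip(recipe_inputs, location))
        b1 = sxy / sxx                 # slope: sensitivity at this location
        b0 = y_mean - b1 * x_mean      # intercept
        betas.append((b0, b1))
    return betas
```

For 49-point metrology, `measurements` would hold 49 rows and the result would be 49 (b0, b1) pairs, one regression model per point.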
Matching can be performed to generate the offset 252 by searching for offsets from the inputs defined in the recipe 242 that will make the current or predicted performance behavior match a desired or expected performance behavior corresponding to the recipe 242. That is, the offset 252 corrects for any change or drift in performance behavior (e.g., on-wafer performance) resulting from mismatch between the current or predicted performance behavior and the expected or desired performance behavior. In alternative implementations, the offset 252 can be a new set of recipe settings for generating a recipe (e.g., using the recipe creation component 240). In some implementations, the current or predicted performance can be measured by any suitable metrology technique. Examples of metrology techniques include on-board metrology, inline metrology, and virtual metrology. On-board metrology can refer to measurements performed on the devices themselves within a die or on test structures having features similar to the devices. Depending on the measurement techniques used, the test structures may include, but are not limited to, structures similar to logic or memory devices that are on the wafers. On-board metrology can be based on optical measurements (e.g., collecting optical emission spectra in-situ from devices or test structures, or macro 2D mapping using optical targets) or other types of measurements. These optical or other measurements can be inside the chamber (in-situ), or outside the chamber (ex-situ) but still under vacuum, or at the factory interface (not necessarily under vacuum) on a process platform that may have multiple chambers. Inline metrology can refer to measurements that may be performed outside of a processing chamber, but without having to take the wafer out of the production line. An example of inline metrology is the scanning electron microscope (SEM), the advanced versions of which may offer high precision and broad modality.
Advanced SEMs may include back-scattered electron (BSE) sensors in addition to secondary emission detectors, and the ability to measure electron emission at various tilt angles and various landing energies, ranging from hundreds of electron-volts to tens of kilo-electron-volts. SEMs have the capability of creating a broad database of metrology data in a non-destructive manner. SEM-based inline metrology customized with electron beam (“e-beam”) simulation, data collection, image characterization and feature extraction as well as statistical analysis may be referred to as “customized metrology.” An advanced SEM tool may be used as the foundation of high-precision, non-destructive three-dimensional feature-level profiling, which is at the heart of customized metrology. Virtual metrology (VM) can refer to predicted measurements (e.g., dimensions) of a wafer determined based on sensor data taken by various sensors in the chamber or outside the chamber, without directly measuring the wafer. VM can include time traces of various process variables, such as pressure, temperature, RF power, current, voltage, flow control position, etc. In some implementations, the current or predicted performance can be estimated from a MIMO sensor-based model. Further details regarding the MIMO sensor-based model will now be described below with reference to FIG. 5. FIG. 5 is a diagram of a system 500 including a sensor model for implementing process recipe creation and matching using feature models (e.g., machine learning feature models), according to aspects of the present disclosure. As shown, the system 500 includes data storage 510. The data storage 510 stores real-time sensor data (e.g., sensor feedback data). The system 500 further includes a sensor model component 520 implementing a sensor model.
The sensor model is a MIMO model (e.g., regression model) that uses the sensor data from the data storage 510 to generate predicted performance behavior (e.g., on-wafer performance behavior) for use by the matching component 530, as described above with reference to FIG. 2. The sensor model is mathematically similar to the feature model, except that different inputs are used. For example, a set of recipe settings can be used as input for a feature model, while a set of sensor feedback data obtained from a tool/chamber can be used as input for the sensor model. Examples of data that can be included in the set of sensor feedback data include pressure readings, valve positions, heater power, etc. That is, the sensor model can be viewed as an implementation of virtual metrology. The sensor model can be used to indicate current behavior, and the recipe model can be used to indicate expected behavior. The matching component 530 computes the offsets between the current behavior (e.g., indicated by the sensor model) and the expected behavior (e.g., indicated by the recipe model). FIG. 6 is a flow chart of a method 600 for implementing process recipe creation using feature models (e.g., machine learning feature models), according to aspects of the present disclosure. Method 600 is performed by processing logic that can include hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), firmware, or some combination thereof. In one implementation, method 600 can be performed by a computer system, such as the computer system architecture 100 of FIG. 1. In other or similar implementations, one or more operations of method 600 can be performed by one or more other machines not depicted in the figures. For simplicity of explanation, the methods are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein.
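A minimal sketch of the offset computation performed by the matching component — differencing the expected behavior (recipe model) against the current behavior (sensor model) and mapping the result back to input corrections through assumed first-order gains — might look like the following. The gains stand in for inverse sensitivities (e.g., reciprocals of fitted β slopes) and are an illustrative assumption:

```python
def compute_offsets(expected, current, gains):
    """Offsets that nudge recipe inputs toward the expected behavior.

    `expected` and `current` are per-feature behavior values from the
    recipe model and sensor model respectively; `gains` maps each
    feature's drift back to an input correction.
    """
    return [g * (e - c) for e, c, g in zip(expected, current, gains)]
```

For instance, if the expected thickness is 1000 Å but the sensor model currently predicts 990 Å, a gain of 0.5 yields an input offset of +5.0 for the corresponding recipe setting.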
Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. At block 610, the processing logic receives a set of feature models. For example, the set of feature models can include a machine learning feature model (e.g., the machine learning model 190 of FIG. 1). The set of feature models includes a number of feature models each corresponding to a feature associated with processing of a component. Examples of features include thickness, resistivity, stress, etc. For example, the component can be a wafer or substrate. In some implementations, each feature model is a regression model. For example, each feature model can be a MIMO model (e.g., a spatial MIMO model). In some embodiments, a feature model may have inputs of manufacturing parameters (e.g., process parameters, hardware parameters), such as temperature, pressure, flow rate, etc. A feature model may have an output (e.g., metrology data, performance data), such as thickness, resistivity, stress, etc. Each feature model may have a different output that corresponds to a particular feature (e.g., type of metrology data, type of performance data).
In some embodiments, target metrology data or target performance data can be provided to the feature model (e.g., an inverted feature model) and the predicted manufacturing parameters (e.g., to be used to obtain the target metrology data or the target performance data) are received from the feature model. In some embodiments, manufacturing parameters are provided to the feature model and predicted metrology data or predicted performance data are received from the feature model. At block 620, the processing logic receives a set of target properties. For example, the set of target properties can include target metrology data or target performance data. The set of target properties can include a number of features and a number of targets corresponding to respective ones of the features. For example, a thickness feature can have a target of “equal to” and a value of “1000 Å,” such that the target for the thickness feature is a thickness equal to a mean of 1000 Å. As another example, a resistivity feature can have a target of “greater than” and a value of “2.03,” such that the target for the resistivity feature is a resistivity greater than 2.03. At block 630, the processing logic determines, based on the set of feature models, one or more sets of predicted processing parameters in view of the set of target properties and, at block 640, the processing logic generates one or more candidate process recipes for processing a component, each corresponding to a respective one of the one or more sets of predicted processing parameters. In some implementations, the component is a wafer. Each set of predicted processing parameters includes a number of parameters related to operations performed during component processing (e.g., temperature, pressure). Blocks 630 and 640 can be performed as individual operations or as simultaneous operations.
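The target properties in the example above (“equal to” 1000 Å for thickness, “greater than” 2.03 for resistivity) can be represented as simple records with a satisfaction check. The field names and the equality tolerance are illustrative assumptions:

```python
TARGETS = [
    {"feature": "thickness", "op": "equal_to", "value": 1000.0},     # angstroms
    {"feature": "resistivity", "op": "greater_than", "value": 2.03},
]

def satisfies(predicted, target, tol=1e-6):
    """Check a predicted feature value against one target property."""
    if target["op"] == "equal_to":
        return abs(predicted - target["value"]) <= tol
    if target["op"] == "greater_than":
        return predicted > target["value"]
    raise ValueError("unknown target operator: " + target["op"])
```

A candidate set of predicted processing parameters would be scored by running each feature model and checking every predicted feature value against its record.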
Generating the one or more candidate process recipes can include using numerical optimization to minimize a difference between a target property and a corresponding predicted property. For example, generating the one or more candidate process recipes can include obtaining at least one cost function based on the set of feature models and the set of target properties, and obtaining the set of candidate process recipes by minimizing the at least one cost function. For example, the set of candidate process recipes can include multiple candidate process recipes that are ranked based on how well they satisfy the set of target properties, and the process recipe can be selected by the user as the process recipe having the highest ranking. In some instances, no candidate process recipe exists that meets each of the conditions of the set of target properties. In such cases, a Pareto front of candidate process recipes can be generated and displayed in a graphical user interface (GUI), and the process recipe can be selected by the user via the GUI based on characteristic preference. Further details regarding numerical optimization are described above with reference to FIG. 4. At block 650, the processing logic selects, from the one or more candidate process recipes, a process recipe. Selecting the process recipe can include receiving a selection of the process recipe from a user via a GUI that lists the set of candidate process recipes. At block 660, the processing logic causes a process tool to process the component using the process recipe. Further details regarding blocks 610-640 are described above with reference to FIGS. 2-4. FIG. 7 is a flow chart of a method 700 for implementing process recipe matching to generate an offset using feature models (e.g., machine learning feature models).
Method 700 is performed by processing logic that can include hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), firmware, or some combination thereof. In one implementation, method 700 can be performed by a computer system, such as the computer system architecture 100 of FIG. 1. In other or similar implementations, one or more operations of method 700 can be performed by one or more other machines not depicted in the figures. For simplicity of explanation, the methods are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. At block 710, the processing logic receives an expected performance for processing a component, and an observed performance for processing the component using a process recipe. The expected performance corresponds to the feature model output(s) (e.g., machine learning feature model output(s)) for a current set of recipe settings. In some implementations, the component is a wafer or substrate. The observed performance can be a current performance of actual processing using the process recipe (e.g., process feedback).
At block 720, the processing logic determines whether a difference between the expected performance and the observed performance satisfies a threshold condition. In some embodiments, it can be determined whether the difference between the expected performance and the observed performance is greater than a threshold difference. For example, the threshold difference can be selected as a difference that is “close enough” to the expected performance so as to not require any modification of the process recipe corresponding to the observed performance. Any suitable technique can be used to determine the difference between the expected performance and the observed performance. In some embodiments, a regression technique can be used. If the threshold condition is satisfied (e.g., the difference is not greater than the threshold difference), then the set of inputs does not need to be modified and the process ends. Otherwise, at block 730, the processing logic generates a new process recipe for processing the component based on the difference. Generating the new process recipe can include generating an output associated with the new process recipe based on the difference, and generating the recipe based on the output. In some implementations, the output includes the new process recipe. In some implementations, the output includes a set of offsets to correct for the difference in performance. For example, the set of offsets can modify the set of inputs used to generate the process recipe previously used to process the component in an attempt to match the expected performance, thereby generating a modified set of inputs for generating the new process recipe. At block 740, the processing logic obtains a new observed performance using the new process recipe. The process can revert back to block 720 to determine whether a difference between the expected performance and the new observed performance satisfies the threshold condition.
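Blocks 720-740 form a feedback loop: observe, compare against the expected performance, and offset the recipe until the difference is within the threshold. A minimal sketch for a single scalar recipe setting follows; the proportional gain is an assumed correction rule for illustration, not the disclosed offset generation:

```python
def match_recipe(process, recipe, expected, threshold, gain=0.5, max_iters=50):
    """Iteratively offset a scalar recipe setting until the observed
    performance is within `threshold` of the expected performance:
    observe (block 740), compare (block 720), offset (block 730), repeat.
    """
    for _ in range(max_iters):
        observed = process(recipe)
        diff = expected - observed
        if abs(diff) <= threshold:        # close enough: no modification needed
            return recipe, observed
        recipe = recipe + gain * diff     # offset the input and retry
    return recipe, process(recipe)
```

For a process that has drifted to deliver only 90% of the nominal output, the loop raises the recipe setting until the observed performance matches the expected 1000 Å within the threshold.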
FIG. 8 depicts a block diagram of an illustrative computing device 800 operating in accordance with one or more aspects of the present disclosure. In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine can be a personal computer (PC), a tablet computer, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines (e.g., computers) that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In embodiments, computing device 800 can correspond to predictive server 112 of FIG. 1 or another processing device of system 100. The example computing device 800 includes a processing device 802, a main memory 804 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), etc.), a static memory 806 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory (e.g., a data storage device 828), which communicate with each other via a bus 808. Processing device 802 can represent one or more general-purpose processors such as a microprocessor, central processing unit, or the like.
More particularly, the processing device 802 can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 802 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processing device 802 can also be or include a system on a chip (SoC), programmable logic controller (PLC), or other type of processing device. Processing device 802 is configured to execute the processing logic for performing operations discussed herein. The computing device 800 can further include a network interface device 822 for communicating with a network 864. The computing device 800 also can include a video display unit 810 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 812 (e.g., a keyboard), a cursor control device 814 (e.g., a mouse), and a signal generation device 820 (e.g., a speaker). The data storage device 828 can include a machine-readable storage medium (or more specifically a non-transitory computer-readable storage medium) 824 on which is stored one or more sets of instructions 826 embodying any one or more of the methodologies or functions described herein. A non-transitory storage medium refers to a storage medium other than a carrier wave. The instructions 826 can also reside, completely or at least partially, within the main memory 804 and/or within the processing device 802 during execution thereof by the computing device 800, with the main memory 804 and the processing device 802 also constituting computer-readable storage media. The computer-readable storage medium 824 can also be used to store model 190 and data used to train model 190.
The computer-readable storage medium 824 can also store a software library containing methods that call model 190. While the computer-readable storage medium 824 is shown in an example embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. The preceding description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth in order to provide a good understanding of several embodiments of the present disclosure. It will be apparent to one skilled in the art, however, that at least some embodiments of the present disclosure can be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the present disclosure. Thus, the specific details set forth are merely exemplary. Particular implementations can vary from these exemplary details and still be contemplated to be within the scope of the present disclosure. Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. In addition, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” When the term “about” or “approximately” is used herein, this is intended to mean that the nominal value presented is precise within ±10%. Although the operations of the methods herein are shown and described in a particular order, the order of operations of each method can be altered so that certain operations can be performed in an inverse order, or so that certain operations can be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations can be performed in an intermittent and/or alternating manner. It is understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
11860592

The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way. DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. The present disclosure provides systems and methods for training a reinforcement learning system configured to perform autonomous control of the movement of pallets in a manufacturing environment using a digital twin. A reinforcement learning training module simulates a manufacturing routine of a plurality of pallets and workstations, obtains state information from one or more sensors of the digital twin, and computes additional state information by calculating pseudo-target states and transient states with respect to one or more routing control locations. The reinforcement learning training module determines an action at various routing control locations of the environment and consequence state information based on the actions performed at the routing control locations. The reinforcement learning training module calculates transient and steady-state rewards based on the consequence states and one or more objective functions, and selectively adjusts one or more reinforcement parameters of the reinforcement learning system based on the transient and steady-state rewards. By selectively adjusting the reinforcement parameters, the reinforcement learning system can autonomously control the movement of the pallets between workstations, inhibit bottlenecks of the manufacturing routine, and ensure various production conditions are satisfied. Referring to FIG. 1, a system 5 for controlling and defining a sensor layout is provided and generally includes a reinforcement learning (RL) training module 10 and a manufacturing environment 50.
While the RL training module 10 is positioned remotely from the manufacturing environment 50, it should be understood that the RL training module 10 may be included as part of the manufacturing environment 50 in other forms. In one form, the RL training module 10 and the manufacturing environment 50 are communicably coupled using a wired and/or wireless communication protocol (e.g., a Bluetooth®-type protocol, a cellular protocol, a wireless fidelity (Wi-Fi)-type protocol, a near-field communication (NFC) protocol, an ultra-wideband (UWB) protocol, among others). In one form, the manufacturing environment 50 includes sensors 52, a neural-network-based (NN-based) control system 54, and pallets 56. In one form, the sensors 52 are configured to obtain identification and/or location information associated with the pallets 56 and track the pallets 56 as they traverse the manufacturing environment 50. As an example, the sensors 52 may be radio frequency identification (RFID) scanners configured to scan RFID tags disposed on the pallets 56. As another example, the sensors 52 may be imaging devices (e.g., a two-dimensional (2D) camera or a three-dimensional (3D) camera) configured to obtain images of fiducial markers disposed on the pallets 56 (e.g., quick response (QR) tags that include a unique 2D barcode uniquely identifying the pallets 56). As described below in further detail, the RL training module 10 is configured to define the number and positioning of the sensors 52. In one form, the pallets 56 are configured to transport raw, semi-finished, and finished workpieces between various workstations/storage areas of the manufacturing environment 50. The pallets 56 may be pallets that are moveable along headrails and conveyors, two-way entry pallets, four-way entry pallets, open deck pallets, solid deck pallets, and/or double face pallets. In one form, the pallets 56 may include fixtures 58 that are configured to support and secure a workpiece thereto.
In one form, the pallets56are partially or fully autonomous and are configured to move to various locations within the manufacturing environment50, as instructed by the NN-based control system54. To move autonomously, the pallets56include a control system60to control various movement systems (e.g., propulsion systems, steering systems, and/or brake systems) based on one or more navigation sensors (e.g., a global navigation satellite system (GNSS) sensor, an imaging sensor, a local position sensor, among others) of the pallets56(not shown). To perform the functionality described herein, the control system60may include one or more processor circuits that are configured to execute machine-readable instructions stored in one or more nontransitory computer-readable mediums, such as a random-access memory (RAM) circuit and/or read-only memory (ROM) circuit. The control system60may also include other components for performing the operations described herein, such as, but not limited to, movement drivers and systems, transceivers, routers, and/or input/output interface hardware. It should be understood that the pallets56may not be autonomous and thus be moveable by a forklift, vehicle, or conveyor/headrail of the manufacturing environment50in other forms. In one form, the NN-based control system54is configured to instruct the pallets56to autonomously travel to various locations of the manufacturing environment50based on the sensor data obtained from the sensors52(i.e., the identification and/or location of the pallets56). In one form, the NN-based control system54remotely and autonomously controls the pallets56as they travel to a respective location of the manufacturing environment50(e.g., controls the merging and splitting of the pallets56at various locations within the manufacturing environment50). To perform the functionality described herein, the NN-based control system54may perform a deep reinforcement learning (DRL) routine. 
Accordingly, the DRL routine is trained based on actions performed by the agents and a reward associated with the action (e.g., whether the various actions of the pallets56satisfy one or more time constraints associated with the manufacturing routine). Furthermore, to train the DRL learning routine performed by the NN-based control system54(e.g., maximize the rewards of the DRL learning routine), the RL training module10may iteratively modify various reinforcement parameters of the RL training module10, as described below in further detail. In one form, the RL training module10includes an environment layout module12, a sensor layout module14, a pallet parameter module16, a workstation parameter module18, a manufacturing routine module20and a control model module42. It should be readily understood that any one of the components of the RL training module10can be provided at the same location or distributed at different locations (e.g., via one or more edge computing devices) and communicably coupled accordingly. To perform the functionality described herein, the RL training module10may include a human-machine-interface (HMI) (e.g., a touchscreen display, monitor, among others) and/or input devices (e.g., mouse, keyboard, among others) that enable an operator to generate inputs corresponding to the described functionality. In one form, the environment layout module12is configured to generate a digital twin of the manufacturing environment50. As used herein, a “digital twin” may refer to a virtual representation of various components and systems of the manufacturing environment50, such as one or more workstations (not shown), the pallets56, the sensors52, among other components and systems. The virtual representation may be, for example, a dynamic simulation model of the manufacturing environment50(e.g., a discrete event simulation model of the manufacturing environment50). 
As described below in further detail, the virtual representation may also include historical operation data associated with the components and systems of the manufacturing environment50when a manufacturing routine is being simulated by the manufacturing routine module20. In one form, the environment layout module12is configured to define one or more routing control locations of the digital twin. As used herein, “routing control locations” refer to locations within the digital twin in which virtual pallets perform a pallet merging operation and/or a pallet splitting operation. As used herein, “pallet merging operation” refers to an operation in which one or more first virtual pallets traveling along a first path and one or more additional virtual pallets traveling along one or more additional paths are merged such that at least a set of the first virtual pallets and a set of the additional virtual pallets travel along a new path. As used herein, “pallet splitting operation” refers to an operation in which a plurality of virtual pallets traveling along a path are split such that a first set of the plurality of pallets travels along a first new path and a second set of the plurality of pallets travels along a second new path. As an example and referring toFIG.2, which illustrates a digital twin100of the manufacturing environment50, the environment layout module12defines routing control locations102-1,102-2,102-3,102-4,102-5,102-6(collectively referred to as “routing control locations102”) at which a pallet merging operation and/or pallet splitting operation can be performed. Specifically, a pallet merging operation can be performed at the routing control locations102-1,102-2,102-3,102-5. For example, pallets104-1,104-2traveling along paths106-1,106-2can be merged at the routing control location102-2such that the pallets104-1,104-2travel along path106-4. Furthermore, a pallet splitting operation can be performed at the routing control locations102-1,102-4,102-6. 
For example, pallets104-1,104-2,104-3traveling along paths106-11,106-12can be split at the routing control location102-1such that the pallets104-1,104-2,104-3travel along paths106-1,106-2,106-3, respectively. Additionally, both a pallet merging operation and a pallet splitting operation can be performed at the routing control location102-1. The pallets104-1,104-2,104-3may be collectively referred to herein as “pallets104,” and paths106-1,106-2,106-3,106-4,106-5,106-6,106-7,106-8,106-9,106-10,106-11,106-12may be collectively referred to herein as “paths106.” In one form and referring toFIGS.1and3, the sensor layout module14is configured to define a sensor layout including one or more sensors108-1,108-2, . . .108-n(collectively referred to hereinafter as “sensors108”) of the digital twin100based on one or more sensor parameters. In one form, the sensor parameters include a predetermined number of sensors (e.g., a minimum and maximum number of sensors108in accordance with resource-based constraints), a type of sensor, sensor placement rules (e.g., the sensors108can only be placed at the routing control locations102), and/or feedback information. As an example, the sensor layout module14initially defines the sensor layout such that the minimum number of sensors108are placed at one or more of the routing control locations102(e.g., the routing control locations102-1,102-2,102-3,102-4represent the minimum number of sensor locations of the digital twin100). In one form, the pallet parameter module16is configured to define one or more pallet parameters associated with the pallets104. The one or more pallet parameters may include a pallet merging time associated with each routing control location102, a pallet splitting time associated with each routing control location102, and/or pallet travel times between the routing control locations102. 
As used herein, “pallet merging time” refers to an amount of time needed to perform the pallet merging operation, and “pallet splitting time” refers to an amount of time needed to perform the pallet splitting operation. As used herein, “pallet travel time” refers to an amount of time needed for the pallet104to travel between a pair of routing control locations102. In one form, the workstation parameter module18is configured to define one or more workstations110-1,110-2,110-3,110-4,110-5,110-6,110-7,110-8,110-9(collectively referred to hereinafter as “workstations110”) of the digital twin100and one or more workstation parameters associated with the workstations110. The one or more workstation parameters may include a cycle time and/or downtime associated with each workstation110. In one form, the cycle time and/or downtime are arbitrarily or stochastically defined by an operator or defined based on empirical/historical cycle time/downtime data associated with the workstations of the manufacturing environment50. Additionally, the one or more workstation parameters may include a type of workstation110, a manufacturing operation performed at the workstation110(e.g., a pallet loading/unloading operation, a workpiece transformation operation, among others), robot parameters/mobility characteristics associated with one or more robots located at the workstation110, conveyor/fixture operational characteristics, among other workstation parameters. As an example, the workstation parameters may indicate that a pallet unloading operation is performed at the workstation110-7, a pallet loading operation is performed at the workstations110-8,110-9, and a workpiece transformation operation is performed at the remaining workstations110-1,110-2,110-3,110-4,110-5,110-6. 
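As an illustration, the pallet and workstation parameters described above might be collected into simple configuration structures like the following sketch. Every key name and numeric value here is an assumption invented for the example; the patent does not specify concrete data structures or timings.

```python
# Hypothetical parameter sets for the digital twin; all values are placeholders.

pallet_parameters = {
    # seconds needed to perform the merge/split at each routing control location
    "merge_time_s": {"102-1": 4.0, "102-2": 4.0, "102-3": 4.0, "102-5": 4.0},
    "split_time_s": {"102-1": 3.0, "102-4": 3.0, "102-6": 3.0},
    # travel time between pairs of routing control locations
    "travel_time_s": {("102-1", "102-2"): 25.0, ("102-2", "102-4"): 40.0},
}

workstation_parameters = {
    "110-7": {"operation": "pallet unloading", "cycle_time_s": 60.0, "downtime_pct": 2.0},
    "110-8": {"operation": "pallet loading", "cycle_time_s": 55.0, "downtime_pct": 1.5},
    "110-9": {"operation": "pallet loading", "cycle_time_s": 55.0, "downtime_pct": 1.5},
    "110-1": {"operation": "workpiece transformation", "cycle_time_s": 90.0, "downtime_pct": 5.0},
}
```

A manufacturing routine simulation would read structures such as these when stepping pallets through the digital twin.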
In one form, the manufacturing routine module20is configured to simulate a manufacturing routine of the pallets104and the workstations110based on the sensor layout of the sensors108, the one or more pallet parameters, and the one or more workstation parameters. As an example, the manufacturing routine module20simulates the pallets104traversing from the routing control location102-1to a given destination, such as the workstation110-7(e.g., a storage location where the finished workpieces of the pallets104are unloaded) or the workstations110-8,110-9(e.g., a storage location where unloaded pallets104are loaded with workpieces) via the one or more of the paths106, one or more of the workstations110, and one or more additional routing control locations102where the pallet merging/splitting operations are performed. In some forms, the manufacturing routine module20may perform known manufacturing simulation routines to simulate the manufacturing routine of the pallets104and the workstations110. In one form, the RL training module10includes a state module32, an action module34, a consequence state module36, a transient production value (TPV) module38, a steady state production value (SSPV) module40, and a control model module42. In one form, the state module32, the action module34, and the consequence state module36are collectively configured to perform a discrete-time stochastic control routine, such as a Markov decision process (MDP) routine. In one form, the state module32is configured to obtain state information from the sensors108during the manufacturing simulation routine. The state information includes a number and/or type of pallets at the routing control locations102at one or more times of the manufacturing simulation routine. 
As an example, the state information indicates a discrete time value (T) of the manufacturing simulation, the routing control location102, a number of pallets104that are classified as the first type (x1) and a number of pallets104that are classified as the second type (x2), as shown below in Table 1.

TABLE 1
State Information - Routing Control Location 102-2

  Time    x1    x2
  T0      1     0
  T1      1     1
  T2      2     1

In one form, the action module34is configured to determine an action at the routing control locations102based on the state information. The actions include, for example, the pallet merging operation and the pallet splitting operation. In one form, the actions may be determined at each of the discrete time values, as shown below in Table 2.

TABLE 2
Action - Routing Control Location 102-2

  Time    Action
  T0      Select and Merge Pallet 104-2 (x2)
  T1      Select and Merge Pallet 104-1 (x1)
  T2      Select and Merge Pallet 104-1 (x1)

In one form, the consequence state module36is configured to determine a consequence state at the routing control locations102based on the action. In one form, the consequence state information includes a number and/or type of pallets at the routing control locations102at the one or more times of the manufacturing simulation routine. As an example, the consequence state information indicates a consequent routing control location102associated with the action, a number of pallets104at the consequent routing control location102that are classified as the first type (x1′) and a number of pallets104at the consequent routing control location102that are classified as the second type (x2′), as shown below in Table 3.

TABLE 3
Consequence State - Routing Control Location 102-2

  Time    x1′    x2′
  T0      1      1
  T1      2      1
  T2      3      1

While Tables 1-3 illustrate the state information, action, and consequence state for the routing control location102-2, it should be understood that tables illustrating the state information, action, and consequence state may be generated for each routing control location102. 
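The state, action, and consequence-state records of Tables 1-3 form the kind of tuple a discrete-time stochastic control routine such as an MDP consumes. A minimal sketch follows; the field names and types are illustrative only, not taken from the patent.

```python
# Hypothetical encoding of the Table 1-3 records for routing control
# location 102-2; one Transition per discrete time value.
from dataclasses import dataclass

@dataclass
class Transition:
    time: str           # discrete time value T
    state: tuple        # (x1, x2): pallet counts by type before the action
    action: str         # e.g., a pallet merging operation
    consequence: tuple  # (x1', x2'): pallet counts by type after the action

transitions_102_2 = [
    Transition("T0", (1, 0), "merge pallet 104-2 (x2)", (1, 1)),
    Transition("T1", (1, 1), "merge pallet 104-1 (x1)", (2, 1)),
    Transition("T2", (2, 1), "merge pallet 104-1 (x1)", (3, 1)),
]

# In this example each merge adds one pallet of the selected type
# to the consequent counts.
for t in transitions_102_2:
    assert sum(t.consequence) == sum(t.state) + 1
```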
In one form, the TPV module38is configured to calculate a TPV of each routing control location102based on the consequence state and a transient objective function. The TPV corresponds to transient rewards of the simulated manufacturing routine at the given routing control location102. In one form, when more than two types of pallets104exist (e.g., the pallets104can travel along n paths106to arrive at one of the routing control locations102, where n>2), the TPV module38calculates the TPV for the routing control location102based on the following transient objective function:

TPV102= −|d1/(d1+d2+ . . . +dn) − x1′/(x1′+x2′+ . . . +xn′)|  (1)

In transient objective function (1), d1is a first target production value associated with a first type of pallets104, d2is a second target production value associated with a second type of pallets104, and dnis an nth target production value associated with an nth type of pallets104. The first, second, and nth target production values collectively form a production ratio that is based on the first target production value and a sum of the first, second, and nth target production values. As used herein, “target production value” refers to a number of pallets104of the given type that need to traverse from a defined origin (e.g., the routing control location102-2) to the destination (e.g., the workstation110-7) along the paths106to satisfy predefined time and/or production-based constraints. In transient objective function (1) and as described above, x1′ is the number of pallets104between the consequent routing control location102and the destination that are classified as the first type, x2′ is the number of pallets104between the consequent routing control location102and the destination that are classified as the second type, and xn′ is the number of pallets104between the consequent routing control location102and the destination that are classified as the nth type. 
Furthermore, in transient objective function (1), the numbers of the first, second, and nth types of pallets104collectively form a transient routing control location ratio that is based on the number of pallets104that are classified as the first type of pallets104(x1) and a sum of the number of the first, second, and nth types of pallets104. In one form, the SSPV module40is configured to calculate a SSPV of each routing control location102based on the consequence state and a steady state objective function. The SSPV corresponds to steady state rewards of the simulated manufacturing routine at the given routing control location and is represented as an objective function. In one form, when more than two types of pallets104exist (e.g., the pallets104can travel along n paths106to arrive at one of the routing control locations102, where n>2), the SSPV module40calculates the SSPV for the routing control location102based on the following steady state objective function:

SSPV102= −|d1/(d1+d2+ . . . +dn) − c1/(c1+c2+ . . . +cn)|  (2)

In steady state objective function (2), c1is a number of pallets104that are classified as the first type of pallets104that have passed through the given routing control location102, c2is a number of pallets104that are classified as the second type of pallets104that have passed through the given routing control location102, and cnis a number of pallets104that are classified as the nth type of pallets104that have passed through the given routing control location102. 
Furthermore, in steady state objective function (2), the numbers of the first, second, and nth types of pallets104collectively form a steady state routing control location ratio that is based on the number of pallets104that are classified as the first type of pallets104(c1) and a sum of the number of the first, second, and nth types of pallets104. In one form, the control model module42is configured to selectively adjust one or more reinforcement parameters of the RL training module10based on the TPV and/or the SSPV. As an example, the control model module42is configured to determine whether the TPV and SSPV are less than a respective threshold value (i.e., a threshold TPV and a threshold SSPV). The threshold values for the TPV and SSPV may be equal or unequal. In one form, the threshold value for each of the TPV and the SSPV may be equal to (or approximately equal to) zero. Accordingly, comparing the TPV and the SSPV to the respective threshold enables the RL training module10to determine whether the reinforcement parameters of the RL training module10satisfy certain predefined production constraints. In one form, when the control model module42determines that the TPV and/or the SSPV are greater than the respective threshold value, the control model module42is configured to adjust one or more reinforcement parameters of the RL training module10. The reinforcement parameters include, but are not limited to, the state information, the action, the sensor layout of the one or more sensors108, the transient objective function, and/or the steady state objective function. The control model module42may iteratively adjust the reinforcement parameters until the TPV and the SSPV are less than the respective threshold values. 
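Objective functions (1) and (2) both reduce to comparing a target production ratio against an observed ratio. The following is a minimal Python sketch of that computation, generalized to n pallet types; the function names and the example target values are assumptions, not the patent's implementation.

```python
# Illustrative sketch of transient objective function (1) and steady state
# objective function (2) for one routing control location.

def tpv(targets, downstream_counts):
    """Transient production value: -|d1/sum(d) - x1'/sum(x')|."""
    return -abs(targets[0] / sum(targets)
                - downstream_counts[0] / sum(downstream_counts))

def sspv(targets, passed_counts):
    """Steady state production value: -|d1/sum(d) - c1/sum(c)|."""
    return -abs(targets[0] / sum(targets)
                - passed_counts[0] / sum(passed_counts))

# Two pallet types with a 1:1 production target (placeholder values).
targets = [10, 10]

skewed = tpv(targets, [3, 1])     # downstream ratio 0.75 vs. target 0.5
assert abs(skewed + 0.25) < 1e-9  # penalty of 0.25

balanced = sspv(targets, [5, 5])  # throughput matches the target ratio
assert balanced == 0.0            # no penalty; the maximum possible reward
```

Because each value is the negative of an absolute deviation, both rewards are at most zero, and a threshold near zero (as the text describes) flags any routing control location whose pallet mix drifts from the target ratio.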
As an example, the control model module42is configured to iteratively adjust the sensor layout of the digital twin100(i.e., the number of sensors108and/or the placement of the sensors108) until each TPV is less than or equal to a threshold TPV and each SSPV is less than or equal to the threshold SSPV. As an additional example, the sensor layout includes a predefined minimum number of the sensors108, and the sensor layout is adjusted by adding one or more sensors108until a predefined maximum number of sensors108is reached. As another example, the control model module42is configured to adjust the threshold values of the transient objective function and/or the steady state objective function when the updated sensor layouts, state information, and/or action do not satisfy the time and/or production-based constraints associated with the transient and steady state objective functions. As an additional example, the control model module42is configured to adjust the state information when the sensor layout is adjusted by adjusting the number and/or types of pallets104provided at one or more of the routing control locations102(e.g., the control model module42may increase or decrease the number and pathways in which the pallets104arrive and depart from the routing control location102-1). As yet another example, the control model module42is configured to adjust the action performed by at least one of the pallets104at one or more of the routing control locations102. In one form, the control model module42determines that the NN-based control system54is trained when the TPV and/or the SSPV are less than the respective threshold value. When the control model module42determines that the NN-based control system54is trained, the control model module42may generate a state-action information correlation model (e.g., a deep neural network model) indicating, for each routing control location102, an action associated with various state information. 
As an example and as shown below in Table 4, a state-action information correlation model for the routing control location102-2is shown. While various actions are shown for given example sets of state information, it should be understood that other sets of state information and actions may be included and are not limited to the examples provided in Table 4.

TABLE 4
State-action Information Correlation Model - Routing Control Location 102-2

  State (x1, x2)    Action
  (1, 0)            Pallet Merging Operation (PMO) from X2
  (1, 1)            PMO from X1
  (2, 1)            PMO from X1

In one form, the control model module42is configured to define a plurality of trained routes of the pallets104based on the state-action information correlation model of each routing control location102. Specifically, the control model module42is configured to define the trained routes based on the action and state information defined in the state-action information correlation model of each routing control location102. As an example, the control model module42identifies a first action associated with a first set of state information of routing control location102-1, a second action associated with a second set of state information of routing control location102-2, and additional actions associated with additional routing control locations102based on the state-action information correlation models. As such, the control model module42defines the route based on a path of a given pallet104associated with the first action, the second action, and the additional actions. It should be understood that the control model module42may define a trained route for each combination of state information/actions of the state-action information correlation models. Accordingly, the NN-based control system54may control the pallets56of the manufacturing environment50based on the state information obtained by the sensors52and the plurality of trained routes defined by the control model module42. 
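A state-action information correlation model such as Table 4 amounts to a lookup from observed state to action. The sketch below is hypothetical: the fallback "hold" action for unseen states is an assumption, and a trained deep neural network model would generalize beyond the tabulated entries rather than fall back.

```python
# Hypothetical lookup table mirroring Table 4 for routing control
# location 102-2; keys are (x1, x2) pallet counts.
policy_102_2 = {
    (1, 0): "PMO from X2",
    (1, 1): "PMO from X1",
    (2, 1): "PMO from X1",
}

def select_action(policy, state):
    # Unseen states fall back to a hold action in this sketch; the NN-based
    # control system would instead generalize from training.
    return policy.get(state, "hold")

assert select_action(policy_102_2, (1, 0)) == "PMO from X2"
assert select_action(policy_102_2, (9, 9)) == "hold"
```

Chaining such lookups across routing control locations (102-1, then 102-2, and so on) yields one trained route per combination of states and actions, as the text describes.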
Referring toFIG.4, a routine400for training the NN-based control system54is shown. At404, the RL training module10initiates a simulation of the manufacturing routine, and the RL training module10obtains state information from the sensors108of the digital twin100at408. At412, the RL training module10determines an action at each routing control location102based on the state information, and the RL training module10determines a consequence state from the sensors and based on the action at416. At420, the RL training module10calculates the TPV and SSPV based on the consequence state, the transient objective function, and the steady state objective function, and the RL training module10selectively updates the reinforcement parameters based on the TPV and the SSPV at424. Additional details regarding the selective updating of the reinforcement parameters are provided below with reference toFIG.5. Referring toFIG.5, a routine500for training the NN-based control system54is shown. At504, the RL training module10initiates a simulation of the manufacturing routine, and the RL training module10obtains state information from the sensors108of the digital twin100at508. At512, the RL training module10determines an action at each routing control location102based on the state information, and the RL training module10determines a consequence state from the sensors and based on the action at516. At520, the RL training module10calculates the TPV and SSPV based on the consequence state, the transient objective function, and the steady state objective function. At522, the RL training module10utilizes the tuple (i.e., the state, action, consequence state, and the sum of the transient and steady state functions for each routing control location102where the sensors108are placed) to train the NN-based control system54using a reinforcement learning routine until training is ended. At523, the RL training module10evaluates the manufacturing simulation using the trained model obtained at522. 
At524, the RL training module10determines whether the TPV is greater than a threshold TPV. If so, the routine500proceeds to532. Otherwise, if the TPV is less than the threshold TPV at524, the routine500proceeds to528. At528, the RL training module10determines whether the SSPV is greater than a threshold SSPV. If so, the routine500proceeds to532. Otherwise, the routine500proceeds to556, where the RL training module10determines the NN-based control system54satisfies the production constraints. At532, the RL training module10selects a reinforcement parameter to be adjusted, and at536, the RL training module10determines whether the selected reinforcement parameter corresponds to a new sensor layout. If so, the RL training module10adjusts the sensor layout at540and then proceeds to504. Otherwise, if the selected reinforcement parameter is not the sensor layout, the routine500proceeds to544. At544, the RL training module10determines whether the selected reinforcement parameter corresponds to an adjustment of one of the objective function thresholds. If so, the RL training module10adjusts the steady state and/or transient objective function thresholds at548and then proceeds to504. Otherwise, if the selected reinforcement parameter is not one of the objective function thresholds, the routine500proceeds to552, where the RL training module10adjusts the action and/or state information for at least one of the routing control locations102and then proceeds to504. Unless otherwise expressly indicated herein, all numerical values indicating mechanical/thermal properties, compositional percentages, dimensions and/or tolerances, or other characteristics are to be understood as modified by the word “about” or “approximately” in describing the scope of the present disclosure. This modification is desired for various reasons including industrial practice, material, manufacturing, and assembly tolerances, and testing capability. 
As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.” In this application, the term “controller” and/or “module” may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); a digital, analog, or mixed analog/digital discrete circuit; a digital, analog, or mixed analog/digital integrated circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor circuit (shared, dedicated, or group) that executes code; a memory circuit (shared, dedicated, or group) that stores code executed by the processor circuit; other suitable hardware components (e.g., op amp circuit integrator as part of the heat flux data module) that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term memory is a subset of the term computer-readable medium. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium may therefore be considered tangible and non-transitory. Non-limiting examples of a non-transitory, tangible computer-readable medium are nonvolatile memory circuits (such as a flash memory circuit, an erasable programmable read-only memory circuit, or a mask read-only circuit), volatile memory circuits (such as a static random access memory circuit or a dynamic random access memory circuit), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc). 
The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general-purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks, flowchart components, and other elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer. The description of the disclosure is merely exemplary in nature and, thus, variations that do not depart from the substance of the disclosure are intended to be within the scope of the disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure. | 33,119 |
11860594 | EXAMPLES OF THE INVENTION InFIG.1, a building or group of buildings101contains a number of ventilation, heating and/or cooling devices Asset1, Asset2, Asset3. . . Asset N, deployed in individual rooms or areas of the building or group of buildings101. The Assets1,2,3. . . N have the capability of storing energy either alone or collectively. The energy storage may, for example, be in the form of a heat sink, battery, fly-wheel, up-hill pumping device or other. The ventilation, heating and/or cooling of the building or group of buildings101is controlled by a building energy management system103which switches on and off the Assets1,2,3. . . N and causes them to store energy. The Assets1,2,3. . . N draw power141from the grid105, the power draw-down for each being controlled by the building energy management system103using Ethernet or Wi-Fi connections (the individual power connections to each asset are omitted for clarity). A broadband connection131links the building energy management system103to a server or servers107which may be remote from or collocated with the building or group of buildings101. The server provides an artificial neural network to generate predictive information over time115concerning energy requirements based on known consumption patterns of the Assets1,2,3. . . N obtained from those assets through the building energy management system103. This information is stored as a profile113in respect of each Asset1,2,3. . . N for individual days of the week to reflect usage patterns, which may vary from one day to another. Predicted and spot energy cost information109is obtained from the electricity supplier and fed to the cost model for the assets. Meteorological information111, particularly temperature and humidity predictions for the immediate future in the locality of the building or group of buildings101, is downloaded to the server(s)107. 
The neural network on the server107is a regression-based predictive learning programme which continually updates the profiles113based on experience; in this way the profiles become “smarter” or more reflective of reality as time passes. By combining the meteorological information111with the asset profiles113, it is possible to gain a prediction on an hour by hour/minute by minute basis of the forthcoming energy needs of the assets. By combining this with the cost information109, it is possible to predict costs and programme the building energy management system to prepare an energy draw-down profile to draw power from the grid105when the energy costs are at their lowest and cause the Assets1,2,3. . . N to store enough excess energy for use when energy costs are high so that the Assets1,2,3. . . N do not have to draw energy from the grid105at times of predicted higher costs. However, the embodiment shown inFIG.1goes further than this. Through the link151, the neural network on the server107identifies when there is excess power in the grid105because the frequency of the grid increases, say, by 1% above the nominal frequency (50 Hz in the UK). At that point the server107switches the building energy management system to cause the Assets1,2,3. . . N to take and store energy up to a pre-set maximum. If that draw-down, plus what would be drawn down following the energy profile at a particular time, would take the asset concerned above its available capacity, preference is given to drawing down excess energy off the grid (rather than following the pre-set profile) so that the management system always guarantees to the electricity supplier the availability of capacity to absorb excess energy up to an agreed maximum. The ability to absorb excess energy up to a maximum can be agreed with the energy supplier on a time basis, so that the capacity described is only available to the grid at certain times of day or week. 
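The frequency-based switching described above can be sketched as a simple per-interval rule. The 1% over-frequency trigger and 50 Hz nominal value come from the text; the function name, the kW/kWh figures, and the crude one-hour storage proxy are assumptions for illustration.

```python
# Sketch of the excess-energy rule: when grid frequency rises about 1%
# above nominal, assets absorb and store energy up to a pre-set maximum,
# in preference to following the pre-set draw-down profile.

NOMINAL_HZ = 50.0                        # UK grid nominal frequency
EXCESS_THRESHOLD_HZ = NOMINAL_HZ * 1.01  # 1% above nominal

def draw_command(grid_hz, profile_draw_kw, stored_kwh, capacity_kwh,
                 max_absorb_kw):
    """Return the power draw (kW) commanded to an asset for this interval."""
    if grid_hz >= EXCESS_THRESHOLD_HZ:
        # Excess power on the grid: absorb up to the agreed maximum,
        # limited by remaining storage capacity (crude one-hour proxy).
        headroom_kw = max(0.0, capacity_kwh - stored_kwh)
        return min(max_absorb_kw, headroom_kw)
    # Normal frequency: follow the cost-optimised draw-down profile.
    return profile_draw_kw

print(draw_command(50.6, 2.0, 40.0, 100.0, 15.0))  # excess grid energy: 15.0
print(draw_command(50.0, 2.0, 40.0, 100.0, 15.0))  # normal: profile draw, 2.0
```

A fuller model would also reserve the agreed absorption capacity during the contracted hours, as the last sentence of the passage describes.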
FIG.2shows a schematic diagram of a cooling unit, which may be one of the assets, to which the system ofFIG.1was applied. The unit comprises a duct201in which fans202are mounted, driving air from a closed space, such as a room, through a heat exchanger203to a chiller. Warm air from the chiller passes through the heat exchanger203, giving up heat to a fluid passing through the heat exchanger from a cold duct212from the bases of fluid storage tanks211to a duct213which takes the warmed fluid to the top of the fluid storage tanks. Warmed fluid is taken from the tops of the tanks211through warm fluid ducts224to an electric chiller221or an absorption chiller222. In the chillers the fluid is cooled and passed back to the bottom of the tanks211through cool fluid ducts223. In both the electric chiller221and the absorption chiller222, energy is consumed in the pumping process within the chillers. The use of the tanks211gives the unit considerable storage capacity for cooled fluid. Thus, by allowing the chillers221or222to cool more fluid than is needed for immediate use in the heat exchanger203, a store of cooled fluid is built up for later use. In a sense the tanks211act as energy batteries in the system. By running the chillers at times of low energy cost and storing the cooled fluid for later use, considerable cost savings can be achieved over a system in which the chillers are run to meet immediate demand from the heat exchanger203. In simple known systems the heat exchanger203would be connected directly to the chillers221or222, without the tanks211. In this case the maximum demand on the chillers would occur at times of the day when external temperatures were at their highest and, probably, when similar equipment elsewhere is demanding energy resources, leading to a shortage of supply in the electricity grid.
By employing the present invention, energy can be taken from the grid at times of low cost and/or excess supply, and not taken when there is a supply shortfall and/or when cost is high. To heat, rather than to cool, the flows in lines212,213,223,224are reversed, with the chillers acting as fluid heaters. FIGS.3A to3Eillustrate the beneficial impact of the energy management system of the invention applied to the asset illustrated inFIG.2. InFIG.3A, a typical pricing structure for the supply of electricity to commercial premises is shown. Between 06 30 and 17 30, and again between 20 30 and 03 30, a standard tariff301applies. Between 03 30 and 06 30 the price302is low, about half the standard tariff, reflecting low demand at that time. Between 17 30 and 20 30 the price303is high, reflecting high demand for electric energy at that time. FIG.3Bshows the energy demands of the asset ofFIG.2(bars310) and the energy losses from the asset (bars311), primarily as a result of fluid storage in the tanks. The asset ofFIG.2in this mode is operating with a conventional building energy management system which controls energy provision to the assets based on previous patterns of requirements and meteorological information, i.e. predictions of the outside temperature. Thus, the system tends to draw energy from the grid to meet short-term predictions and needs. The electrical energy taken from the grid at any time is shown by bars322, with line321showing the stored energy (in the case of the asset inFIG.2, this is in the form of chilled fluid in tanks211). By matching energy consumed with energy demanded, the system maintains the energy store in the tanks at about 50% of capacity; the stored energy is represented by line324. The system has about 50% redundancy in its energy storage capacity, but the system is also taking considerable amounts of energy from the grid at the peak period between 17 30 and 20 30. FIG.3Cshows the same system, but now using energy price information.
In this model the system draws energy up to its total capacity when the price is lowest, while taking account of predicted future demands. The pattern of energy consumption310in this model is the same as that of the model controlled by an existing standard building energy management system. The asset prioritises taking energy from the grid between 03 30 and 06 30 when the tariff is lowest, storing that energy in the tanks211as cooled fluid, and does not take further energy from the system until the stored energy has reduced to about 10% of stored energy capacity at about 11 30; as the tariff at that time is the standard tariff, it draws sufficient energy to maintain the store at 10% of capacity, but does not draw any excess for the time being. For the exemplified asset, a time of high demand is between 17 30 and 20 30, exactly when the electricity supply tariff is at its highest. To avoid paying the highest tariff, the system anticipates the high demand and stores enough energy to meet that demand between 16 30 and 17 30, when the standard tariff applies (the standard tariff being approximately half the peak tariff). The energy stored in the system is shown by line331, which can be seen to be rising to a peak after energy is drawn from the grid for storage purposes when power is relatively cheaper, and dropping as energy is taken from the tanks211and used during periods when energy is relatively more expensive. As can be seen, line331drops to 10% of capacity when storage is simply matching demand. As the energy storage pattern has changed from that inFIG.3B, the pattern of energy losses from the asset represented by line311changes. The losses are higher than those inFIG.3Bimmediately after energy recharging, but lower when energy storage is reduced to 10% of capacity. Overall, the total losses are reduced by 44% from the previous value and running costs are reduced by 17.6% compared with the conventional building energy management system ofFIG.3B.
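The draw-down strategy just described (fill the store at the cheapest tariff, hold a 10% floor at the standard tariff, pre-charge ahead of the peak, and draw nothing during the peak) can be sketched as a greedy schedule. This is an illustrative simplification assuming equal slot lengths and ignoring storage losses; all names and the example tariff values are assumptions:

```python
def schedule_draw(tariffs, demands, capacity, floor=0.1):
    """Greedy draw-down schedule over equal time slots: fill the store
    in the cheapest slots, hold it at a 10% floor at the standard
    tariff, pre-charge ahead of a peak-tariff slot, and draw nothing
    at the peak (storage losses are ignored for brevity)."""
    cheapest, dearest = min(tariffs), max(tariffs)
    store = capacity * floor
    draws = []
    for i, (price, need) in enumerate(zip(tariffs, demands)):
        if price == cheapest:
            draw = capacity - store + need            # fill while cheap
        elif price == dearest:
            draw = 0.0                                # run from storage at peak
        else:
            next_peak = i + 1 < len(tariffs) and tariffs[i + 1] == dearest
            target = capacity if next_peak else capacity * floor
            draw = max(0.0, target - store + need)    # hold floor / pre-charge
        store += draw - need
        draws.append(draw)
    return draws

# Tariff pattern echoing FIG. 3A: low, standard, peak, standard
schedule_draw(tariffs=[1, 2, 4, 2], demands=[10, 10, 10, 10], capacity=50)
# → [55.0, 10.0, 0.0, 0.0]: the peak slot is served entirely from storage
```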
Because electricity generating companies have a requirement for the take-up of excess energy generated, or to cut off supplies for a short time when energy demand exceeds generation capacity, the companies have tariffs under which they will pay to have the excess energy taken. InFIG.3D, the system is organised not to demand more than 50% of the input capacity at any one time, with the remaining 50% of capacity made available to the energy in the grid. This is controlled by monitoring the frequency of the grid as described with reference toFIG.1and allowing power to flow to the unit and to be stored in the tanks211, up to the available capacity, for a short time. The monitoring system also identifies a shortfall of generation capacity on the grid, by a frequency reduction on the grid, whereupon the system stops the asset taking power. This latter capability will become even more important as electricity supply companies increasingly move to the spot pricing of major commercial consumers, where the price relates to the actual demand at any time. FIG.3Dillustrates the effect of use of an energy management system as described inFIG.1in connection with the energy consuming asset shown inFIG.2. InFIG.3Dthe energy management system limits the power taken by the asset, represented by bars342, to 50% of capacity, the other 50%, shown by bars343, being available to the grid for off-load of excess power. The output of the asset at any time, represented by bars310, is unchanged, but the rate of replenishment of energy stored in the tanks211(FIG.2) is spread over longer periods. But as these periods are at times when energy costs are below the peak costs, there is no difference from the model ofFIG.3Cin the costs for the total energy supplied. However, as there is now capacity for the grid to off-load excess energy up to a total capacity of 50% of the asset, there is an additional payment from the energy company for this facility.
Furthermore, the asset has resilience to withstand withdrawal of supplies for short periods when demands on the grid are high, and this can be done when the frequency in the grid is detected to have dropped below a pre-set level, say 1% below the nominal frequency (50 Hz in the UK). InFIG.3Dthe losses in the system are shown by bars311; these are a little higher than in the model ofFIG.3C, but still significantly below those of the model ofFIG.3B, and the cost savings to a consumer over the standard building energy model ofFIG.3Bare correspondingly greater. Table 1 below illustrates the impact of the models ofFIGS.3B,3C and3D.

TABLE 1

Input/Output Capacity 100 KwH | Conventional prior art building management (FIG. 3B) | Price-sensitive building energy management (FIG. 3C) | Building energy management according to the invention
Power usage KwH | 504.01 | 488.34 | 492.18
Losses KwH | 70.65 | 39.30 | 47.0
Reduction in losses | - | 44% | 33%
Cost/day in £ | 61.20 | 50.41 | 19.58
Savings over conventional prior art building management (£) | - | 10.79 (17.6%) | 41.62 (68.0%)

The savings achieved using the system ofFIG.1(the present invention) are, therefore, considerable. FIG.3Eshows the costs on two separate successive summer days. Day 1 is the day on which the examples ofFIGS.3B,3C and3D were drawn up. Day 2 is the following day, which was warmer. As a result of the warmer weather, more energy was consumed on Day 2, but the relative cost savings over the prior art building management system were maintained. Bars351(Day 1) and361(Day 2) show the costs using conventional building management controls, bars371and372show the costs on Day 1 and Day 2 managing according to energy costs, and bars381and382show the costs on Day 1 and Day 2 using a building energy management system in accordance with the invention. Line391shows the cumulative costs over Days 1 and 2 using a conventional building management system, line392the same but using a building management system controlling based on costs, and line393the cumulative costs over Days 1 and 2 using a building management system according to the present invention.
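The savings figures in Table 1 follow directly from the daily costs, as a quick check confirms (figures taken from the table above; the dictionary layout is just for the example):

```python
# Daily costs from Table 1 (£/day)
costs = {"FIG. 3B": 61.20, "FIG. 3C": 50.41, "invention": 19.58}
baseline = costs["FIG. 3B"]
for model in ("FIG. 3C", "invention"):
    saving = baseline - costs[model]
    print(f"{model}: £{saving:.2f}/day saved ({100 * saving / baseline:.1f}%)")
# FIG. 3C: £10.79/day saved (17.6%)
# invention: £41.62/day saved (68.0%)
```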
Although the building asset described by way of example is a space cooling and warming system, the principles can be applied to any heating, ventilation or cooling asset in a building, and indeed to machinery and other powered devices, provided they have an energy store associated with them. Although the energy store described is a fluid tank, other energy stores such as batteries and flywheels can be used. The main criterion for such stores is that they have sufficient capacity to store and supply energy to the asset concerned during periods in which power may be interrupted. Although in the foregoing example the level of demand in the grid is identified by measuring the frequency of supply, it can also be identified by measuring the voltage of supply, a higher than pre-set voltage indicating low demand on the grid, and a lower than pre-set voltage indicating excess demand. The system described in the foregoing example can also be used to provide third parties, for example energy traders, power distribution companies and energy aggregation companies, with information about the predicted demand from a consumer using such a system, the flexibility within his system for reducing or increasing power consumption in a given period, and the ability of the third parties to seek adjustments of the consumption in the given period. For example, in the UK and elsewhere energy is a traded commodity. Supply companies buy energy on long-term and shorter-term contracts for supplies in half-hourly slots covering each day. Prices of these contracts vary according to the predicted demand in the half-hourly slots. At a pre-set time (normally 30 minutes in the UK) before the beginning of a slot, trading ceases and the National Grid has rapidly available resources which it can make available to the energy supply companies to make up for any shortfall between their contracted supplies and their actual demand.
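Whether the grid state is read from supply frequency or supply voltage, the classification logic is the same shape. A minimal sketch, with the 1% band and the labels chosen for illustration:

```python
def grid_state(measurement, nominal, band=0.01):
    """Classify grid demand from either supply frequency or supply
    voltage: a reading above the pre-set band indicates low demand
    (excess supply), below it excess demand, otherwise normal."""
    if measurement > nominal * (1 + band):
        return "excess supply"
    if measurement < nominal * (1 - band):
        return "excess demand"
    return "normal"

grid_state(50.6, 50.0)     # frequency-based reading
grid_state(224.0, 230.0)   # voltage-based reading
```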
However, the supply companies have no control over the price of such short-term supplies, which can be very high, nor do they have visibility of the costs of such supplies for some time after the event. To avoid the need to purchase such supplies, the system of the present invention gives the energy traders in the supply companies information concerning the degree to which the consumers using the system can be flexible in their demands, and enables them to seek a reduction by the consumer of demand when the supply has fallen short of contracted energy supplies for a particular period. Likewise, if in a particular period the energy supplier has contracted for more energy than the predicted demand, the system enables identification of consumers who can absorb more power in the period concerned. The flexibility provided by the method of the invention enables shortfalls in contracted supplies to be reduced or eliminated without recourse to high-cost stand-by supply, or excess contracted supply to be absorbed. The consumer would be offered a price for reducing or increasing demand in a period in which contracted supply is predicted to be below or above actual requirement; the price, of course, would be below the price the trader would have to pay to secure extra supplies on the market, or the loss incurred in having excess supplies. As another example, presently in the UK, power distribution companies (so-called Distribution Network Operators) try to install enough distribution capacity to meet all demands. This is expensive. Additionally, with consumers installing micro-generation capacities such as solar power, and increasingly battery storage and, in the future, vehicle recharging facilities, predicting and installing the required distribution capacity is becoming increasingly difficult, as the sources of supply and demand become more opaque. Furthermore, supplies based on solar or wind power can vary enormously depending on the weather.
Rather than installing capacity to cope with every conceivable situation, the Distribution Network Operators would find the system of the present invention very advantageous: by knowing the predicted demand of a consumer and that consumer's ability to vary his demand, the Distribution Network Operator can plan on the basis that excess demand can be reduced. The consumer would be offered a price for reducing demand in a period in which the Distribution Network Operator has excess demand. In this latter case the voltage in the Distribution Network Operator's system would be measured by the Distribution Network Operator, or a third-party service provider to the Distribution Network Operator, who would request the energy consuming and storage system to reduce demand; the system or its manager would then respond. The response can be made manually, by overriding the system, or automatically. Energy aggregators operate similarly: by identifying the possibility of reducing or increasing demand, energy aggregators can smooth their supplies across several inputs. A similar consumer price incentive structure can be envisaged. This leads to a stacking system in which bids from more than one energy company, distribution company and aggregation company are made to reduce or increase demand. A stacking system would automatically identify, using the bid information and the other operating parameters of the system, when a bid is advantageous to the consumer and, in the case of competing bids or changes in projected consumption at a particular time, which bid is most price advantageous or whether the bids are complementary. FIG.4illustrates one possible implementation. The energy consuming and storage system makes available to energy suppliers and distributors its available flexibility in 30-minute periods, in terms of kWh available, on a 24-hour day-ahead basis401.
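The bid-selection step of such a stacking system can be sketched as below. The bid structure (a payment offered against the extra operating cost the demand change would cause) is a hypothetical simplification for illustration, not the patent's own data model:

```python
def best_bid(bids):
    """Stacking-system sketch: from competing bids to reduce or increase
    demand, pick the one most price-advantageous to the consumer, or
    None if no bid leaves the consumer better off."""
    def net_benefit(bid):
        return bid["payment"] - bid["extra_cost"]
    worthwhile = [b for b in bids if net_benefit(b) > 0]
    return max(worthwhile, key=net_benefit) if worthwhile else None

bids = [
    {"company": "energy supplier", "payment": 40.0, "extra_cost": 15.0},
    {"company": "distribution company", "payment": 25.0, "extra_cost": 2.0},
    {"company": "aggregator", "payment": 10.0, "extra_cost": 12.0},
]
best_bid(bids)["company"]   # "energy supplier" (net benefit 25.0 beats 23.0)
```

A fuller implementation would also test whether bids for different slots are complementary rather than competing, as the text notes.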
It can also provide to the operator of the energy consuming and storage system a predictive curve showing the impact on, for example, space temperature caused by a reduction in the energy drawn from the grid402. In the example shown, the predicted energy demands are shown as bars411and412, the lower bar411indicating a required energy demand and the upper bar412predicted demand that the user of the system is prepared to sacrifice as a trade with an energy company or distributor to reduce demand. As a result of the predicted demand, the projected temperature in a building maintained by the system is shown by line421. If an energy or distribution company wishes the operator to forgo the flexible demand element in a half-hour slot, shown by bar413, the system will predict the revised projected temperature in the building, line422. In practice, line423shows what occurred in this example, which was a little better than projected. By reviewing the prediction of building temperature, the system user can decide whether or not to permit a reduction of energy taken to the extent of bar413. If, as shown in this example, the building operator has set a minimum air temperature in the building of 24° C., the predicted temperature even with the reduced energy supply remains above that temperature, and the operator would be prepared, for a price, to accept reduced energy input to the extent of the height of bar413, but not reducing the inflexible energy needs indicated by bar411. It is also possible for the system to project periods, in 30-minute slots, in which assets in the system could absorb energy; this is indicated by the bars430. If the energy generator or distributor wanted excess energy in the system to be absorbed, the bars430indicate which assets in the energy consuming and storage system have the capacity to absorb and store excess supply, and this can be made available to the energy generator or distributor.
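The operator's decision rule in that example reduces to a simple check against the pre-set temperature floor. A minimal sketch, assuming the system already supplies the revised temperature projection (line422) as a list of values:

```python
def can_accept_reduction(predicted_temps_c, min_temp_c=24.0):
    """Decide whether the operator can trade away the flexible demand
    element (bar 413): accept only if the revised projected building
    temperature never falls below the operator's pre-set minimum."""
    return all(t >= min_temp_c for t in predicted_temps_c)

can_accept_reduction([25.1, 24.6, 24.2])   # True: stays above the 24 °C floor
can_accept_reduction([24.5, 23.8])         # False: the floor would be breached
```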
As a final tool, the complexity of running the kind of contracts necessary securely to implement the payment arrangements for such a system, operating across multiple sites and involving multiple suppliers, distribution companies and aggregators, probably requires the use of block-chain technology. Block-chain technology can securely establish that a request for a change in demand was made, that the change was made, the payment due as a result of the changed demand, and that the payment was made. The actual monitoring and billing system to be used is outside the scope of the present invention. In a further use of the invention, in times of power shortages in the grid, energy can be exported from the building to the grid from the stored energy in the storage systems in the building. The relevant Distributed Network Operator can determine its headroom to receive energy from a building employing the invention, and the building management system can then respond to a request to supply energy to the grid, within that headroom, at a price offered by the Distributed Network Operator. In a further development of the invention, the building is on a site in which electric vehicle charging points are installed, and any vehicles at the charging points comprise energy storage assets. This is illustrated inFIG.5. InFIG.5, a network of electric vehicle charging points501is installed on a site adjacent to or linked to a building101already having a network102of energy storage and energy consuming assets Asset2,3,4. . . N, and connected to an alternating current electric supply grid105, controlled by a distributed network operator (DNO), through an energy management system103and a local electricity supply connection141, illustrated inFIG.5as a step-down transformer.
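The block-chain role described (establishing that a request was made, that the change was made, the payment due, and that payment was made) amounts to keeping a tamper-evident append-only log. A minimal hash-chained sketch, not any particular block-chain platform:

```python
import hashlib
import json

def add_record(chain, record):
    """Append a tamper-evident record (request, change, payment due,
    payment made) by hashing it together with the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": block_hash})
    return chain

def verify(chain):
    """Recompute every hash; any altered record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps(block["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
add_record(chain, {"event": "request", "kwh": -50, "slot": "17:30"})
add_record(chain, {"event": "change confirmed", "kwh": -50})
add_record(chain, {"event": "payment", "gbp": 12.50})
verify(chain)   # True; altering any earlier record breaks verification
```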
The network of electric vehicle charging points501comprises electric vehicle charging points507, some of which are shown connected to and charging electric cars509. The local supply connection141will normally have limited capacity to supply both the building and the vehicle charging network501, with a risk that the connection would become overloaded and fail. One option would be to install a higher capacity connection. But that remedy, applied widely, would be extremely expensive and disruptive; it may also need an increase in over-all generating capacity in the grid to deal with peak loads. In this instance, the need for providing additional capacity is largely avoided by managing the combined load on the building energy network102and the electric vehicle charging point network501. Furthermore, the vehicles509provide further energy storage and consumption assets which can be used in connection with the building network assets in managing energy. The building101could be a shopping complex, warehouse, airport terminal building or railway station, these being examples of situations in which significant electric vehicle charging facilities may be provided in the vicinity of the building101. In a further development, the energy management system103is linked to a vehicle parking booking system510, of the kind that is common for pre-booking parking spaces at airports and hotels. Use of such a system would enable the energy management system103to be aware of forthcoming demands on the vehicle charging network501and prioritize energy requirements in the building101and the vehicle charging network501according to predicted demands. The electric vehicle charging network501and the network102of building Assets2,3. . . N draw power141from the grid105; the power draw-down for each is controlled by the energy management system103using Ethernet or Wi-Fi connections (the individual power connections to each asset are omitted for clarity).
A broadband connection131links the energy management system103to a server or servers107which may be remote from or collocated with the site. The server provides an artificial neural network to generate predictive information over time115about energy requirements, based on known (or predicted) consumption patterns and energy requirements of the electric vehicle charging network501and Assets2,3. . . N obtained from those assets through the energy management system103. This information is stored as a profile113in respect of the electric vehicle charging network501and each other Asset2,3. . . N for individual days of the week to reflect usage patterns, which may vary from one day to another. Predicted and spot energy cost information109is obtained from the electricity supplier and fed to the cost model for the assets. Meteorological information111, particularly temperature and humidity predictions for the immediate future in the locality of the building or group of buildings101and the electric vehicle charging facility501, is downloaded to the server(s)107, noting that weather can impact significantly upon vehicle usage and demand on the charging network501. By combining the meteorological information111with the asset profiles113, and any information concerning vehicle bookings from the vehicle parking booking system510, it is possible to obtain a prediction on an hour-by-hour or minute-by-minute basis of the forthcoming energy needs of the vehicle charging network501and the other assets. By combining this with the cost information109, it is possible to predict costs and programme the Building Energy Management system to prepare an energy draw-down profile to draw power from the grid105when energy costs are at their lowest and cause the electric vehicle charging network501and Assets2,3. . . N to store enough excess energy for use when energy costs are high, so that the vehicle charging network and Assets2,3. . .
N do not have to draw energy from the grid105at times of predicted higher costs or when requirements are predicted to outstrip the capacity of the local supply141. Energy can be taken from the grid at times of low cost and/or excess supply, and not taken when there is a supply shortfall and/or when cost is high. Energy can be taken from energy stores in the building to supply the high-priority charging points in the electric vehicle charging network and to heat or cool the building101as necessary. The predicted demand information from the vehicle charging network501is combined with that from the building energy network102and exported to an energy supply company, which can use the information to approach the site management to vary their predicted demands to meet an anticipated short-fall or excess of power in the grid105. Payment arrangements can be agreed between the power supplier and the site management which would represent a saving to the electricity supply company compared to the price that the company might have to pay on the spot market to cover for the short-fall. Although the arrangements ofFIG.5are described with reference to vehicle parking adjacent to or linked with buildings, such as shopping centres, airport terminals, railway stations and warehouses, the invention can easily be applied to any electric vehicle charging facility, for example charging facilities for electric vehicles such as fork-lift trucks and transport vehicles used within buildings. Examples of these include vehicles used for picking and moving goods within warehouses, electric transport buggies used in airports and other public areas to transport less mobile passengers, and luggage transporters used in airports and railway stations. In such controlled systems, priority supply can easily be instituted.
A vehicle charging system as illustrated inFIG.5can be used in conjunction with an arrangement in which a Distributed Network Operator sets headroom to receive energy, as described previously, so that the system exports energy up to a pre-set headroom determined by the Distributed Network Operator at times when there is a shortfall on the grid. In another development of the invention, predictions of comfort levels within a building are used as part of the predicted energy demand profile. FIG.6shows a Center for the Built Environment (CBE) Thermal Comfort tool (found at www.comfort.cbe.berkeley.edu), illustrating how, for various inputs, a measure of comfort can be derived. In the illustrated chart, the parameters shown (air temperature, radiant temperature, air speed and humidity, with people walking in a building at relatively low speed) place the degree of comfort601in an ideal band shown by the band602; changing the parameters will move the position601into slightly less comfortable bands603, or less comfortable bands604, or to very uncomfortable bands605. For example, if the air temperature is increased or decreased by 1.5° C. and nothing else changes, the comfort band moves into band604; alternatively, if the air speed is increased, comfort decreases. Projected clothing and activity can have a dramatic effect: a seated person needs a far higher air temperature in which to be comfortable than a person walking and clothed in clothing normally worn outside a building.
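The banded comfort classification can be sketched in miniature. This is a crude stand-in for the CBE tool's full comfort model, keyed only to air temperature; the band thresholds are illustrative assumptions chosen so that the text's observed 1.5 °C shift leaves the ideal band:

```python
def comfort_band(air_temp_c, ideal_c=22.0):
    """Classify comfort by deviation from an assumed ideal air
    temperature (thresholds are hypothetical; a real system would use
    the full set of inputs: radiant temperature, air speed, humidity,
    clothing and activity)."""
    dev = abs(air_temp_c - ideal_c)
    if dev < 0.75:
        return "ideal"                       # band 602
    if dev < 1.5:
        return "slightly less comfortable"   # band 603
    if dev < 3.0:
        return "less comfortable"            # band 604
    return "very uncomfortable"              # band 605
```

With these thresholds a 1.5 °C increase or decrease from the ideal moves conditions into the "less comfortable" band, echoing the example in the text.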
FIG.7shows a Venn diagram illustrating conflicting demands on a building control system: circle621represents the target carbon footprint of the building, circle622the target cost of the energy use of the building, and circle623the target comfort zone of the building. If the targets have been properly set, there should normally be an area624where all three targets can be met, and the energy management system103would normally be set to control the building energy needs to be within that target zone624. It will also follow from the analysis ofFIG.6that different parts of a building can have different control parameters needed to ensure comfort, depending on the number of people present in an area, the activity taking place in that area, and the clothing likely to be worn in the area. Further, exposure of some parts of a building to, for example, radiated heat from the sun will impact on the energy needs of that part of the building. In the illustration inFIG.8the blocks631,632and633show different parts of a building. It is assumed that the sun is to the right of and above the building. Block631has the lowest exposure to radiated heat as it has no roof on the sunny side of the building (its roof is in the shadow of block632); block631will therefore naturally be the coolest part of the building. Block632has a side and roof exposed to radiated heat and will become the hottest part of the building; block633has a smaller exposure and will probably be somewhere between the extremes. It is also clear that the number of people, their activity and clothing will also impact on the comfort levels in the three blocks and thus the energy needs of the three blocks. Using the derived information concerning comfort, temperatures and air circulation within the blocks631,632and633can be adjusted to give the best comfort to most people in those blocks.
Comfort measures, too, can give the energy management system103information to enable, say, temperature adjustments in blocks of a building to reduce energy consumption, by reducing the temperature in a block of a building while remaining within areas of the diagram ofFIG.6that are still in the good comfort, or perhaps slightly less comfortable, bands, giving the building manager more flexibility to control energy costs.
11860595 | While the invention is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. DETAILED DESCRIPTION Aspects of the present disclosure relate to systems and methods to track sources of electric power. More particular aspects relate to a method to generate, distribute, and receive electric power and a source identifier describing a type of the electric power. Some energy consumers may care about the source of their electricity; for example, users may wish to avoid electricity originating from sources they may not consider environmentally friendly such as coal or natural gas, instead preferring electricity from traditionally “green” sources such as solar, wind, hydroelectric, geothermal, etc. An example electric power grid may be divided into two levels of supply—a transmission level and a distribution level. The transmission level generally refers to transmission of electric power in the form of electric current (typically at a relatively high voltage) from power plants to substations. A power plant may generate and supply power (energy over time) and allow a substation to draw from the plant's power supply. The distribution level refers to distribution of electric power in the form of electric current (typically at a relatively lower voltage) from substations to end users. Throughout this disclosure, reference is made to “electric power,” one or more “electric power streams,” and “electric signals.” As used herein, “electric power” is essentially synonymous with “electricity,” referring to the manipulation of electric potential for the purpose of transferring energy over time (power). 
As an example, a coal power plant may burn coal in order to generate electric power (also referred to herein as generating electricity). An “electric power stream” is used herein to refer to a specific load, or a transfer of electricity/electric power/electric energy over time. For example, a wind power plant and a coal power plant may both generate electric power, but each generates independent electric power streams. Typically, a power plant generates electric power and transmits some or all of that electric power to one or more substations (via, for example, an electric power stream transmitted along one or more power transmission lines to each substation). For example, a first coal power plant may generate around 500 megawatts (MW) of electric power. That first coal power plant may transmit the 500 MW to accommodate the demands of three independent substations. This may be accomplished via, for example, three electric power streams—the plant may transmit a first electric power stream including 200 MW to a first substation, a second electric power stream including 250 MW to a second substation, and a third electric power stream including 50 MW to a third substation. To be clear, a second example coal power plant may generate around 550 MW of electric power and distribute that 550 MW through a single electric power stream of 550 MW to a sole substation. Note also that substations may receive electric power streams from multiple sources (e.g., power plants, other substations, etc.). Throughout this disclosure, reference is made to “source identifiers” as well as “embedding” the source identifiers into electric current. As used herein, a “source identifier” refers to a signal that contains data labeling a source of electricity. Such a signal may be embodied as an electric current. Since electric power is also distributed in the form of electric current, a source identifier current can be combined with, or “embedded within,” the electric power current.
In some embodiments, the source identifier current may simply be added to the electric power current. A resulting combined electric current may have the electric energy demanded by users in addition to “carrying” the source identification data. Alternating Current (AC) power is typically transmitted as a sine wave, square wave or triangle wave. Minor variations in such a wave (such as to period, amplitude, etc.) may be added. For example, a wiring system may be configured to combine an input current (such as an AC current) with a modulated carrier signal to encode source identification information. The variations may have a negligible impact on the AC current for purposes of power transmission but may be detected by a downstream recipient and decoded to reveal source identification information. To transmit electric power, an entity (such as, for example, a power plant, substation, etc.) may acquire a first electric current and prepare a downstream electric current based on the first electric current. For example, a power plant may “acquire” a first electric current by generating it (e.g., via burning coal, a wind turbine turning, etc.). A substation may “acquire” a first electric current by receiving it (such as from a power plant or a different substation). “Preparing” a downstream electric current, as used herein, refers to, for example, filtering, modulating, or otherwise modifying the first electric current. In some embodiments, “preparing” a downstream electric current includes embedding a source identifier, as described in further detail below. A source identifier may be an “upstream” or “downstream” source identifier. As used herein, an entity (such as, for example, a substation) receives upstream source identifiers and generates/transmits downstream source identifiers. 
Thus, if a first entity generates a source identifier and transmits it to a second entity, that source identifier is both the first entity's downstream source identifier and the second entity's upstream source identifier. Electric power is transmitted through the use of electric current. As will be understood by one of ordinary skill in the art, electric current may be transmitted in the form of alternating current (AC) or direct current (DC), and may be subjected to one or more operations such as filtering, modulation, etc. Each electric power stream may take the form of electric current. A substation receiving multiple electric power streams may distribute incoming electric power by generating an “aggregate” electric power stream (in the form of a generated electric current). Thus, multiple inputs may be combined into a single output. The term “power plant,” as used herein, refers to an electric power generating facility (sometimes referred to in the art as “power stations,” “generating stations,” etc.). A “source” of electric power, as used herein, refers to an originating power plant of that electric power. For example, if Wind Plant X transmits 50 MW to an end user, the “source” of that 50 MW would be “Wind Plant X.” A “type” of electric power refers to a general category of the source of the power; in the preceding example, the “type” of the 50 MW may simply be “wind.” In some embodiments, “type” may be more generalized (e.g., “green” vs. “not green” or “local” vs “not local” rather than “wind” vs. “coal”), may be “unknown” or “mixed,” may be represented as a numerical value (e.g. from 0 to 1, with 1 referring to “renewable” and 0 referring to “nonrenewable”), etc. Throughout this disclosure, reference is made to one or more “source identifiers” generated and/or transmitted in the context of electric power. As used herein, a “source identifier” includes data describing the source and/or type of electric power. 
For example, Wind Plant X may transmit 50 MW of electric power to a substation via one or more electric power streams in the form of one or more electric signals, but may additionally transmit a source identifier “alongside” (contemporaneously with, or even embedded within) the electric signal(s). The source identifier may describe that the electric signal originates from “Wind Plant X,” that the source of the electric signal is a “wind” type of source, or both. A recipient of the electric signal (such as, for example, a substation or end user) may detect the source identifier and take one of a variety of actions (depending upon, for example, the information included and/or encoded within the source identifier, settings of the recipient's electric system(s), etc.). For example, a substation receiving solely “wind-type” power may, when distributing the power to an electric grid, generate a downstream source identifier to pass along the information that the power is from a wind source. In some embodiments, the downstream source identifier may be less specific (for example, may simply indicate that the power is “green,” not specifically that it is of “wind-type”). In some embodiments, the substation may simply relay the received source identifier. Throughout this disclosure, reference is made to “downstream” electric currents. As used herein, an entity (such as a power plant, a substation, etc.) may prepare a downstream electric current and transmit the downstream electric current to a recipient (such as a substation, an end-user, an electric grid, etc.). FIG.1illustrates a source-identifying electric power distribution system100according to several embodiments of the present disclosure. System100is depicted in the context of a simplified, linear approximation of an electric grid. System100includes power plants such as wind plant102, coal plant104and hydroelectric (hydro) plant106(collectively “power plants102-106”). 
Each power plant may produce electric power according to its principles of operation. Some power plants may, for example, operate by rotating a turbine within a magnetic field, which creates an electric current. InFIG.1, for example, wind plant102may include a turbine connected to one or more blades that are rotated by atmospheric wind. Similarly, coal plant104may burn coal to generate steam that pushes a turbine, and hydro plant106may connect a turbine to a wheel that is turned by naturally flowing water. Power plants102-106transmit electric power to one of three example substations, including substation A122, substation B124, and substation C126(collectively, substations122-126). As illustrated, wind plant102transmits wind power stream112to substation A122. Coal plant104transmits a first coal power stream114to substation B124and a second coal power stream115to substation C126. Finally, hydroelectric plant106transmits hydroelectric power stream116to substation C126. The electric power streams generated by power plants102-106are collectively referred to as “streams112-116.” Notably, one or all of streams112-116may be transmitted alongside source identification information. For example, prior to transmitting power to substation C126, hydroelectric plant106may modify the electric signal of its electric power stream in order to embed a source-identification signal therein. Thus, hydroelectric power stream116may include a source identifier. In some embodiments, this source identifier may be a simple alternating current sine wave, where a frequency of the sine wave is agreed to correspond to a particular source. For example, a source identifier at 25 Hz may be agreed upon to correspond to a hydroelectric plant, a source identifier at 30 Hz may be agreed to correspond to a coal plant, etc.; thus, the electric current transmitted in hydroelectric power stream116may include an embedded source identifier current at 25 Hz. 
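As a simplified illustration of the frequency convention above, the following Python sketch adds a small identifier tone to a power-frequency sine wave. Only the 25 Hz (hydroelectric) and 30 Hz (coal) mapping comes from the example; the sampling rate, amplitudes, and function name are assumptions for illustration.

```python
import numpy as np

FS = 10_000          # assumed sampling rate, samples per second
DURATION = 1.0       # seconds of waveform to synthesize

# Hypothetical agreed-upon convention from the example above:
# the identifier tone frequency encodes the source type.
ID_FREQ = {"hydroelectric": 25.0, "coal": 30.0}

def embed_source_identifier(source_type: str,
                            power_freq: float = 60.0,
                            power_amp: float = 1.0,
                            id_amp: float = 0.01) -> np.ndarray:
    """Add a small identifier tone to a power-frequency sine wave.

    The identifier amplitude is kept tiny so its effect on power
    transmission is negligible, as the disclosure suggests.
    """
    t = np.arange(0, DURATION, 1.0 / FS)
    power_wave = power_amp * np.sin(2 * np.pi * power_freq * t)
    id_wave = id_amp * np.sin(2 * np.pi * ID_FREQ[source_type] * t)
    return power_wave + id_wave

combined = embed_source_identifier("hydroelectric")
```

Because the identifier amplitude is a small fraction of the power amplitude, the combined wave carries essentially the same power while the tone remains recoverable in the frequency domain.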
In some embodiments, source identifier information may be transmitted “outside” of the electric signals themselves. For example, in some embodiments, a power plant such as wind plant102may broadcast a radio frequency signal indicating how much power it is transmitting and where it is transmitting the power to. In other words, wind plant102may broadcast a signal that describes wind power stream112as being of a “wind” type. Substation A122may receive this broadcast in addition to stream112and determine that stream112includes wind power. Notably, such a broadcast may also identify that substation A is the sole recipient (directly or indirectly; a broadcast may include an exhaustive list of recipients, which may be anonymized via an encoded identifier, etc.). Thus, even if substation B124detects the broadcast, substation B124is able to identify that the broadcast does not refer to stream114and therefore is protected from misinterpreting stream114as being of a “wind” type. In some embodiments, the source identifier may be broadcast to a predetermined group of recipients, such as a group of recipients in a geographic service region. For example, a power plant may broadcast the source identifier to every substation in a given radius of the plant itself, or to a group of homes in a suburb of a nearby city, etc. When receiving streams112-116, substations122-126are configured to distribute electric power to one of commercial user142, residential user144, and government user146(collectively, “users142-146”). For example, substation A122may transmit electric power stream132to commercial user142, substation B124may transmit electric power stream134to residential user144, and substation C may transmit electric power stream136to government user146. 
Electric power streams132,134and136are collectively referred to herein as “streams132-136.” One or more of substations122-126may generate and/or broadcast a downstream source identifier alongside or embedded within its corresponding electric power stream. For example, substation B124, upon receiving an upstream source identifier indicating that stream114is a coal power stream, may generate a downstream source identifier indicating that power stream134originated from a coal power plant and transmit it to residential user144. In other words, substation B124may relay the source identification information to residential user144, such that residential user144may be informed that electric power stream134is a coal-type electric power stream. In some embodiments, the downstream source identifier transmitted by substation B124may be substantially identical to the upstream source identifier that substation B124received from coal plant104. In some embodiments, substation B124may simply rebroadcast or otherwise “pass along” the upstream source identifier received from coal plant104. In some embodiments, substation B124may generate its own downstream source identifier; for example, substation B124may receive a radio frequency broadcast signal from coal plant104identifying stream114as a coal-type stream, and in response, substation B124may embed a downstream source identifier into electric power stream134to identify electric power stream134as originating from a “non-green” source. Notably, substations may receive electric power streams from multiple different sources. For example, in system100, substation C126receives stream115from coal plant104and stream116from hydroelectric plant106. Substation C126may detect multiple upstream source identifiers (for example, a first upstream source identifier from coal plant104identifying stream115as coal-type power and a second upstream source identifier from hydroelectric plant106identifying stream116as hydroelectric-type power). 
In response, in some embodiments substation C126may generate a downstream source identifier to be transmitted alongside power stream136to government user146based on both upstream source identifiers. In some embodiments, a downstream source identifier may include information identifying multiple different sources of an electric power stream. In some embodiments, a downstream source identifier may simply identify that an associated electric power stream “includes” electric power generated by more than one source. In other words, in some embodiments the downstream source identifier may describe stream136as “coal/hydroelectric”-type power, while in some embodiments the downstream source identifier may describe stream136as “mixed”-type power. In some embodiments, a downstream source identifier may include a proportion of different types of power or power originating from different sources. This may be based on, for example, an amount of power received from each source and corresponding upstream source identifiers. For example, substation C126may receive 80 MW from hydroelectric power plant106(e.g., stream116may transmit 80 MW of power and be accompanied by a first upstream source identifier identifying stream116as hydroelectric-type power) and substation C126may also receive 240 MW from coal plant104(e.g., stream115may transmit 240 MW of power and be accompanied by a second upstream source identifier identifying stream115as coal-type power). Substation C126may be configured to leverage this information to generate a downstream source identifier to identify that electric power stream136is “25% hydroelectric and 75% coal.” In some embodiments, some power plants may not distribute source identifiers, or they may be lost/undetected/filtered out, etc. 
This may occur for a variety of reasons; older power plants may not be fitted with the means for generating and/or transmitting source identifiers, an operator of the power plant may refuse to transmit a source identifier, a source identifier may be transmitted in a format unintelligible to an intended recipient, etc. Thus, in some embodiments a substation may not be able to determine the exact makeup of its sources. For example, substation C126may receive 80 MW of hydroelectric power in the form of stream116from hydroelectric plant106, along with an upstream source identifier indicating stream116as hydroelectric, but may also receive 240 MW of electric power from coal plant104without an accompanying, associated, or otherwise corresponding upstream source identifier. In such a situation, substation C126may only be able to confirm that stream116is of a hydroelectric type. Thus, in generating a downstream source identifier to transmit alongside stream136to user146, substation C126may label stream136as “25% hydroelectric, 75% unknown.” In some embodiments, stream136may simply be labeled “unknown,” “unknown mixture (at least 25% hydroelectric),” etc. While the examples described herein refer to mixtures as percentages, other formats of source identifiers are fully considered herein (such as, for example, including specific magnitudes of power streams; e.g., “80 MW hydroelectric, 240 MW unknown,” etc.). Note that the varying substations, users, plants etc. depicted in system100ofFIG.1are selected for exemplary purposes only; as will be understood by one of ordinary skill in the art, the concepts described above could be applied to different combinations and/or layouts of an electric grid. In some embodiments, a substation may distribute power to one or more other substations. In some embodiments, a power source may transmit electric power directly to an end user. 
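The proportion bookkeeping described above, including streams whose upstream identifiers were never received, can be sketched as follows; the helper name, input format, and label format are illustrative assumptions, not a prescribed encoding.

```python
from typing import Optional

def label_mixture(streams: list[tuple[float, Optional[str]]]) -> str:
    """Build a downstream source label from (megawatts, source_type) pairs.

    A stream whose upstream identifier was missing or unreadable is
    passed with source_type=None and counted as "unknown".
    """
    total = sum(mw for mw, _ in streams)
    shares: dict[str, float] = {}
    for mw, source_type in streams:
        key = source_type if source_type is not None else "unknown"
        shares[key] = shares.get(key, 0.0) + mw
    parts = [f"{100 * mw / total:.0f}% {key}" for key, mw in sorted(shares.items())]
    return ", ".join(parts)

# The 80 MW hydroelectric + 240 MW unidentified scenario from the text:
label_mixture([(80, "hydroelectric"), (240, None)])
# → "25% hydroelectric, 75% unknown"
```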
FIG.2illustrates an electric power generation method200including source identification consistent with several embodiments of the present disclosure. Method200may be performed by a power source, such as a power generating station/power plant (for example, one of power plants102-106ofFIG.1). Method200includes generating electricity at operation202. Operation202includes causing a power generator to generate electric current. The specific nature of operation202may depend upon the configuration/“type” of power source performing method200. For example, operation202may include causing a furnace to burn coal in order to heat/boil water, wherein the steam from the water may push a turbine, causing it to rotate, generating electricity. As another example, operation202may include unlocking, reorienting or otherwise allowing a propeller to be rotated by atmospheric wind, turning a turbine, and thus generating electricity. Method200further includes generating a downstream source identifier at operation204. Operation204may include, for example, selecting a style of source identification to implement (such as, for example, RF broadcast, internet upload, or directly embedding a source identifier into an electric signal). For purposes ofFIG.2, the “direct embedding” style is assumed to have been selected. Operation204may further include generating data to be included in the source identifier. For example, if a system performing method200is a coal plant, operation204may include accessing a standardized database to identify an identification code corresponding to a coal power plant. In some embodiments, operation204may include generating a downstream source identifier that describes a “type” of power to be transmitted (e.g., coal, wind, “green,” “fossil fuel,” etc.). In some embodiments, operation204may include generating a downstream source identifier to specifically identify the system performing method200(e.g., “Acme Coal Plant, 123 E Broadway St.”). 
Combinations of the above are also considered herein. Operation204may hash, encrypt, and/or otherwise obscure the information to be transmitted in the downstream source identifier using one or more known methods (e.g., secure hash algorithm 256 (SHA 256), etc.). If the electricity is to be transmitted to more than a single recipient (as in, via multiple output streams), operation204may further include generating downstream source identification information to be embedded within each output stream. For example, if a solar power plant is to transmit 50 MW along a first path to a first recipient and 75 MW along a second path to a second recipient, operation204may include generating a first downstream source identifier to be transmitted along the first path and/or to the first recipient indicating “50 MW solar” and a second downstream source identifier to be transmitted along the second path and/or to the second recipient indicating “75 MW solar.” In some embodiments, the downstream source identifier may indicate additional information such as, for example, proportion of total power generated (e.g., “75 MW out of 125 MW total, solar”), peak possible output (e.g., “75 MW out of 125 MW current total. 175 MW maximum; 50 MW available, solar”), total energy generated (e.g., “75 MW, solar—750 GJ so far”), intended recipient (e.g., “75 MW to substation D, solar”), etc. As will be understood by one of skill in the art, combinations of the above are also considered. Method200further includes embedding the downstream source identifier into the electric current at operation206. Operation206may include, for example, preparing a downstream electric current by adding a carrier signal encoding the downstream source identifier to the electric current or otherwise combining the downstream source identifier current to/with the generated electric current into a combined electric current. Thus, the electric current also carries data regarding the source of the electricity.
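The hashing that operation 204 contemplates might look like the following sketch, which serializes an identifier payload canonically and digests it with SHA-256. The payload fields and the JSON canonicalization are assumptions for illustration; only the use of SHA-256 comes from the text.

```python
import hashlib
import json

def obscure_identifier(payload: dict) -> str:
    """Hash a source-identifier payload with SHA-256 so the raw
    details are not exposed in transit.

    Sorting the keys gives a canonical byte string, so any recipient
    holding the same payload can recompute and verify the digest.
    """
    canonical = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical payload; the field names are illustrative only.
digest = obscure_identifier(
    {"source": "solar", "megawatts": 75, "recipient": "substation D"})
```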
Method200further includes transmitting the combined downstream electric current at operation208. Operation208may include transmitting some or all of the generated electric power to one or more recipients, such as substations, end users, etc., as described with reference to operation204, above. In some embodiments, operation208may further include emitting one or more radio frequency (RF) broadcasts to transmit data corresponding to the downstream source identifier(s). In some embodiments, operation208may include uploading the downstream source identifiers to the internet, such as to one or more (possibly cloud-based) servers. Combinations of the above are also considered; in some embodiments, operation208may include uploading a downstream source identifier to a server and broadcasting the same downstream source identifier via RF. In some embodiments wherein more than one output stream is transmitted at operation208, operation206may include embedding a first downstream source identifier into a first downstream electric signal and embedding a second downstream source identifier into a second downstream electric signal, while operation208may further include broadcasting the second downstream source identifier via RF. In some embodiments, a power generating station may be in communicative contact with one or more recipients. A source identifier may be transferred via or based on this communicative contact. For example, a first substation may specifically “request” that source identification information be embedded in an electric signal, while a second substation may request that the source identifier be broadcast via RF. Of course, a third substation may request that the source identification information be transmitted across the same communications link used to submit the request in the first place. FIG.3illustrates a source-identifying electric power distribution method300consistent with several embodiments of the present disclosure.
Method300may be performed by an “intermediary” such as an electric power substation, relay, etc. (such as, for example, one of substations122-126of system100). Method300includes receiving electricity from one or more sources at operation302. Operation302may include, for example, receiving a first electric power stream (including a first electric current) from a first power plant or even from another intermediary. Operation302may further include determining at least a magnitude of power received via the electric power stream (e.g., 200 MW, 30 MW, etc.). Method300further includes detecting an upstream source identifier(s) at operation304. Operation304may include, for example, receiving an RF broadcast, accessing an internet database (or receiving a notification from an internet-based server), or detecting, as a result of performing signal analysis, an embedded upstream identifier signal in the electric current itself. The upstream source identifier(s) detected in operation304may include information describing the source of one or more of the electric power stream(s) received at operation302. In some embodiments, a first upstream source identifier may include a magnitude of power transmitted, enabling a system performing method300to verify that the first upstream source identifier applies to a first electric power stream. Method300further includes generating a downstream source identifier at operation306. Operation306may include, for example, generating a downstream source identifier based on the upstream source identifier(s) detected at operation304.
As an illustrative example, a first upstream source identifier may indicate that a first electric power stream (having a magnitude of 30 MW) is “solar,” while a second and a third upstream source identifier may indicate that corresponding second and third electric power streams (having magnitudes of 100 MW and 400 MW, respectively) are “coal.” In such an example, operation306may include generating a downstream source identifier to indicate that an electric power stream to be output by the system performing method300includes 30 MW of solar-sourced power and 500 MW of coal-sourced power. If multiple outputs are to be distributed, multiple downstream source identifiers may be generated at operation306. However, while multiple downstream source identifiers may differ in some ways (“style,” formatting, magnitude, etc.), the distribution of sources may generally be the same for each downstream source identifier, as described in further detail below. In some embodiments, only a single (“first”) upstream source identifier and/or electric power stream may be received. In at least some such embodiments, operation306may simply include copying or otherwise replicating the first upstream source identifier received at operation304. In some embodiments, operation306may simply be skipped (for example, if the sole upstream source identifier is embedded in the sole electric current, then an output electric current may already include the appropriate source identification information). In some embodiments, even if only a single upstream source identifier is detected, operation306may still include generating an independent downstream source identifier.
For example, in some embodiments the upstream source identifier detected at operation304may indicate that a source of the electric power is “Acme Geothermal.” However, rather than including source identification information reflecting this, operation306may include generating a downstream source identifier labeling the electric power simply as “green.” If the first upstream source identifier is embedded in the signal, in some embodiments it may be filtered out (via, for example, one or more bandpass filters, etc.). In some embodiments, operation304may not successfully detect an upstream source identifier associated with every electric power stream. If one or more upstream source identifiers are missing, operation306may include labeling or otherwise identifying at least a “portion” of an output electric current as “unknown.” Note that while a system performing method300may (re)distribute received electric power to multiple recipients via outputting multiple electric power streams (possibly having different magnitudes of power), the “mixture” of source information aggregated at operation306may be constant between them. In other words, if a substation receives 75 MW of coal and 25 MW of solar power (100 MW total), the downstream source identifier generated at operation306may label all output electric power streams as “75% coal, 25% solar.” Thus, even if the system outputs 30 MW to a first recipient, 25 MW to a second recipient, and 45 MW to a third recipient, all three outputs may be labeled as “75% coal, 25% solar” (or similar). Depending upon configuration of the system performing method300, separation of sources may not be possible. In other words, the system may not be capable of controlling the output in order to send “all 25 MW of solar” to the second recipient. However, the system may be capable of monitoring changes to the upstream source identifiers and updating the downstream source identifier accordingly in real time.
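The bandpass/notch filtering mentioned above, used to strip an embedded upstream identifier before a new downstream label is embedded, can be approximated in the frequency domain. The reserved identifier band below is an assumption; a deployed substation would more likely use analog filter hardware than this digital sketch.

```python
import numpy as np

FS = 10_000             # assumed sampling rate, samples per second
ID_BAND = (20.0, 40.0)  # hypothetical band reserved for identifier tones

def strip_identifier_band(signal: np.ndarray) -> np.ndarray:
    """Zero out the identifier band, removing any upstream tone while
    leaving the power-frequency component untouched.

    A crude frequency-domain stand-in for the bandpass/notch filters
    the text mentions.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    spectrum[(freqs >= ID_BAND[0]) & (freqs <= ID_BAND[1])] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))
```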
For example, as circumstances change, a solar plant's output may drop dramatically (such as when the sun goes down). Thus, power consumption may be weighted far more heavily in favor of coal, etc. This may be reflected in upstream source identifiers showing shifting magnitudes of power being transmitted, and operation306may further pass this information along by updating the downstream source identifier. Method300further includes embedding the downstream source identifier into the electric current at operation308. Operation308may include, for example, preparing a downstream electric current by adding a carrier signal encoding the downstream source identifier to the electric current or otherwise combining the downstream source identifier current to/with the received electric current into a combined electric current. Thus, the electric current also carries data regarding the source of the electric power. Method300further includes distributing electricity with the embedded downstream source identifier at operation310. Note that operation310may include distributing multiple electric power streams (to multiple recipients). However, the “source” (or “makeup,” “distribution,” “proportions” etc.) of the multiple electric power streams, as identified by the downstream source identifier generated at operation306, may be constant for each output stream. Operation310may include distributing electric power via an electric grid to one or more recipients. Recipients may include, for example, a substation, an end-user (such as a residential user, commercial user, etc.), etc. As the downstream source identifier is combined into the electric current, operation310includes distributing the downstream source identifier as well. In some embodiments, a system performing method300may further transmit the downstream source identifier(s) via other means. 
For example, the system may emit one or more radio frequency (RF) broadcasts to transmit data corresponding to the downstream source identifier. In some embodiments, the system may upload the downstream source identifiers to the internet, such as to one or more (possibly cloud-based) servers. Combinations of the above are also considered; in some embodiments, a system performing method300may upload the downstream source identifier to a server and broadcast the same downstream source identifier via RF. In some embodiments wherein more than one output stream is transmitted at operation310, operation308may include embedding a first downstream source identifier into a first downstream electric current and embedding a second downstream source identifier into a second downstream electric current, while the system performing method300may further broadcast the second downstream source identifier associated with the second electric current via RF. In some embodiments, a system performing method300may be in communicative contact with one or more recipients. A downstream source identifier may be transferred via or based on this communicative contact. For example, a first recipient (such as a substation) may specifically request that source identification information be embedded in an electric signal, while a second recipient (such as, for example, a commercial user) may request that the downstream source identifier be broadcast via RF. Of course, a third recipient (such as, for example, a residential user) may request that the source identification information be transmitted across the same communications link used to submit the request in the first place. FIG.4illustrates a source-discriminating electric power consumption method400according to several embodiments of the present disclosure. 
Method400may be performed by, for example, a device connected to an end-user's electric system, such as a “smart outlet,” “smart meter,” “smart charger,” a bank of batteries or capacitors, etc. Method400includes receiving electricity at operation402. Operation402may include, for example, receiving a first electric power stream (including a first electric signal) via, for example, an electric power transmission line. In some embodiments, operation402may further include determining at least a magnitude of power received via the first electric power stream (e.g., 5 kW, 50 W, etc.). Method400may further include determining whether any source identifier(s) can be identified at operation404. Operation404may include, for example, performing signal analysis on the received electric signal to search for embedded source identification information, determining whether a radio frequency antenna has received an RF signal including source identification information, polling an online server to determine whether source information is included thereon, etc. If a source identifier is detected (404“Yes”), method400further includes indicating the sources of the received electricity at operation406. Operation406may include, for example, illuminating one or more lights corresponding to a breakdown of the source(s) of the received electricity, causing information regarding the source(s) of the received electricity to be depicted on one or more displays, emitting a sound, etc. As an example, in some embodiments wherein method400is being performed by a “smart outlet” of a residential home, the smart outlet may be equipped with one or more source indicator lights, such as red, yellow, and green lights (light-emitting diodes, or LEDs). A system performing method400may control a state (e.g., “on,” “off,” “25% brightness,” etc.) of one or more of these source indicator lights based on the identified source.
If the received source identifier indicates that the electricity received at operation402is 100% solar, the smart outlet may cause the “green” source indicator light to be illuminated. If the received source identifier indicates that the electricity received at operation402is 100% coal, the smart outlet may cause the “red” source indicator light to be illuminated. If the received source identifier indicates that the electricity received at operation402is a mixture (e.g., if portions of the electricity originate from different sources, such that the electric current received at operation402is a composition from a plurality of sources) or unknown, the smart outlet may cause the “yellow” source indicator light to be illuminated. Other configurations and settings are also possible and fully contemplated herein, as would be understood by those of ordinary skill in the art. As another example, in some embodiments method400may be performed by a smart meter. The smart meter may be equipped with a display, wherein operation406may include causing the display to depict a “breakdown” of the sources of the electricity received at operation402. As an additional example, in some embodiments operation406may include sending a signal or message to one or more devices. For example, a system performing method400may cause a notification to be sent to an end-user's mobile device, send an email, transmit an identity of the source to an Internet of Things (IoT) network such as a smart home, etc. Method400further includes determining whether the source(s) of the electricity received at operation402are “acceptable” at operation410. As used herein, a user or administrator of a system performing method400may be enabled to configure and/or specify which sources of electricity to accept vs. reject. For example, a smart charger performing method400may be configured to only accept coal-sourced electricity to charge an attached mobile device. 
Acceptability may be further based on other factors such as time of day (for example, rejecting non-solar sources during the day), consumption patterns (for example, enforcing a maximum percentage of non-green sourced electricity consumption), etc. If the electricity received at operation402is determined to originate from an “acceptable” source (410“Yes”), method400further includes “accepting” the electricity at operation412. Operation412essentially includes, for example, allowing the received electricity to be used to charge an attached device, power an attached structure, etc. In some embodiments, operation412may include closing one or more electric circuits. Method400may then end at operation416. If the electricity received at operation402is determined to originate from an “unacceptable” source (410“No”), method400further includes rejecting the electricity at operation414. Operation414may include, for example, preventing usage and/or consumption of the received electricity (such as by, for example, utilizing a circuit breaker included in a system performing method400to open or otherwise break the electric circuit). In some embodiments, operation414may include “throttling” or otherwise limiting usage of the electricity. In some embodiments, operation414may include enforcing a maximum amount of energy permitted from the unacceptable source, beyond which the circuit may be throttled and/or broken. Method400may then end at operation416. This may advantageously enable a user to control electricity consumption based on source of the electricity. If no source identifier(s) can be identified (404“No”), method400may indicate that no source identifier(s) could be identified (such as by illuminating a light, pushing a notification to a user's mobile device, emitting a sound, etc.). In some embodiments, lack of a source identifier may result in accepting the electricity at operation412, and then ending at operation416. 
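The accept/reject decision of operations 404-414 can be condensed into a small function. This is a sketch under assumed names; the patent describes the behavior (closing or breaking a circuit) but not any particular code structure. The `default_accept` flag anticipates that the no-identifier case is configurable.

```python
def handle_electricity(source, acceptable_sources, default_accept=True):
    """Sketch of the method400decision flow: return "accept" (close the
    circuit, operation412) or "reject" (open/break the circuit,
    operation414) for a received electric power stream.

    default_accept=True models the default behavior in which electricity
    with no identifiable source is accepted."""
    if source is None:                   # 404 "No": no identifier found
        return "accept" if default_accept else "reject"
    if source in acceptable_sources:     # 410 "Yes": acceptable source
        return "accept"
    return "reject"                      # 410 "No": unacceptable source
```

For example, a smart charger configured to accept only coal-sourced electricity would pass `acceptable_sources={"coal"}`.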
However, this may depend upon configuration of the system performing method400; in some embodiments, only electricity which is identified as coming from an acceptable source may be accepted, in which case operation408may proceed to rejecting the electricity at operation414instead. In essence, method400may operate on a “blacklist” system (as shown inFIG.4), wherein default behavior is to accept the electricity, or on a “whitelist” system (wherein operation408may proceed to operation414), wherein default behavior is to reject electricity unless it is explicitly deemed and identified as acceptable. Referring now toFIG.5, shown is a high-level block diagram of an example computer system500that may be configured to perform various aspects of the present disclosure, including, for example, methods200,300and400. The example computer system500may be used in implementing one or more of the methods or modules, and any related functions or operations, described herein (e.g., using one or more processor circuits or computer processors of the computer), in accordance with embodiments of the present disclosure. In some embodiments, the major components of the computer system500may comprise one or more CPUs502, a memory subsystem508, a terminal interface516, a storage interface518, an I/O (Input/Output) device interface520, and a network interface522, all of which may be communicatively coupled, directly or indirectly, for inter-component communication via a memory bus506, an I/O bus514, and an I/O bus interface unit512. The computer system500may contain one or more general-purpose programmable central processing units (CPUs)502, some or all of which may include one or more cores504A,504B,504C, and504D, herein generically referred to as the CPU502. In some embodiments, the computer system500may contain multiple processors typical of a relatively large system; however, in other embodiments the computer system500may alternatively be a single CPU system. 
Each CPU502may execute instructions stored in the memory subsystem508on a CPU core504and may comprise one or more levels of on-board cache. In some embodiments, the memory subsystem508may comprise a random-access semiconductor memory, storage device, or storage medium (either volatile or non-volatile) for storing data and programs. In some embodiments, the memory subsystem508may represent the entire virtual memory of the computer system500and may also include the virtual memory of other computer systems coupled to the computer system500or connected via a network. The memory subsystem508may be conceptually a single monolithic entity, but, in some embodiments, the memory subsystem508may be a more complex arrangement, such as a hierarchy of caches and other memory devices. For example, memory may exist in multiple levels of caches, and these caches may be further divided by function, so that one cache holds instructions while another holds non-instruction data, which is used by the processor or processors. Memory may be further distributed and associated with different CPUs or sets of CPUs, as is known in any of various so-called non-uniform memory access (NUMA) computer architectures. In some embodiments, the main memory or memory subsystem804may contain elements for control and flow of memory used by the CPU502. This may include a memory controller510. Although the memory bus506is shown inFIG.5as a single bus structure providing a direct communication path among the CPU502, the memory subsystem508, and the I/O bus interface512, the memory bus506may, in some embodiments, comprise multiple different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star or web configurations, multiple hierarchical buses, parallel and redundant paths, or any other appropriate type of configuration. 
Furthermore, while the I/O bus interface512and the I/O bus514are shown as single respective units, the computer system500may, in some embodiments, contain multiple I/O bus interface units512, multiple I/O buses514, or both. Further, while multiple I/O interface units are shown, which separate the I/O bus514from various communications paths running to the various I/O devices, in other embodiments some or all of the I/O devices may be connected directly to one or more system I/O buses. In some embodiments, the computer system500may be a multi-user mainframe computer system, a single-user system, or a server computer or similar device that has little or no direct user interface but receives requests from other computer systems (clients). Further, in some embodiments, the computer system500may be implemented as a desktop computer, portable computer, laptop or notebook computer, tablet computer, pocket computer, telephone, smart phone, mobile device, or any other appropriate type of electronic device. It is noted thatFIG.5is intended to depict the representative major components of an exemplary computer system500. In some embodiments, however, individual components may have greater or lesser complexity than as represented inFIG.5, components other than or in addition to those shown inFIG.5may be present, and the number, type, and configuration of such components may vary. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. 
The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electric signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. 
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. 
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. | 52,765 |
In the appended figures, similar components and/or features may have the same numerical reference label. Further, various components of the same type may be distinguished by following the reference label with a letter or by following the reference label with a dash followed by a second numerical reference label that distinguishes among the similar components and/or features. If only the first numerical reference label is used in the specification, the description is applicable to any one of the similar components and/or features having the same first numerical reference label, irrespective of the suffix. DETAILED DESCRIPTION OF THE INVENTION The present disclosure relates generally to methods and systems related to smart window systems. More particularly, embodiments of the present invention provide smart window devices, smart home systems that include smart window devices, and user devices and applications for control of such devices and systems. Some embodiments of smart windows may include the integration of photovoltaics, power electronics, power storage, sensors, and/or a wireless communication system into an insulated glass unit (IGU) and/or a window frame assembly for installation in a home or building. While many embodiments are described in reference to windows for use in a home, embodiments are widely applicable to any building or structure in which a window or window-like apparatus may be installed, including various applications in residential, commercial, or industrial settings. 
As used herein, the terms “smart window”, “photovoltaic window”, “photovoltaic smart window”, “smart window device”, “smart window system”, and “photovoltaic window system” may be used interchangeably and may generally refer to an apparatus having a visible portion that separates an interior environment from an exterior environment and having one or more of the described components installed therein (e.g., photovoltaics, power electronics, power storage, sensors, wireless communication system, etc.), in accordance with the various embodiments of the present invention. As used herein, the terms “smart home system”, “smart system”, “home automation system”, and “automation system” may be used interchangeably and may generally refer to a wirelessly connected system of a smart window and at least one other device being either another smart window, a smart home hub, or a user device, in accordance with the various embodiments of the present invention. As such, the above terms may refer to a system having at least two smart windows, a system having at least a single smart window and a smart home hub, or a system having at least a single smart window and a user device, among other possibilities. As used herein, the terms “smart home hub”, “hub device”, and “home automation hub” may be used interchangeably and may generally refer to a device (or base station) that exists within the smart home system that is wirelessly connected to at least one smart window and that is capable of receiving data from the smart window and/or transferring data to the smart window, in accordance with various embodiments of the present invention. As used herein, the terms “user device” and “control device” may be used interchangeably and may generally refer to a device that exists within the smart home system that is wirelessly connected to at least one smart window either directly or via the smart home hub, in accordance with the various embodiments of the present invention. 
FIG.1illustrates an example of a smart home system100having various smart windows102, according to some embodiments. Alternatively, smart home system100may be referred to as “home automation system100” and smart windows102may be referred to as “photovoltaic windows102”. In the illustrated example, smart home system100is deployed in a residential house with various rooms, doors, windows, and furniture. Within smart home system100, smart windows102may be communicatively coupled directly to each other or via a smart home hub134, which in the illustrated example is a device situated on the kitchen countertop and receiving electrical power through the home's electrical system. Further illustrated inFIG.1is a user104of smart home system100holding a user device120, which in the illustrated example is a mobile phone having an application program (or “app”) installed thereon providing connectivity to smart home system100. Each of smart windows102may be self-powered using photovoltaics108that are integrated with the glass or visible area of the windows. For example, photovoltaics108may be integrated with the glass of one or both of the upper and lower panes of a vertical sliding window. Photovoltaics108may include organic transparent photovoltaics, luminescent solar concentrators (LSC), or other solar technologies having transparent properties. In some instances, photovoltaics108may include a number of visibly transparent photovoltaic devices that absorb optical energy at wavelengths outside the visible wavelength band of 450 nm to 650 nm, for example, while substantially transmitting visible light inside the visible wavelength band. In such embodiments, photovoltaics108may be configured to absorb ultraviolet (UV) and/or near-infrared (NIR) wavelengths in the layers and elements of the devices while visible light is transmitted therethrough. 
In some embodiments, photovoltaics108may be considered to be visibly transparent, at least partially visibly transparent, substantially visibly transparent, and the like. In some embodiments, photovoltaics108may be considered to be visibly transparent when they are characterized by an average visible transmittance (AVT) of at least 30%. In various embodiments, photovoltaics108may be characterized by an AVT of at least 10%, at least 20%, at least 30%, at least 40%, at least 50%, at least 60%, at least 70%, at least 80%, at least 90%, or approximately 100%. Each of smart windows102may include various electrical loads that are solely powered by photovoltaics108, without receiving any power from the home's electrical system. For example, smart windows102may include various sensors128and window functions122that are powered by the solar energy harvested by photovoltaics108. In the illustrated example, sensors128include a camera facing the exterior side of the window, which may be used, in some embodiments, as part of the home's security system to monitor and detect movement occurring on the exterior of the home. Further in the illustrated example, window functions122include electric blinds that may open (e.g., retract up and/or rotate open) or close (e.g., extend down and/or rotate closed) in response to receiving a control signal to do so. Further in the illustrated example, window functions122may include an electric mechanism for opening or closing the window (e.g., a motorized track). Smart home system100may include various home functions124that are powered separately from smart window102using the home's electrical system or some other power source. In the illustrated example, home functions124include room lighting and exterior lighting that may be turned on or off (or dimmed) in response to receiving a control signal to do so. 
Further in the illustrated example, home functions124include an audio system that may be turned on or off, or may be controlled in a more specific manner (e.g., to play a particular song at a particular volume, etc.). User104may interact with smart home system100and smart windows102in a number of ways. For example, user104may use an application program running on user device120to connect to smart home system100to display information about smart windows102and/or to transmit control data to modify an operation of smart windows102. Alternatively or additionally, user104may use smart home hub134to interact with smart windows102. For example, in the illustrated example, user104provides the audible command “Close the windows if it starts to rain”. This command may be received by a microphone installed on either user device120or smart home hub134. Upon receiving this command, smart home system100may create a conditional mapping between data detected by sensors128and a window action to be performed by window functions122such that smart windows102may be caused to close in response to determining by sensors128that it is raining on the exterior side of smart windows102(e.g., using a camera, moisture sensor, etc.). FIG.2illustrates a block diagram of an example smart home system200, according to some embodiments. Smart home system200may include one or more (e.g., N) photovoltaic windows202, which may each be separate, self-contained units capable of being self-powered. In the illustrated example, photovoltaic window202-1includes photovoltaics208, power electronics210, a power storage212(e.g., a battery), and one or more electrical loads236(including a wireless communication system216, sensors228, window functions222, and a power outlet226). Photovoltaic windows202-2to202-N may include similar components. Smart home system200may further include a smart home hub234, home functions224, and a user device220. 
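The conditional mapping described above, linking sensor data to a window action, can be sketched as a small rule engine. All names here are hypothetical illustrations; the patent describes the mapping's behavior ("close the windows if it starts to rain"), not a data structure for it.

```python
class Rule:
    """A conditional mapping from sensor readings to a window action."""
    def __init__(self, predicate, action):
        self.predicate = predicate   # function: sensor readings -> bool
        self.action = action         # window function to invoke

def evaluate_rules(rules, sensor_readings):
    """Invoke the action of every rule whose predicate matches the
    latest sensor readings; return the list of rules that fired."""
    fired = []
    for rule in rules:
        if rule.predicate(sensor_readings):
            rule.action()
            fired.append(rule)
    return fired

# "Close the windows if it starts to rain": a moisture-sensor predicate
# mapped to a (stubbed) close-window action. The 0.8 threshold is an
# arbitrary illustrative value.
actions_taken = []
rain_rule = Rule(
    predicate=lambda readings: readings.get("moisture", 0.0) > 0.8,
    action=lambda: actions_taken.append("window_closed"),
)
evaluate_rules([rain_rule], {"moisture": 0.9})
```

In a deployed system, the hub or window processor would re-run `evaluate_rules` whenever sensors128report new data.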
The components of smart home system200may be interconnected via various power and/or data signals as shown inFIG.2, with solid lines denoting power signals and dashed lines denoting data-carrying signals, which may include data signals292, control signals294, and the like. In various embodiments, components of smart home system200may be more or less integrated than that shown inFIG.2. For example, in some implementations, power electronics210, power storage212, and wireless communication system216may be packaged together on a single or multiple circuit boards on what is referred to herein as an electronics package240. As another example, in some implementations, sensors228may include two separate modules, including an exterior sensor module positioned at and/or oriented toward an exterior side of the window and an interior sensor module positioned at and/or oriented toward an interior side of the window. In some embodiments, photovoltaics208may generate and send electrical power to power electronics210, which can control and regulate the manner, including the voltage and/or current, in which the electrical power is fed into power storage212. Typically, power storage212(which may alternatively be referred to as “energy storage212”) may include one or more batteries and electronics for power conditioning. In some instances, power electronics210is able to maximize the power delivered from photovoltaics208to power storage212by matching the voltage of photovoltaics208to that of power storage212. In some embodiments, power electronics210conditions the variable output of photovoltaics208(variable voltage and current, depending on the lighting) and controls the output to a desired voltage/current acceptable for charging the batteries or powering the various sensors. 
This may be accomplished using an appropriate combination of buck converters, boost converters, and/or buck/boost converters, along with various active and/or passive circuit components, such as resistors, capacitors, inductors, transistors, transformers, and diodes, among other possibilities. In some instances, power electronics210may employ maximum power point tracking (MPPT) which may include adjusting the load to operate close to the maximum power point on the current-voltage curve of photovoltaics208, which changes based on lighting condition. Other functions of power electronics210include, but are not limited to: managing battery charging/battery draw, conditioning input/output from batteries according to battery specs and safety requirements, and implementing a microcontroller integrated circuit (IC) to run algorithms, such as the MPPT. The power held by power storage212can be used to power each of electrical loads236. AlthoughFIG.2illustrates smart home system200as driving all of the powered elements using power from power storage212, this is not required by the present invention and a combination of power provided directly from photovoltaics208, directly from power electronics210, and/or power provided directly from power storage212can be utilized to power the various system components. In typical operation, power generated by photovoltaics208will be characterized by low current level over an extended period of time while power drawn by devices will be characterized by high current levels for short periods of time. Thus, in some embodiments, power storage212may be continually topped off by power delivered through power electronics210and may be drained by one or more of electrical loads236to meet the power requirements of the various devices. 
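MPPT as described above can be illustrated with the common perturb-and-observe algorithm: repeatedly nudge the operating voltage and reverse direction whenever output power falls. This is a minimal sketch of one standard MPPT technique; the patent does not specify which algorithm power electronics210runs, and all function names and the toy power curve are assumptions.

```python
def mppt_perturb_observe(measure_power, set_voltage, v_start,
                         step=0.1, iters=200):
    """Perturb-and-observe MPPT sketch.

    measure_power(v): returns PV output power at operating voltage v.
    set_voltage(v):   commands the converter to operate at voltage v.
    Returns the final operating voltage, which oscillates near the
    maximum power point."""
    v, direction = v_start, +1
    p_prev = measure_power(v)
    for _ in range(iters):
        v += direction * step       # perturb the operating point
        set_voltage(v)
        p = measure_power(v)
        if p < p_prev:              # power fell: reverse direction
            direction = -direction
        p_prev = p
    return v

# Toy current-voltage-derived power curve with its maximum near 17 V:
power_curve = lambda v: max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)
v_mpp = mppt_perturb_observe(power_curve, lambda v: None, v_start=12.0)
```

Because the lighting condition moves the maximum power point, a real controller runs this loop continuously rather than for a fixed iteration count.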
Each of photovoltaic windows202may include a wireless communication system216that serves as the wireless interface for communicating between the electronics at photovoltaic windows202and external components, such as smart home hub234, user device220, and home functions224. WhileFIG.2shows wireless communication system216as communicating with user device220indirectly via smart home hub234, in some embodiments direct wireless communication between wireless communication system216and user device220may be enabled. Each of wireless communication system216, smart home hub234, user device220, and home functions224may comply with one or more wireless standards, including IEEE 802.11 standards, Bluetooth standards, Zigbee standards, 3G, 4G/LTE, WiFi, and the like. Each of photovoltaic windows202may also include one or more sensors228for capturing various types of sensor data. Without limitation, sensors228may include an interior- and/or exterior-facing camera, an interior- and/or exterior-facing light sensor, an interior- and/or exterior-facing motion sensor, an interior and/or exterior temperature sensor, an interior and/or exterior humidity sensor, an interior and/or exterior accelerometer, an interior and/or exterior contact sensor, an interior and/or exterior audio sensor, an interior and/or exterior moisture sensor, an interior and/or exterior air quality sensor, an interior and/or exterior smoke sensor, a leak sensor for detecting argon or krypton gas leaking from within the IGU, a parts per million (PPM) gas sensor, and the like. Each of photovoltaic windows202may also include one or more window functions222, which may be devices configured to consume the electrical power generated at the smart window to perform a particular action at the window (or “window action”). 
Without limitation, window functions222may include a window opening/closing mechanism, a window locking/unlocking mechanism, electric blinds, an electrochromic device integrated with the window glass, a polymer-dispersed liquid crystals (PDLC) film, a speaker, a microphone, lighting such as LED strip or edge lighting, a transparent organic light-emitting diode (OLED) display integrated with the window glass, and the like. As an example operation of photovoltaic window202utilizing window functions222in conjunction with sensors228, a light sensor integrated into and powered by photovoltaic window202could detect that light over a brightness threshold is passing through the smart window. In order to decrease the light passing through photovoltaic window202, which could potentially heat up the room in which photovoltaic window202is installed, and the home as a result in residential applications, the window shades could be lowered to reduce the light passing through the smart window system and reduce the cooling costs of the home. Each of photovoltaic windows202may also include one or more power outlets226for transferring electrical power. For example, power outlet226may serve as a port for providing power (e.g., charging) to various devices from power storage212or power electronics210. In some embodiments, power outlet226may include a USB receptacle that provides USB charging functionality to various devices. In some embodiments, power outlet226can be used to charge the batteries of power storage212by connecting an external power source to power outlet226, thereby causing photovoltaic window202to receive electrical power from an external source. In one example, on cloudy days with little sunlight, an external power source (e.g., a portable charger such as a USB power bank) can be connected to power outlet226to charge the batteries contained in power storage212. 
In addition to power used to power local sensors228and other electrical loads236, a dedicated power outlet226in one of many different form factors can be provided to power window functions222or other components installed onto photovoltaic window202. As an example, a USB outlet can be provided that can provide power to operate window shades that are mounted on photovoltaic window202. Data from sensors228as well as photovoltaics208, power electronics210, and power storage212can be used to implement control of the various features and functions described herein. For example, such data may be provided to a central processing unit (CPU) (not shown) of photovoltaic window202to be processed to provide control, for example, in conjunction with wireless communication system216, for the devices implementing the various features and functions described herein. In some embodiments, such data may be sent to smart home hub234and/or user device220(via wireless communication system216) in a data signal292. These devices may receive data signal292and may generate control signals294to implement the various features and functions described herein. The above-referenced data that may be included in data signal292may include data captured by sensors228, which may be referred to as “sensor data”, as well as data provided by photovoltaics208, power electronics210, and/or power storage212, which may be referred to as “power data”, which can include data on the state of photovoltaics208, including current levels, voltage levels, and the like. The power data can then be used to track energy output as a function of time that can be used by various system components. The power data and the sensor data may be analyzed by the onboard processor of photovoltaic window202, by smart home hub234, and/or by user device220. In some instances, smart home hub234may receive, through data signal292, the power data from the batteries of power storage212themselves. 
Such data may indicate a state of charge of the batteries, a charging status of the batteries, or a current output of photovoltaics 208, among other possibilities. The data received by smart home hub 234 and/or user device 220 can be used to control one or more home functions 224, which may be devices configured to perform particular actions within the home (or "home actions"). Without limitation, home functions 224 may include room lighting, exterior lighting, home heating system, home cooling system, home appliances, door locks, audio systems, and the like. Corresponding home actions may include, for example, turning on, off, or dimming the room or exterior lighting, turning on or off the home heating or cooling system, locking or unlocking a door, turning on, off, or controlling the audio system in a more specific manner, and the like. As an example operation of smart home system 200 utilizing home functions 224 in conjunction with sensors 228, on warm summer days, as the light intensity measured at the smart window system increases or a temperature measured at the smart window system increases (as measured by photovoltaics 208 and/or sensors 228 and communicated to smart home hub 234 via wireless communication system 216), the home cooling system could be turned on in anticipation of increased cooling demand before the temperature in the home begins to increase. Alternatively, if clouds begin to decrease the light intensity or temperature measured at the smart window system, the home cooling system can be turned off in response to this decrease in measured light intensity or temperature, providing additional inputs to the home cooling system that will enable finer control of the home cooling system and resulting reductions in energy consumption. Similar functionality can be provided in relation to a home heating system.
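The anticipatory cooling control described above can be sketched as a two-threshold (hysteresis) rule: switch the cooling system on when measured light intensity rises past an upper threshold, and off when it falls below a lower one. The thresholds and names here are illustrative assumptions.

```python
# Minimal sketch of anticipatory cooling control with hysteresis, driven
# by the light intensity measured at the smart window. Threshold values
# are assumed for illustration.

ON_LUX = 50_000.0   # bright sun: anticipate increased cooling demand
OFF_LUX = 20_000.0  # clouds: back off to reduce energy consumption

def cooling_command(light_lux: float, cooling_on: bool) -> bool:
    """Return the next on/off state for the home cooling system."""
    if not cooling_on and light_lux >= ON_LUX:
        return True
    if cooling_on and light_lux <= OFF_LUX:
        return False
    return cooling_on  # inside the hysteresis band: keep current state
```

Using two thresholds rather than one prevents the cooling system from rapidly toggling when the measured light intensity hovers near a single set point.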
Embodiments of the present invention are particularly applicable to residential window applications, although commercial applications are also included within the scope of the present invention. As described herein, power that is generated by the smart window can be utilized by the smart window and by components, for example, window shades, that are mounted on or in proximity of the smart window. Thus, in addition to generating power that can be fed into the power grid and utilized to offset energy consumption in the building that includes smart home system 200, the smart window itself can utilize generated power to power features that are not available in conventional windows. The features that can be provided by embodiments of the present invention span a wide variety of functions, including electrochromic control to modify the tint state of the IGU, surveillance functions enabled by cameras, temperature control functions enabled by temperature sensors, window shade control functions enabled by light sensors, and the like. Thus, smart home systems described herein enable internet-of-things (IoT) functionality without the need to provide power to one or more of the IoT devices. As described herein, smart window systems are provided that include a number of self-powered features, both interior and exterior, including, but not limited to, camera function, motion sensor function, light sensor function, temperature sensor function, humidity sensor function, contact function, for example, intrusion detection using an accelerometer that can alert a user to people or items making contact with the smart window system, communication and indication functions, for example, LED indicators to provide information to a user on the status of various system elements, and the like.
Embodiments of the present invention provide functions and features that are not found in conventional windows, because conventional windows do not include a power source that can be used to power devices providing these functions and features. As a result, embodiments of the present invention provide features and functions that can be integrated into a smart window system while also being powered by power generated by the smart window system. In contrast with a conventional window that would require an external power source to provide these features, embodiments of the present invention, by using power generated by photovoltaics 208 disposed in the IGU, do not need any external wiring, which can result in lack of mechanical integrity, breaking of atmospheric seals, and the like if an attempt were made to integrate such external wiring into an IGU. The integration of a power source and sensors 228 inside the IGU enables functionality not available using conventional systems. As an example, in addition to intrusion detection, an accelerometer can be used to detect interaction between a user and the window, including the lites and the frame. Tapping on the lite in accordance with a predetermined pattern could be used to generate sensor data that sends a notification, causes a window action to be performed by a window function 222 (e.g., open a window), or causes a home action to be performed by a home function 224 (e.g., open a lock), and the like. FIG. 3A illustrates front (left) and angled (right) views of an exterior side of a photovoltaic window 302 having a frame 354 and an IGU 352, according to some embodiments. In the illustrated examples, photovoltaic window 302 is a vertical sliding window. Photovoltaic window 302 includes an exterior sensor module 360 that is shown coupled to IGU 352. FIG. 3B illustrates front (left) and angled (right) views of an interior side of photovoltaic window 302 having frame 354 and IGU 352, according to some embodiments.
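The tap-pattern interaction described above can be sketched as a small detector over accelerometer-derived tap timestamps. The particular pattern (three taps within two seconds) and the function name are assumptions for illustration; the disclosure only specifies "a predetermined pattern".

```python
# Hypothetical sketch of tap-pattern detection: the accelerometer
# timestamps taps on the lite, and a predetermined pattern (assumed here
# to be three taps within a two-second window) triggers a window action
# or home action.

def detect_triple_tap(tap_times: list, window_s: float = 2.0) -> bool:
    """Return True if any three consecutive taps fall within window_s seconds."""
    taps = sorted(tap_times)
    return any(taps[i + 2] - taps[i] <= window_s
               for i in range(len(taps) - 2))
```

On detection, the window's processor could send a notification or emit a control signal, e.g. to unlock a door via a home function 224.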
Photovoltaic window 302 includes an interior sensor module 362 that is shown coupled to IGU 352, as well as a power outlet 326, which is a USB-C outlet in the illustrated examples. FIG. 3C illustrates zoomed-in views of portions of photovoltaic window 302, according to some embodiments. In FIG. 3C, a zoomed-in version of elements shown in FIGS. 3A and 3B is provided that shows the individual sensors of exterior sensor module 360 and interior sensor module 362, which collectively form sensors 328 of photovoltaic window 302. Each of these sensor modules includes an accelerometer 364, a temperature sensor 366, a humidity sensor 368, a camera 370, and a light sensor 372. These sensors are merely exemplary and other sensors and combinations of sensors can be utilized within the scope of the present invention. As described herein, embodiments of the present invention utilize power generated by photovoltaics 208, either directly or via power storage 212, to power various sensors and other electrical loads. FIG. 4 illustrates a photovoltaic window 402 having an IGU 452, according to some embodiments. FIG. 4 further illustrates (via insets) certain components of photovoltaic window 402 shown separated from IGU 452. In the illustrated example, photovoltaic window 402 includes an electronics package 440 that includes a power outlet 426, power storage 412 (comprising a bank of batteries), power electronics 410, exterior sensor module 460, interior sensor module 462, wireless communication system 416, and photovoltaic input 474. Photovoltaic input 474 receives power generated by the photovoltaic coatings present on the lites. Additional description related to organic photovoltaic coatings is provided in commonly assigned U.S. Patent Application No. 2019/0036480, the disclosure of which is hereby incorporated by reference in its entirety for all purposes.
In the various embodiments described herein, the term "electronics package" may refer to the group of electrical components contained in the electronics package (e.g., the circuit board and components attached thereto) as well as the casing, packaging, coverings, and/or box in which the group of electrical components are contained. In some implementations, the electronics package can include a box with a cover that provides insulation and waterproofing for the electrical components. The cover may further provide access to the electrical components for maintenance and/or replacement of the electrical components. Electronics package 440 can be implemented on a printed circuit board (PCB) that is mounted in the photovoltaic window, as described more fully in relation to FIGS. 8A-9F. In contrast with conventional windows, embodiments of the present invention integrate power and electronic devices, for example, electronics package 440, inside IGU 452 to provide a self-contained photovoltaic window system that provides both electronic and optical functionality. By integrating the electronics into the IGU, embodiments of the present invention can be utilized with a wide variety of framing systems, typically requiring no modification of the framing system. As a result, the IGU with electronics can be used as a drop-in replacement for conventional IGUs in standard window frames. Therefore, embodiments of the present invention provide augmented IGUs that can include batteries, circuits, sensors, antennas, and the like that can be mounted in standard window frames to form the photovoltaic window system. In the embodiment illustrated in FIG. 4, electronics package 440 is mounted in the upper portion of IGU 452 and is sealed via a cover 476 to provide a controlled environment.
In order to provide for integration with IGU 452, the form factor of electronics package 440 may correspond to the shape of the upper portion of IGU 452, in this case, a width on the order of 10 mm and a length on the order of 50 cm. In various implementations, electronics package 440 may have a wide variety of sizes and form factors. For example, electronics package 440 (or the casing, packaging, or box in which the electronics package is contained) may have a width similar to the width of the IGU's spacer, i.e., between 0.25 and 0.5 inches. In some implementations, electronics package 440 may have a width similar to the width of the entire IGU, i.e., between 0.5 and 1.0 inches. The length of electronics package 440 may be based on the length of the IGU, which may vary from window to window (e.g., 2 feet, 3 feet, 4 feet, etc.). In some embodiments, each of exterior sensor module 460 and interior sensor module 462 includes a connector 478 (for transferring power and data to/from other components of electronics package 440), a light 480, a camera 470, a humidity sensor 468, a temperature sensor 466, an accelerometer 464, and LEDs 482. Although exterior sensor module 460 and interior sensor module 462 share common elements in this embodiment, this is not required by the present invention and exterior sensor module 460 and interior sensor module 462 can include different elements, sensors, and the like. Accordingly, a camera may be included on the exterior sensor module, but not on the interior sensor module, whereas a temperature sensor may be included in both the exterior sensor module and the interior sensor module. In some embodiments, exterior sensor module 460 and interior sensor module 462 may be bonded to an exterior side and an interior side of the glass of IGU 452, respectively. Ribbon cables or other suitable connectors can be utilized to connect exterior sensor module 460 and interior sensor module 462 to other elements of electronics package 440.
FIG. 5A illustrates a perspective view of photovoltaic window 502 having an IGU 552 integrated with an electronics package 540, according to some embodiments. Photovoltaic window 502 further includes an interior sensor module 562 positioned on the interior side of IGU 552 and an exterior sensor module (barely visible through glass of IGU 552 in FIG. 5A) positioned on the exterior side of IGU 552. In the illustrated example, interior sensor module 562 includes a camera 570 for monitoring the interior of a home or building. FIG. 5B illustrates a zoomed-in version of FIG. 5A showing photovoltaic window 502, IGU 552, and electronics package 540, according to some embodiments. As illustrated in FIG. 5B, interior sensor module 562 mounts on the interior side of the interior glass of IGU 552 and is electrically connected to the electronics package via an electronic cable that extends upward toward electronics package 540. Electronics package 540 is shaped to seamlessly integrate with the top portion of IGU 552, allowing embodiments of the present invention to be used with existing IGUs. In the illustrated example, photovoltaic window 502 further includes a power outlet 526 positioned on the interior side of electronics package 540 to provide transfer of the power generated at photovoltaic window 502 to one or more window functions, home functions, or to any remote electrical device (e.g., a user's smart phone). FIG. 5C illustrates an exploded, perspective view of photovoltaic window 502 including IGU 552 and electronics package 540, according to some embodiments. Components of IGU 552 are more clearly shown in FIG. 5C, and include an interior glass 584, an exterior glass 586, and a seal 544 positioned between interior glass 584 and exterior glass 586. When photovoltaic window 502 is assembled, an exterior sensor module 560 is mounted on an exterior side of exterior glass 586 and interior sensor module 562 is mounted to an interior side of interior glass 584.
Also shown in FIG. 5C are electrical leads 588 connecting the photovoltaics of photovoltaic window 502 to electronics package 540. Electrical leads 588 may be positioned such that they can be connected to a photovoltaic input positioned on the bottom side of electronics package 540 when electronics package 540 is mounted in the top side of IGU 552. In this example, electronics package 540 includes a portion disposed between the lites and a portion above the lites to provide access to the USB power port of power outlet 526. FIG. 6A illustrates a perspective view of an electronics package 640 of a photovoltaic window, according to some embodiments. Electronics package 640 illustrated in FIG. 6A may be similar to electronics packages described previously in reference to FIGS. 4-5C. For example, electronics package 640 includes an interior sensor module 662, an exterior sensor module 660, and a power outlet 626. FIG. 6B illustrates an exploded, perspective view of electronics package 640, according to some embodiments. Specifically, a cover 676 is removed from the top side of electronics package 640 as well as covers for interior sensor module 662 and exterior sensor module 660. Visible in FIG. 6B are power storage 612 (comprising a bank of batteries) and wireless communication system 616, among other components. FIG. 6C illustrates a perspective view of electronics package 640 with all covers and casings removed, according to some embodiments. Further visible in FIG. 6C are power electronics 610, as well as various data processing and storage elements placed throughout electronics package 640. FIG. 6D illustrates an exploded, perspective view of electronics package 640 with all covers and casings removed, according to some embodiments. FIG. 6D shows how certain components of electronics package 640, such as interior sensor module 662, exterior sensor module 660, and wireless communication system 616, can be fabricated on a separate circuit board from other components, such as power electronics 610 and power storage 612.
Such separation can simplify the manufacturing process by allowing components that are more user-customizable (e.g., sensor modules) to be fabricated separately from components that are fairly standard across products (e.g., power electronics and storage). In some embodiments, the lower board in FIG. 6D may be referred to as the "power electronics board" and the upper board may be referred to as the "smart board". FIG. 7A illustrates a side view of an IGU 752 and a window frame assembly 750 in an unassembled state, according to some embodiments. IGU 752 includes a seal 744, a spacer 746, a photovoltaic 708, an interior glass 784, and an exterior glass 786. In the illustrated example, photovoltaic 708 is coupled to exterior glass 786, and each of seal 744 and spacer 746 are positioned between and coupled to exterior glass 786 and interior glass 784. Window frame assembly 750 includes a window frame 754 and a glazing stop 756 (alternatively referred to as a "frame stop"). FIG. 7B illustrates a side view of IGU 752 and window frame assembly 750 in an assembled state, according to some embodiments. After IGU 752 is inserted into the window frame 754, glazing stop 756 is used to secure IGU 752 in place. FIG. 8A illustrates a side view of a photovoltaic window 802A that includes (1) an IGU 852 integrated with components of a photovoltaic window system and (2) a window frame assembly 850 in an assembled state, according to some embodiments. IGU 852 includes a seal 844, a spacer 846, an interior glass 884, and an exterior glass 886. Window frame assembly 850 includes a window frame 854 and a glazing stop 856. The photovoltaic window system includes an electronics package 840 and a photovoltaic 808. In the illustrated example, electronics package 840 is disposed above IGU 852 such that electronics package 840 is disposed above each of seal 844, interior glass 884, and exterior glass 886. Electronics package 840 can be considered to be an appendage of IGU 852 and can have a similar width as IGU 852.
One advantage of photovoltaic window 802A is that electronics package 840 can be easily accessed by removing glazing stop 856. In some instances (such as in the illustrated example), spacer 846 and a portion of seal 844 may be partially visible in the vision area (the visible portion) of photovoltaic window 802A. In other examples, window frame 854 may be configured to extend further downward vertically so as to cover spacer 846 and/or seal 844. FIG. 8B illustrates a side view of a photovoltaic window 802B that includes (1) IGU 852 integrated with components of a photovoltaic window system and (2) window frame assembly 850 in an assembled state, according to some embodiments. IGU 852 includes seal 844, spacer 846, interior glass 884, and exterior glass 886. Window frame assembly 850 includes window frame 854 and glazing stop 856. The photovoltaic window system includes electronics package 840 and photovoltaic 808. In the illustrated example, electronics package 840 is disposed within IGU 852 at a position between interior glass 884 and exterior glass 886 and adjacent to seal 844, which can be reduced in thickness in comparison to conventional seals. During manufacturing, seal 844 can be thinner on one edge in order to receive electronics package 840 while being thicker (e.g., a standard thickness) on the other edges. Thus, this design is easily integrated into standard manufacturing processes. Electronics package 840 can be considered to be a top load of IGU 852 and can have a similar width as seal 844 and/or spacer 846. In some instances (such as in the illustrated example), a portion of spacer 846 may be partially visible in the vision area (the visible portion) of photovoltaic window 802B, while in other embodiments window frame 854 may extend further downward vertically. FIG. 8C illustrates a side view of a photovoltaic window 802C that includes (1) IGU 852 integrated with components of a photovoltaic window system and (2) window frame assembly 850 in an assembled state, according to some embodiments.
IGU 852 includes seal 844, spacer 846, interior glass 884, and exterior glass 886. Window frame assembly 850 includes window frame 854 and glazing stop 856. The photovoltaic window system includes electronics package 840 and photovoltaic 808. In the illustrated example, electronics package 840 is disposed partially above IGU 852 and partially within IGU 852. The portion of electronics package 840 that is disposed above IGU 852 is disposed above each of seal 844, interior glass 884, and exterior glass 886 while the portion of electronics package 840 that is disposed within IGU 852 is disposed at a position between interior glass 884 and exterior glass 886 and adjacent to seal 844. In some instances (such as in the illustrated example), a portion of spacer 846 may be partially visible in the vision area (the visible portion) of photovoltaic window 802C, while in other embodiments window frame 854 may extend further downward vertically. FIG. 8D illustrates a side view of a photovoltaic window 802D that includes (1) IGU 852 integrated with components of a photovoltaic window system and (2) window frame assembly 850 in an assembled state, according to some embodiments. IGU 852 includes seal 844, spacer 846, interior glass 884, and exterior glass 886. Window frame assembly 850 includes window frame 854 and glazing stop 856. The photovoltaic window system includes electronics package 840 and photovoltaic 808. In the illustrated example, electronics package 840 is disposed asymmetrically above a portion of IGU 852 at a position that is above exterior glass 886 and seal 844 but not interior glass 884. Thus, the area of the exterior glass 886 is larger than the area of interior glass 884, resulting in a cavity on the interior side of the IGU. This design provides for a larger seal on the exterior side of the frame. Removal of glazing stop 856 enables access to electronics package 840 without removing the IGU from the frame.
This can be useful if batteries need to be replaced, electronics need to be serviced, or the like during the life of the photovoltaic window system. In some instances (such as in the illustrated example), spacer 846 and a portion of seal 844 may be partially visible in the vision area (the visible portion) of photovoltaic window 802D, while in other embodiments window frame 854 may extend further downward vertically. FIG. 8E illustrates a side view of a photovoltaic window 802E that includes (1) IGU 852 integrated with components of a photovoltaic window system and (2) window frame assembly 850 in an assembled state, according to some embodiments. IGU 852 includes seal 844, spacer 846, interior glass 884, and exterior glass 886. Window frame assembly 850 includes window frame 854 and glazing stop 856. The photovoltaic window system includes electronics package 840 and photovoltaic 808. In the illustrated example, electronics package 840 is disposed within IGU 852 at a position between interior glass 884 and exterior glass 886 and between seal 844 and spacer 846. Electronics package 840 can have a similar width as seal 844 and/or spacer 846. In some instances (such as in the illustrated example), spacer 846 and a portion of electronics package 840 may be partially visible in the vision area (the visible portion) of photovoltaic window 802E, while in other embodiments window frame 854 may extend further downward vertically. FIG. 8F illustrates a side view of a photovoltaic window 802F that includes (1) IGU 852 integrated with components of a photovoltaic window system and (2) window frame assembly 850 in an assembled state, according to some embodiments. IGU 852 includes seal 844, spacer 846, interior glass 884, and exterior glass 886. Window frame assembly 850 includes window frame 854 and glazing stop 856. The photovoltaic window system includes electronics package 840 and photovoltaic 808.
In the illustrated example, electronics package 840 is disposed within IGU 852 at a position that is within and/or internal to spacer 846 (e.g., embedded in spacer 846). In some instances (such as in the illustrated example), none of seal 844, spacer 846, and electronics package 840 may be partially visible in the vision area (the visible portion) of photovoltaic window 802F, while in other embodiments window frame 854 may extend further upward vertically. FIG. 8G illustrates a side view of a photovoltaic window 802G that includes (1) IGU 852 and (2) window frame assembly 850 integrated with components of a photovoltaic window system and in an assembled state, according to some embodiments. IGU 852 includes seal 844, spacer 846, interior glass 884, and exterior glass 886. Window frame assembly 850 includes window frame 854 and glazing stop 856. The photovoltaic window system includes electronics package 840 and photovoltaic 808. In the illustrated example, electronics package 840 is disposed within window frame 854 at a position that is within and/or internal to window frame 854 (e.g., embedded in window frame 854). In some instances, none of seal 844, spacer 846, and electronics package 840 may be visible in the vision area (the visible portion) of photovoltaic window 802G, while in other embodiments window frame 854 may extend further upward vertically. In some embodiments, window frame 854 may include a removable portion (e.g., on the interior side of window frame 854) that provides an access point for electronics package 840. FIG. 8H illustrates a side view of a photovoltaic window 802H that includes (1) IGU 852 and (2) window frame assembly 850 integrated with components of a photovoltaic window system and in an assembled state, according to some embodiments. IGU 852 includes seal 844, spacer 846, interior glass 884, and exterior glass 886. Window frame assembly 850 includes window frame 854 and glazing stop 856. The photovoltaic window system includes electronics package 840 and photovoltaic 808.
In the illustrated example, electronics package 840 is disposed within glazing stop 856 at a position that is within and/or internal to glazing stop 856 (e.g., embedded in glazing stop 856). In some instances, none of seal 844, spacer 846, and electronics package 840 may be visible in the vision area (the visible portion) of photovoltaic window 802H, while in other embodiments window frame 854 may extend further downward vertically. In some embodiments, glazing stop 856 may include a removable portion (e.g., on the interior side of glazing stop 856) that provides an access point for electronics package 840. Many variations and modifications to the examples described in FIGS. 8A-8H exist and are considered to be within the scope of the present disclosure. For example, the electronics package cross section can be positioned linearly along one of the window or spacer edges. As another example, the electronics package cross section can be positioned in a corner of the IGU occupying two edges (either as triangle or rectangle). As yet another example, for embodiments in which the electronics package occupies the spacer volume, such as that shown in FIG. 8F, the electronics package can take the form of either being within the spacer itself or within a connector that joins the spacer together. While not explicitly shown in FIGS. 8A-8H, it is to be understood that electrical wires may traverse between different components of photovoltaic windows 802, such as between photovoltaics 808 and electronics packages 840. FIG. 9A illustrates a side view of a photovoltaic window 902A that includes (1) an IGU 952 integrated with components of a photovoltaic window system and (2) a window frame assembly 950 in an assembled state, according to some embodiments. IGU 952 includes a seal 944, a spacer 946, an interior glass 984, and an exterior glass 986. Window frame assembly 950 includes a window frame 954 and a glazing stop 956. The photovoltaic window system includes an electronics package 940 and a photovoltaic 908.
Photovoltaic window 902A is similar to photovoltaic window 802A and additionally shows a possible positioning of an exterior sensor module 960 and an interior sensor module 962. In the illustrated example, exterior sensor module 960 is mounted to an outer surface of exterior glass 986 and interior sensor module 962 is mounted to an outer surface of interior glass 984. Neither exterior sensor module 960 nor interior sensor module 962 further obscures the visible region of photovoltaic window 902A. FIG. 9B illustrates a side view of a photovoltaic window 902B that includes (1) IGU 952 integrated with components of a photovoltaic window system and (2) window frame assembly 950 in an assembled state, according to some embodiments. IGU 952 includes seal 944, spacer 946, interior glass 984, and exterior glass 986. Window frame assembly 950 includes window frame 954 and glazing stop 956. The photovoltaic window system includes electronics package 940 and photovoltaic 908. Photovoltaic window 902B is similar to photovoltaic window 802B and additionally shows a possible positioning of an exterior sensor module 960 and an interior sensor module 962. In the illustrated example, exterior sensor module 960 is mounted to an outer surface of exterior glass 986 and interior sensor module 962 is mounted to an outer surface of interior glass 984. FIG. 9C illustrates a side view of a photovoltaic window 902C that includes (1) IGU 952 integrated with components of a photovoltaic window system and (2) window frame assembly 950 in an assembled state, according to some embodiments. IGU 952 includes seal 944, spacer 946, interior glass 984, and exterior glass 986. Window frame assembly 950 includes window frame 954 and glazing stop 956. The photovoltaic window system includes electronics package 940 and photovoltaic 908. Photovoltaic window 902C is similar to photovoltaic window 802C and additionally shows a possible positioning of an exterior sensor module 960 and an interior sensor module 962.
In the illustrated example, exterior sensor module 960 is mounted to an outer surface of exterior glass 986 and interior sensor module 962 is mounted to an outer surface of interior glass 984. FIG. 9D illustrates a side view of a photovoltaic window 902D that includes (1) IGU 952 integrated with components of a photovoltaic window system and (2) window frame assembly 950 in an assembled state, according to some embodiments. IGU 952 includes seal 944, spacer 946, interior glass 984, and exterior glass 986. Window frame assembly 950 includes window frame 954 and glazing stop 956. The photovoltaic window system includes electronics package 940 and photovoltaic 908. Photovoltaic window 902D is similar to photovoltaic window 802D and additionally shows a possible positioning of an exterior sensor module 960. In the illustrated example, exterior sensor module 960 is mounted to an inner surface of exterior glass 986. Exterior sensor module 960 may be positioned so as to not further obscure the visible region of photovoltaic window 902D. An interior sensor module can be utilized as well. FIG. 9E illustrates a side view of a photovoltaic window 902E that includes (1) IGU 952 integrated with components of a photovoltaic window system and (2) window frame assembly 950 in an assembled state, according to some embodiments. IGU 952 includes seal 944, spacer 946, interior glass 984, and exterior glass 986. Window frame assembly 950 includes window frame 954 and glazing stop 956. The photovoltaic window system includes electronics package 940 and photovoltaic 908. Photovoltaic window 902E is similar to photovoltaic window 802E and additionally shows a possible positioning of an exterior sensor module 960 and an interior sensor module 962. In the illustrated example, exterior sensor module 960 is mounted to an inner surface of exterior glass 986 and interior sensor module 962 is mounted to an inner surface of interior glass 984.
Neither exterior sensor module 960 nor interior sensor module 962 further obscures the visible region of photovoltaic window 902E. FIG. 9F illustrates a side view of a photovoltaic window 902F that includes (1) IGU 952 integrated with components of a photovoltaic window system and (2) window frame assembly 950 in an assembled state, according to some embodiments. IGU 952 includes seal 944, spacer 946, interior glass 984, and exterior glass 986. Window frame assembly 950 includes window frame 954 and glazing stop 956. The photovoltaic window system includes electronics package 940 and photovoltaic 908. Photovoltaic window 902F is similar to photovoltaic window 802F and additionally shows a possible positioning of an exterior sensor module 960 and an interior sensor module 962. In the illustrated example, exterior sensor module 960 is mounted to an inner surface of exterior glass 986 and interior sensor module 962 is mounted to an inner surface of interior glass 984. Many variations and modifications to the examples described in FIGS. 9A-9F exist and are considered to be within the scope of the present disclosure. For example, while not explicitly shown in FIGS. 9A-9F, it is to be understood that electrical wires may traverse between different components of photovoltaic windows 902, such as between photovoltaics 908 and electronics packages 940, between interior sensor modules 962 and electronics packages 940, and between exterior sensor modules 960 and electronics packages 940. Certain embodiments, such as those shown in FIGS. 9D and 9E, may have exterior sensor modules 960 (and also interior sensor module 962 for FIG. 9E) integrated with electronics packages 940 (or the container of electronics packages 940) and may beneficially lack electrical wires connecting to these components. FIG. 10 illustrates an example of a user interface screen 1011, according to some embodiments. As described herein, a user device may provide a user interface to facilitate user access and interaction with the home automation system.
WhileFIG.10describes a graphical user interface, it is to be understood that other user interfaces can also be substituted. In some embodiments, user interface screen1011can occupy the entire display area of the user device (e.g., if user device is a mobile phone or other device with a relatively small display). In other embodiments, user interface screen1011can occupy a portion of the display area (e.g., a window or pane on a virtual desktop displayed on a desktop or laptop computer). The user interface can incorporate various graphical control elements that the user can select in order to invoke functionality of the application program that generates the interface screens. For example, if user interface screen1011is presented on a touchscreen display, the user can touch a control element to select it. As another example, if the user interface is presented on a display that is not a touchscreen, the user can operate a pointing device (e.g., mouse, trackpad, etc.) to position a cursor over a control element, then select the control element by tapping or clicking. User interface screen1011can be a starting screen displayed when the user first launches an application program to control the home automation system. In the examples herein, the automated environment is assumed to be a home, but it is to be understood that other automated environments can be configured and controlled using similar interfaces. User interface screen1011can provide a set of menus1013with graphical control elements that, when selected, cause the application program to display information in accordance with the selected menu. For example, selecting “HOME” may cause user interface screen1011to display a home screen, selecting “WINDOW” may cause user interface screen1011to display a representation1015of a photovoltaic window (e.g., a graphical representation), selecting “3D FLOORPLAN” may cause user interface screen1011to display the three-dimensional (3D) floor plan of the home, and the like. 
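The menu-to-screen behavior described above amounts to a dispatch from a selected control element to a display routine. A minimal sketch follows; the handler names and return strings are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical dispatch table for the menus1013 behavior described above.
# Handler names and return values are illustrative assumptions.

def show_home_screen():
    return "home screen"

def show_window_representation():
    return "representation of a photovoltaic window"

def show_3d_floorplan():
    return "3D floor plan of the home"

MENU_HANDLERS = {
    "HOME": show_home_screen,
    "WINDOW": show_window_representation,
    "3D FLOORPLAN": show_3d_floorplan,
}

def on_menu_selected(label):
    """Invoke the display handler for the selected menu control element."""
    handler = MENU_HANDLERS.get(label)
    if handler is None:
        raise ValueError(f"unknown menu item: {label}")
    return handler()
```

Selecting a control element then reduces to a table lookup, e.g. `on_menu_selected("WINDOW")`, regardless of whether the selection came from a touchscreen tap or a pointer click.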
In some embodiments, selecting "WINDOW" may cause user interface screen1011to display an information panel1017that may include information regarding the photovoltaic window contained in the data signal. For example, information panel1017may include sensor data captured by the sensors of the photovoltaic window including an outside temperature, an inside temperature, an amount of detected light, and the like. Information panel1017may also include power data such as an amount of electrical power stored in the photovoltaic window, an amount of electrical power currently being generated by the photovoltaic window, and the like. Information panel1017may further include a video stream1019shown in real time as captured by an exterior-facing camera of the selected photovoltaic window. In various embodiments, user interface screen1011may provide buttons and other graphical control elements through which a user may interact with the home automation system. For example, in some embodiments the application program can receive user input to configure the model of the home automation system, e.g., by assigning photovoltaic windows having certain window functions to certain rooms or locations within the home, by assigning home functions to certain rooms or locations within the home, by defining window actions, home actions, or mappings between sensor data, power data, window actions, and home actions, and the like. In some embodiments, user interface screen1011can provide a set of home monitoring and control features1021. These features can be selected to modify what information is displayed on user interface screen1011and what interactable graphical elements may be provided. In the illustrated example, home monitoring and control features1021include "Window Status", "Temperature", "Lights", "Perimeter Check", "HVAC", and "Media". 
In some instances, selecting one of home monitoring and control features1021may cause various graphical elements to be overlaid onto the 3D floor plan of the home, as will be shown in reference toFIGS.11A-11H. FIG.11Aillustrates an example of a user interface screen1111A, according to some embodiments. In the illustrated example, graphical elements are overlaid onto the 3D floor plan of the home that show the status of each photovoltaic window, i.e., whether the window is opened or closed, locked or unlocked, or whether the blinds are up or down, etc. In some instances, a user may select a particular photovoltaic window on user interface screen1111A to control a window function to perform a particular window action, i.e., causing an opened window to close, causing an unlocked window to lock, etc. FIG.11Billustrates an example of a user interface screen1111B, according to some embodiments. In the illustrated example, graphical elements are overlaid onto the 3D floor plan of the home that show the status of the home's perimeter. In some instances, a solid line can be overlaid onto the outer walls of the home to show that all doors and windows are closed and locked. FIG.11Cillustrates an example of a user interface screen1111C, according to some embodiments. In the illustrated example, graphical elements are overlaid onto the 3D floor plan of the home that show a 360° security view. In some instances, objects detected by exterior-facing cameras (or optionally interior-facing cameras) on the photovoltaic windows can be indicated on user interface screen1111C. Furthermore, in some embodiments, images and/or videos captured by the exterior-facing cameras of multiple windows may be stitched together to form a real-time 360° view of the exterior of the home (as a bird's eye or first-person perspective). 
Alternatively or additionally, in a similar manner, images and/or videos captured by the interior-facing cameras of multiple windows may be stitched together to form a real-time view of the interior of the home (as a bird's eye or first-person perspective). User interface screen1111C may be configured to display one or both of these interior and exterior real-time views separately or simultaneously for security purposes or any other purpose. FIG.11Dillustrates an example of a user interface screen1111D, according to some embodiments. In the illustrated example, graphical elements are overlaid onto the 3D floor plan of the home that show any vibrations in the home as detected using vibration sensors equipped at the photovoltaic windows. In some instances, such detected vibrations can serve as a break-in detection system. FIG.11Eillustrates an example of a user interface screen1111E, according to some embodiments. In the illustrated example, graphical elements are overlaid onto the 3D floor plan of the home that show dynamic temperature monitoring for local HVAC control. In some instances, temperatures detected by interior temperature sensors on the photovoltaic windows can be displayed in corresponding rooms. In some instances, the user may select a displayed temperature to open up a window through which a new room temperature may be entered. In response, the home automation system may control one or more window functions or home functions to modify the local temperature toward the entered room temperature. FIG.11Fillustrates an example of a user interface screen1111F, according to some embodiments. In the illustrated example, graphical elements are overlaid onto the 3D floor plan of the home that show automated blinds control. In some instances, a user may select certain photovoltaic windows to allow (or disallow) access to control the window's blinds by other home monitoring and control features. 
In some instances, a user may select certain photovoltaic windows to toggle the status of the window's blinds to cause the blinds to open or close. FIG.11Gillustrates an example of a user interface screen1111G, according to some embodiments. In the illustrated example, graphical elements are overlaid onto the 3D floor plan of the home that show automated window control. In some instances, a user may select certain photovoltaic windows to allow (or disallow) access to control the window's opening/closing mechanism by other home monitoring and control features. In some instances, a user may select certain photovoltaic windows to toggle the status of the window to cause the window to open or close. FIG.11Hillustrates an example of a user interface screen1111H, according to some embodiments. In the illustrated example, graphical elements are overlaid onto the 3D floor plan of the home that show automated lighting or other smart home features based on illumination. In some instances, a user may view which lighting devices (window functions or home functions) are currently turned on and/or which rooms contain such devices. In some instances, a user may select certain lighting or other devices to change their status (e.g., a user may turn off interior or exterior lighting). FIG.12Aillustrates an example mapping1290A that may be implemented within a home automation system, according to some embodiments. In some instances, mappings1290may relate sensor data captured by sensors1228of the photovoltaic windows to certain window actions performed by window functions1222of the photovoltaic windows and/or certain home actions performed by home functions1224. In the illustrated example, in response to motion sensors at the photovoltaic windows detecting objects outside of the home, video can be recorded and audio can be triggered to scare potential intruders away (e.g., dog barking), thereby implementing a security system within the home automation system. 
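A mapping such as that of FIG.12A can be sketched as a small rule table relating sensor data to triggered window/home actions. The condition functions, action labels, and the 5 degree Celsius threshold below are assumptions for illustration; the disclosure leaves the predetermined thresholds unspecified.

```python
# Sketch of mappings1290: each rule pairs a condition on sensor data with
# the window/home actions it triggers. Action labels and the temperature
# threshold are illustrative assumptions.

def apply_mappings(sensor_data, rules):
    """Return all actions whose condition holds for the given sensor data."""
    actions = []
    for condition, triggered in rules:
        if condition(sensor_data):
            actions.extend(triggered)
    return actions

rules = [
    # Mapping 1290A: exterior motion -> record video, play deterrent audio.
    (lambda d: d.get("exterior_motion", False),
     ["record_video", "play_audio:dog_barking"]),
    # A 1290B-style rule: outside temperature below a threshold -> close windows.
    (lambda d: d.get("outside_temp_c", 99.0) < 5.0,
     ["close_windows"]),
]

print(apply_mappings({"exterior_motion": True, "outside_temp_c": 20.0}, rules))
```

A hub device could evaluate such a table each time a data signal arrives and emit the resulting actions as control signals.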
FIG.12Billustrates an example mapping1290B that may be implemented within a home automation system, according to some embodiments. In the illustrated example, in response to exterior temperature sensors at the photovoltaic windows detecting that the outside temperature is below a predetermined threshold, the windows may be closed to prevent the inside temperature from dropping. As such, opening and/or closing of the windows can be caused for ventilation based on temperature/weather. FIG.12Cillustrates an example mapping1290C that may be implemented within a home automation system, according to some embodiments. In the illustrated example, in response to exterior light sensors at the photovoltaic windows detecting that the exterior light is above a predetermined threshold, the window blinds may be opened, the lighting at the window may be turned off, and/or the exterior lighting may be turned off. Alternatively or additionally, the tint state of the window may be modified using electrochromic control. As such, lighting and window tint can be controlled based on external illumination. FIG.12Dillustrates an example mapping1290D that may be implemented within a home automation system, according to some embodiments. In the illustrated example, in response to exterior motion sensors at the photovoltaic windows detecting an exterior object (or interior motion sensors at the photovoltaic windows detecting an interior object), the interior or exterior lighting may be modified to illuminate the detected object. As such, lighting can map the movement of individuals around and inside of a home or building. FIG.12Eillustrates an example mapping1290E that may be implemented within a home automation system, according to some embodiments. 
In the illustrated example, in response to exterior temperature sensors at the photovoltaic windows detecting certain combinations of temperature and humidity and interior temperature sensor at the photovoltaic windows detecting certain interior temperatures, the windows, blinds, and HVAC may be modified in a particular manner to conserve energy using a localized model. FIG.12Fillustrates an example mapping1290F that may be implemented within a home automation system, according to some embodiments. In the illustrated example, in response to the home's heating or cooling system being activated, the photovoltaic windows may be closed to preserve energy. Many variations and modifications to the examples described inFIGS.12A-12Fexist and are considered to be within the scope of the present disclosure. For example, any cause and effect mapping between the illustrated tables are possible, including, for example, any mapping from the Environment (Outside) table and/or the Environment (Inside) table to the Sensors table, any mapping from the Sensors table to the Window Functions/Actions table and/or the Home Functions/Actions table, or any mapping from the Home Functions/Actions table to the Window Functions/Actions table, among other possibilities. FIG.13illustrates a method1300of operating a photovoltaic window, according to some embodiments. One or more steps of method1300may be omitted during performance of method1300, and steps of method1300may be performed in any order and/or in parallel. One or more steps of method1300may be performed by one or more processors. Method1300may be implemented as a computer-readable medium or computer program product comprising instructions which, when the program is executed by one or more computers, cause the one or more computers to carry out the steps of method1300. 
At step1302, electrical power is generated using a photovoltaic (e.g., photovoltaics108,208,808,908) of a photovoltaic window (e.g., photovoltaic windows102,202,302,402,502,802,902). The electrical power may be generated from incident light onto the photovoltaic window. The photovoltaic may be disposed in parallel with an interior glass (e.g., interior glass584,784,884,984) or an exterior glass (e.g., exterior glass586,786,886,986) of a glass unit (e.g., IGUs352,452,552,752,852,952) of the photovoltaic window. At step1304, the electrical power is sent from the photovoltaic to an electronics package (e.g., electronics packages240,440,540,640,840,940) of the photovoltaic window. The electronics package may be coupled with the glass unit. For example, the electronics package may be coupled directly with the glass unit or coupled indirectly with the glass unit via one or more intermediate components. At step1306, the electrical power is stored at the electronics package. The electrical power may be stored at a power storage (e.g., power storages212,412,612) of the electronics package. At step1308, the electrical power is distributed from the electronics package to at least one electrical load (e.g., electrical load236) of the photovoltaic window. The at least one electrical load may include a wireless communication system (e.g., wireless communication systems216,416,616), one or more sensors (e.g., sensors128,228,328,1228), one or more window functions (e.g., window functions122,222,1222), and/or a power outlet (e.g., power outlets226,326,426,526,626). At step1310, the electrical power is consumed by the at least one electrical load. FIG.14illustrates a method1400of operating a home automation system, according to some embodiments. One or more steps of method1400may be omitted during performance of method1400, and steps of method1400may be performed in any order and/or in parallel. One or more steps of method1400may be performed by one or more processors. 
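The generate, store, distribute, and consume flow of method1300 can be illustrated with a toy model; the class names, method names, and kWh figures below are assumptions for illustration, not elements of the disclosure.

```python
# Toy model of method 1300's power flow. Names and figures are assumed.

class ElectricalLoad:
    """Stands in for a sensor, window function, or other load (step 1310)."""

    def __init__(self, name):
        self.name = name
        self.consumed_kwh = 0.0

    def consume(self, kwh):
        self.consumed_kwh += kwh

class ElectronicsPackage:
    """Receives, stores, and distributes photovoltaic power."""

    def __init__(self):
        self.stored_kwh = 0.0

    def receive(self, kwh):
        # Steps 1304/1306: power sent from the photovoltaic is stored.
        self.stored_kwh += kwh

    def distribute(self, load, kwh):
        # Steps 1308/1310: stored power is distributed to and consumed by a load.
        if kwh > self.stored_kwh:
            raise RuntimeError("insufficient stored power")
        self.stored_kwh -= kwh
        load.consume(kwh)

package = ElectronicsPackage()
package.receive(0.0150)            # step 1302: one day's generation (assumed)
sensor = ElectricalLoad("sensor")
package.distribute(sensor, 0.0025)
```

The remaining `stored_kwh` after distribution models the power available to the other electrical loads of the window.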
Method1400may be implemented as a computer-readable medium or computer program product comprising instructions which, when the program is executed by one or more computers, cause the one or more computers to carry out the steps of method1400. At step1402, electrical power is generated using a photovoltaic (e.g., photovoltaics108,208,808,908) of a photovoltaic window (e.g., photovoltaic windows102,202,302,402,502,802,902). The photovoltaic window may be one of one or more photovoltaic windows of a home automation system (e.g., home automation systems100,200). The electrical power may be generated from incident light onto the photovoltaic window. At step1404, the electrical power is sent from the photovoltaic to a wireless communication system (e.g., wireless communication systems216,416,616) of the photovoltaic window. Receiving the electrical power may enable wireless communication between the wireless communication system and a hub device (e.g., hub devices134,234) of the home automation system. The wireless communication system may be solely powered by the electrical power generated by the photovoltaic. At step1406, a data signal (e.g., data signal292) is sent from the wireless communication system to the hub device. The data signal may include information regarding the photovoltaic window. The data signal may include sensor data and/or power data. At step1408, a control signal (e.g., control signal294) is sent from the hub device to the wireless communication system. At step1410, one or more window actions are performed by one or more window functions (e.g., window functions122,222,1222) at the photovoltaic window in accordance with the control signal. FIG.15illustrates a method1500of controlling a photovoltaic window from a user device, according to some embodiments. One or more steps of method1500may be omitted during performance of method1500, and steps of method1500may be performed in any order and/or in parallel. 
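The window/hub exchange of method1400 (steps 1406 through 1410) can be sketched as follows. The message shapes, field names, and the hub's decision rule are assumptions for illustration only.

```python
# Sketch of method 1400's signaling: data signal (step 1406), control
# signal (step 1408), window actions (step 1410). Field names and the
# hub's rule are illustrative assumptions.

def build_data_signal(sensor_data, power_data):
    """Step 1406: data signal carrying sensor data and power data."""
    return {"sensors": sensor_data, "power": power_data}

def hub_decide(data_signal):
    """Step 1408: the hub device derives a control signal."""
    commands = []
    if data_signal["sensors"].get("outside_temp_c", 99.0) < 5.0:
        commands.append("close_window")
    return {"commands": commands}

def perform_window_actions(control_signal):
    """Step 1410: window functions perform the commanded actions."""
    return [f"performed:{cmd}" for cmd in control_signal["commands"]]

signal = build_data_signal({"outside_temp_c": 2.0}, {"stored_kwh": 0.01})
control = hub_decide(signal)
print(perform_window_actions(control))
```

In this sketch the wireless communication system simply carries the two dictionaries between window and hub; the decision logic lives at the hub, as in method1400.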
One or more steps of method1500may be performed by one or more processors. Method1500may be implemented as a computer-readable medium or computer program product comprising instructions which, when the program is executed by one or more computers, cause the one or more computers to carry out the steps of method1500. At step1502, a representation (e.g., representation1015) of a photovoltaic window (e.g., photovoltaic windows102,202,302,402,502,802,902) is generated. The representation may be a graphical representation. The representation may be generated at a user device (e.g., user devices120,220,1920). The representation may be generated by an application program running (or executing) on the user device. At step1504, a user input is received via a user interface (e.g., user interface1929) of the user device. The user input may indicate a selection of the representation of the photovoltaic window. The user input may correspond to a user (e.g., user104) interacting with a graphical element on the user interface and/or a display (e.g., display1937) of the user device causing the representation of the photovoltaic window to be displayed, or causing the selection of the already-displayed representation of the photovoltaic window. At step1506, a control signal (e.g., control signal294) for modifying an operation of the photovoltaic window is generated. The control signal may be generated in response to receiving the user input. The control signal may be generated at the user device. At step1508, the control signal is sent to a wireless communication system (e.g., wireless communication systems216,416,616) of the photovoltaic window. The control signal may be sent by the user device. The control signal may be sent using a communication interface (e.g., communication interface1935) of the user device. The wireless communication system may be powered solely by electrical power generated by a photovoltaic (e.g., photovoltaics108,208,808,908) of the photovoltaic window. 
The electrical power may be generated from incident light onto the photovoltaic window. At step1510, a data signal (e.g., data signal292) is received. The data signal may include information regarding the photovoltaic window. The data signal may be generated by the photovoltaic window in response to receiving the control signal. The data signal may be received by the user device. The user device may display the information regarding the photovoltaic window on the display. FIG.16illustrates various plots showing daily photovoltaic energy generation, according to some embodiments. The illustrated data corresponds to the daily energy generated from a 3 feet by 5 feet photovoltaic window at 1% efficiency facing four different directions: North, East, South, and West. Different data points correspond to different regions of the United States. It can be observed that the daily energy generated is lowest when the photovoltaic window is facing North, and that some of the highest amounts of energy are generated when the photovoltaic window is facing South. The lower bounds for the data are also shown in each of the four plots. FIG.17illustrates a table showing daily photovoltaic energy generation, according to some embodiments. The data in the illustrated table is calculated based on the data shown in the plots inFIG.16. For example, the table shows that the lower bound daily energy generation for all four directions is 0.0025 kWh/day (and 0.0075 kWh/day when only considering climate zones 1-3), for East/South/West is 0.0075 kWh/day (and 0.0175 kWh/day when only considering climate zones 1-3), and for South only is 0.0150 kWh/day. FIG.18illustrates a plot showing the daily energy consumption for various devices, according to some embodiments. The devices include sensors, cameras, smart home devices, speakers, displays, lights, and other window functions such as motorized blinds and electrochromic devices. 
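The comparison that FIG.18 invites, device draw versus window generation, can be checked numerically against the lower-bound figures quoted from FIG.17. The per-device daily draws below are illustrative assumptions, not values taken from FIG.18.

```python
# Lower-bound daily generation (kWh/day) quoted from FIG. 17.
LOWER_BOUND_GENERATION = {
    "any_direction": 0.0025,
    "east_south_west": 0.0075,
    "south_only": 0.0150,
}

# Assumed daily draws (kWh/day) for a few devices, for illustration only.
DEVICE_DRAW_KWH_PER_DAY = {
    "temperature_sensor": 0.001,
    "motion_sensor": 0.002,
    "camera": 0.050,
}

def powerable_devices(generation_kwh_per_day):
    """Devices whose assumed daily draw fits within the daily generation."""
    return sorted(name for name, draw in DEVICE_DRAW_KWH_PER_DAY.items()
                  if draw <= generation_kwh_per_day)

print(powerable_devices(LOWER_BOUND_GENERATION["south_only"]))
```

At these assumed draws, the south-facing lower bound covers the low-power sensors but not the camera, mirroring the qualitative point that lower-power devices are powerable today while hungrier devices depend on efficiency improvements.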
Also shown inFIG.18are the lower bound daily energy generation figures fromFIGS.16and17. It can be observed that the photovoltaic windows can power many different electrical components and devices as currently configured. It should be noted that more and more devices will become compatible with the photovoltaic windows as the energy efficiencies increase for both the devices as well as the photovoltaic windows. FIG.19illustrates a simplified block diagram of a user device1920, according to some embodiments. User device1920can implement any or all of the functions, behaviors, and capabilities described herein, as well as other functions, behaviors, and capabilities not expressly described. User device1920can include processing subsystem1902, storage device1933, user interface1929, communication interface1935, and display1937. User device1920can also include other components (not explicitly shown) such as a battery, power controllers, and other components operable to provide various enhanced capabilities. In various embodiments, user device1920can be implemented in a desktop computer, laptop computer, tablet computer, smart phone, other mobile phone, wearable computing device, or other systems having any desired form factor. Further, user device1920can be implemented partly in a base station and partly in a mobile unit that communicates with the base station and provides a user interface. Storage device1933can be implemented, e.g., using disk, flash memory, or any other non-transitory storage medium, or a combination of media, and can include volatile and/or non-volatile media. In some embodiments, storage device1933can store one or more application and/or operating system programs to be executed by processing subsystem1902, including programs to implement various operations described above. For example, storage device1933can store an application program for presenting the user interface screens described inFIGS.10-11Hon display1937and/or user interface1929. 
User interface1929can include input devices such as a touch pad, touch screen, scroll wheel, click wheel, dial, button, switch, keypad, microphone, or the like, as well as output devices such as a video screen, indicator lights, speakers, headphone jacks, or the like, together with supporting electronics (e.g., digital to analog or analog to digital converters, signal processors, or the like). A user can operate input devices of user interface1929to invoke the functionality of user device1920and can view and/or hear output from user device1920via output devices of user interface1929(and/or via display1937, which may be integrated with user interface1929). Processing subsystem1902can be implemented as one or more integrated circuits, e.g., one or more single core or multi core microprocessors or microcontrollers, examples of which are known in the art. In operation, processing subsystem1902can control the operation of user device1920. In various embodiments, processing subsystem1902can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processing subsystem1902and/or in storage media such as storage device1933. Through suitable programming, processing subsystem1902can provide various functionality for user device1920. For example, in some embodiments, processing subsystem1902can implement various processes (or portions thereof) described above as being implemented by a user device. Processing subsystem1902can also execute other programs to control other functions of user device1920, including application programs that may be stored in storage device1933. Communication interface1935can provide voice and/or data communication capability for user device1920. 
In some embodiments communication interface1935can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, data network technology such as 3G, 4G/LTE, WiFi, other IEEE 802.11 family standards, or other mobile communication technologies, or any combination thereof), components for short range wireless communication (e.g., using Bluetooth and/or Bluetooth LE standards, NFC, etc.), and/or other components. In some embodiments communication interface1935can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface. FIG.20illustrates an example computer system2000comprising various hardware elements, according to some embodiments. Computer system2000may be incorporated into or integrated with devices described herein and/or may be configured to perform some or all of the steps of the methods provided by various embodiments. For example, in various embodiments, computer system2000may be incorporated into photovoltaic windows, hub devices, or user devices. It should be noted thatFIG.20is meant only to provide a generalized illustration of various components, any or all of which may be utilized as appropriate.FIG.20, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner. In the illustrated example, computer system2000includes a communication medium2002, one or more processor(s)2004, one or more input device(s)2006, one or more output device(s)2008, a communications subsystem2010, and one or more memory device(s)2012. Computer system2000may be implemented using various hardware implementations and embedded system technologies. 
For example, one or more elements of computer system2000may be implemented as a field-programmable gate array (FPGA), such as those commercially available by XILINX®, INTEL®, or LATTICE SEMICONDUCTOR®, a system-on-a-chip (SoC), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a microcontroller, and/or a hybrid device, such as an SoC FPGA, among other possibilities. The various hardware elements of computer system2000may be communicatively coupled via communication medium2002. While communication medium2002is illustrated as a single connection for purposes of clarity, it should be understood that communication medium2002may include various numbers and types of communication media for transferring data between hardware elements. For example, communication medium2002may include one or more wires (e.g., conductive traces, paths, or leads on a printed circuit board (PCB) or integrated circuit (IC), microstrips, striplines, coaxial cables), one or more optical waveguides (e.g., optical fibers, strip waveguides), and/or one or more wireless connections or links (e.g., infrared wireless communication, radio communication, microwave wireless communication), among other possibilities. In some embodiments, communication medium2002may include one or more buses connecting pins of the hardware elements of computer system2000. For example, communication medium2002may include a bus that connects processor(s)2004with main memory2014, referred to as a system bus, and a bus that connects main memory2014with input device(s)2006or output device(s)2008, referred to as an expansion bus. The system bus may itself consist of several buses, including an address bus, a data bus, and a control bus. The address bus may carry a memory address from processor(s)2004to the address bus circuitry associated with main memory2014in order for the data bus to access and carry the data contained at the memory address back to processor(s)2004. 
The control bus may carry commands from processor(s)2004and return status signals from main memory2014. Each bus may include multiple wires for carrying multiple bits of information and each bus may support serial or parallel transmission of data. Processor(s)2004may include one or more central processing units (CPUs), graphics processing units (GPUs), neural network processors or accelerators, digital signal processors (DSPs), and/or other general-purpose or special-purpose processors capable of executing instructions. A CPU may take the form of a microprocessor, which may be fabricated on a single IC chip of metal-oxide-semiconductor field-effect transistor (MOSFET) construction. Processor(s)2004may include one or more multi-core processors, in which each core may read and execute program instructions concurrently with the other cores, increasing speed for programs that support multithreading. Input device(s)2006may include one or more of various user input devices such as a mouse, a keyboard, a microphone, as well as various sensor input devices, such as an image capture device, a pressure sensor (e.g., barometer, tactile sensor), a temperature sensor (e.g., thermometer, thermocouple, thermistor), a movement sensor (e.g., accelerometer, gyroscope, tilt sensor), a light sensor (e.g., photodiode, photodetector, charge-coupled device), and/or the like. Input device(s)2006may also include devices for reading and/or receiving removable storage devices or other removable media. Such removable media may include optical discs (e.g., Blu-ray discs, DVDs, CDs), memory cards (e.g., CompactFlash card, Secure Digital (SD) card, Memory Stick), floppy disks, Universal Serial Bus (USB) flash drives, external hard disk drives (HDDs) or solid-state drives (SSDs), and/or the like. 
Output device(s)2008may include one or more of various devices that convert information into human-readable form, such as without limitation a display device, a speaker, a printer, a haptic or tactile device, and/or the like. Output device(s)2008may also include devices for writing to removable storage devices or other removable media, such as those described in reference to input device(s)2006. Output device(s)2008may also include various actuators for causing physical movement of one or more components. Such actuators may be hydraulic, pneumatic, or electric, and may be controlled using control signals generated by computer system2000. Communications subsystem2010may include hardware components for connecting computer system2000to systems or devices that are located external to computer system2000, such as over a computer network. In various embodiments, communications subsystem2010may include a wired communication device coupled to one or more input/output ports (e.g., a universal asynchronous receiver-transmitter (UART)), an optical communication device (e.g., an optical modem), an infrared communication device, a radio communication device (e.g., a wireless network interface controller, a BLUETOOTH® device, an IEEE 802.11 device, a Wi-Fi device, a Wi-Max device, a cellular device), among other possibilities. Memory device(s)2012may include the various data storage devices of computer system2000. For example, memory device(s)2012may include various types of computer memory with various response times and capacities, from faster response times and lower capacity memory, such as processor registers and caches (e.g., L0, L1, L2), to medium response time and medium capacity memory, such as random-access memory (RAM), to slower response times and higher capacity memory, such as solid-state drives and hard disk drives. 
While processor(s) 2004 and memory device(s) 2012 are illustrated as being separate elements, it should be understood that processor(s) 2004 may include varying levels of on-processor memory, such as processor registers and caches that may be utilized by a single processor or shared between multiple processors. Memory device(s) 2012 may include main memory 2014, which may be directly accessible by processor(s) 2004 via the memory bus of communication medium 2002. For example, processor(s) 2004 may continuously read and execute instructions stored in main memory 2014. As such, various software elements may be loaded into main memory 2014 to be read and executed by processor(s) 2004 as illustrated in FIG. 20. Typically, main memory 2014 is volatile memory, which loses all data when power is turned off and accordingly needs power to preserve stored data. Main memory 2014 may further include a small portion of non-volatile memory containing software (e.g., firmware, such as BIOS) that is used for reading other software stored in memory device(s) 2012 into main memory 2014. In some embodiments, the volatile memory of main memory 2014 is implemented as RAM, such as dynamic random-access memory (DRAM), and the non-volatile memory of main memory 2014 is implemented as read-only memory (ROM), such as flash memory, erasable programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM). Computer system 2000 may include software elements, shown as being currently located within main memory 2014, which may include an operating system, device driver(s), firmware, compilers, and/or other code, such as one or more application programs, which may include computer programs provided by various embodiments of the present disclosure. Merely by way of example, one or more steps described with respect to any methods discussed above may be implemented as instructions 2016, which are executable by computer system 2000.
In one example, such instructions 2016 may be received by computer system 2000 using communications subsystem 2010 (e.g., via a wireless or wired signal that carries instructions 2016), carried by communication medium 2002 to memory device(s) 2012, stored within memory device(s) 2012, read into main memory 2014, and executed by processor(s) 2004 to perform one or more steps of the described methods. In another example, instructions 2016 may be received by computer system 2000 using input device(s) 2006 (e.g., via a reader for removable media), carried by communication medium 2002 to memory device(s) 2012, stored within memory device(s) 2012, read into main memory 2014, and executed by processor(s) 2004 to perform one or more steps of the described methods. In some embodiments of the present disclosure, instructions 2016 are stored on a computer-readable storage medium (or simply computer-readable medium). Such a computer-readable medium may be non-transitory and may therefore be referred to as a non-transitory computer-readable medium. In some cases, the non-transitory computer-readable medium may be incorporated within computer system 2000. For example, the non-transitory computer-readable medium may be one of memory device(s) 2012 (as shown in FIG. 20). In some cases, the non-transitory computer-readable medium may be separate from computer system 2000. In one example, the non-transitory computer-readable medium may be a removable medium provided to input device(s) 2006 (as shown in FIG. 20), such as those described in reference to input device(s) 2006, with instructions 2016 being read into computer system 2000 by input device(s) 2006. In another example, the non-transitory computer-readable medium may be a component of a remote electronic device, such as a mobile phone, that may wirelessly transmit a data signal that carries instructions 2016 to computer system 2000 and that is received by communications subsystem 2010 (as shown in FIG. 20).
Instructions 2016 may take any suitable form to be read and/or executed by computer system 2000. For example, instructions 2016 may be source code (written in a human-readable programming language such as Java, C, C++, C#, Python), object code, assembly language, machine code, microcode, executable code, and/or the like. In one example, instructions 2016 are provided to computer system 2000 in the form of source code, and a compiler is used to translate instructions 2016 from source code to machine code, which may then be read into main memory 2014 for execution by processor(s) 2004. As another example, instructions 2016 are provided to computer system 2000 in the form of an executable file with machine code that may immediately be read into main memory 2014 for execution by processor(s) 2004. In various examples, instructions 2016 may be provided to computer system 2000 in encrypted or unencrypted form, compressed or uncompressed form, as an installation package or an initialization for a broader software deployment, among other possibilities. In one aspect of the present disclosure, a system (e.g., computer system 2000) is provided to perform methods in accordance with various embodiments of the present disclosure. For example, some embodiments may include a system comprising one or more processors (e.g., processor(s) 2004) that are communicatively coupled to a non-transitory computer-readable medium (e.g., memory device(s) 2012 or main memory 2014). The non-transitory computer-readable medium may have instructions (e.g., instructions 2016) stored therein that, when executed by the one or more processors, cause the one or more processors to perform the methods described in the various embodiments. In another aspect of the present disclosure, a computer-program product that includes instructions (e.g., instructions 2016) is provided to perform methods in accordance with various embodiments of the present disclosure.
The computer-program product may be tangibly embodied in a non-transitory computer-readable medium (e.g., memory device(s) 2012 or main memory 2014). The instructions may be configured to cause one or more processors (e.g., processor(s) 2004) to perform the methods described in the various embodiments. In another aspect of the present disclosure, a non-transitory computer-readable medium (e.g., memory device(s) 2012 or main memory 2014) is provided. The non-transitory computer-readable medium may have instructions (e.g., instructions 2016) stored therein that, when executed by one or more processors (e.g., processor(s) 2004), cause the one or more processors to perform the methods described in the various embodiments. The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims. Specific details are given in the description to provide a thorough understanding of exemplary configurations including implementations. However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims.
Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure. Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the technology. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not bind the scope of the claims. As used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Thus, for example, reference to “a user” includes reference to one or more of such users, and reference to “a processor” includes reference to one or more processors and equivalents thereof known to those skilled in the art, and so forth. Also, the words “comprise,” “comprising,” “contains,” “containing,” “include,” “including,” and “includes,” when used in this specification and in the following claims, are intended to specify the presence of stated features, integers, components, or steps, but they do not preclude the presence or addition of one or more other features, integers, components, steps, acts, or groups. It is also understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application and scope of the appended claims. 
The disclosures of the following patent applications are incorporated by reference in their entirety for all purposes: U.S. Patent Application Ser. No. 62/836,161, U.S. Patent Application Ser. No. 63/086,923, U.S. patent application Ser. No. 13/358,075 (which is U.S. Patent Application Publication No. 2012/0186623), U.S. patent application Ser. No. 13/495,379 (which is U.S. Patent Application Publication No. 2013/0333755), PCT Patent Publication No. WO 2020/056361, and PCT Patent Publication No. WO 2018/232358.
DETAILED DESCRIPTION
Various embodiments of the present invention will now be described. In the following description, some specific details, such as example circuits and example values for these circuit components, are included to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the present invention can be practiced without one or more of these specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, processes or operations are not shown or described in detail to avoid obscuring aspects of the present invention. Throughout the specification and claims, the term “coupled,” as used herein, is defined as directly or indirectly connected in an electrical or non-electrical manner. When an element is described as “connected” or “coupled” to another element, it can be directly connected or coupled to the other element, or there could exist one or more intermediate elements. In contrast, when an element is referred to as “directly connected” or “directly coupled” to another element, there is no intermediate element. The terms “a,” “an,” and “the” include plural references, and the term “in” includes “in” and “on”. Reference to “one embodiment”, “an embodiment”, “an example” or “examples” means: certain features, structures, or characteristics are contained in at least one embodiment of the present invention. These “one embodiment”, “an embodiment”, “an example” and “examples” are not necessarily directed to the same embodiment or example, although they may be. Furthermore, the features, structures, or characteristics may be combined in one or more embodiments or examples. In addition, it should be noted that the drawings are provided for illustration, and are not necessarily to scale. The term “or” is an inclusive “or” operator, and is equivalent to the term “and/or” herein, unless the context clearly dictates otherwise.
The term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. The term “circuit” means at least either a single component or a multiplicity of components, either active and/or passive, that are coupled together to provide a desired function. The term “signal” means at least one current, voltage, charge, temperature, data, or other signal. Where either a field effect transistor (“FET”) or a bipolar junction transistor (“BJT”) may be employed as an embodiment of a transistor, the scope of the words “gate”, “drain”, and “source” includes “base”, “collector”, and “emitter”, respectively, and vice versa. Those skilled in the art should understand that the meanings of the terms identified above do not necessarily limit the terms, but merely provide illustrative examples for the terms. FIG. 1 illustrates a block diagram of a system 100 for connecting a power supply to a load in accordance with an embodiment of the present invention. In the example of FIG. 1, the system 100 may comprise a controller 101 and one or more monolithic or single-chip integrated circuit (IC) switching devices 103(i), where i is a positive integer used to index and distinguish the switching devices 103(i). In the embodiments of the present disclosure, the system 100 is illustrated as including three IC switching devices, labeled 103(1), 103(2), and 103(3), respectively; that is, the values of i are 1, 2, and 3 for marking and distinguishing the three IC switching devices. Those skilled in the art should understand that the number of the IC switching devices included in the system 100 can be changed according to actual application requirements (such as the number of loads), and is not limited to three.
By extension, the system 100 may include a plurality of N monolithic or single-chip IC switching devices 103(i), where N is a positive integer and the value of i traverses from 1 to N, so as to mark and distinguish the N monolithic or single-chip IC switching devices. Those of ordinary skill in the art should understand that the term “a plurality of” as used herein is not intended to be exclusively limited to “more than one”, but is intended to include “one”. It can be considered that FIG. 1 illustrates an example of N=3. Each switching device 103(i) can be used as a “load switch”, that is to say, the switching device 103(i) is controllable (for example, it can be controlled by the controller 101), and is integrated with a driving circuit for driving a power switch and a monitoring circuit used to provide the controller with the status of the switching device and the power supply. In the exemplary embodiment of FIG. 1, for each i from 1 to N (the illustrated embodiment shows an example of N=3), the switching device 103(i) may have a plurality of pins, which may include an IN pin and a BUS pin. The IN pin may be coupled to a system power supply bus SYSPOL for receiving a power supply voltage VIN. The BUS pin may be used to connect to a load. Each switching device 103(i) (i=1, . . . , N) may include a power switch (for example, the power switch 102 shown in FIG. 1), such as a field effect transistor (FET). The power switch may have a first terminal (such as a drain terminal) coupled to the IN pin and a second terminal (such as a source terminal) coupled to the BUS pin. Each switching device 103(i) (i=1, . . . , N) may further include a power switch gate driving circuit to drive a gate of the power switch to control the power switch to perform on and off switching in a controlled manner. When the power switch is turned on, the power switch electrically couples the IN pin to the BUS pin to transmit the power supply voltage VIN received by the IN pin to the load connected to the BUS pin.
When the power switch is turned off, the power switch electrically decouples the IN pin from the BUS pin to stop transmitting the power supply voltage VIN from the IN pin to the load connected to the BUS pin. In the example of FIG. 1, the power supply voltage VIN is illustrated as 5 V, which can provide a power supply current of 8 A. However, those skilled in the art should understand that this is only an example, and the system 100 can also be used to couple other power supplies that provide different voltages and currents to the load. In the example of FIG. 1, each switching device 103(i) (i=1, . . . , N; the illustrated embodiment shows an example of N=3) may further comprise: a FAULT pin that may be configured to indicate system faults (for example: over-temperature fault, over-current fault, over-voltage fault, system short-circuit fault, power switch short-circuit fault, etc.); a GND pin that may be used to couple the switching device 103(i) to a reference ground potential; an EN pin that may be used to enable or disable the switching device 103(i); an IMON pin that may be configured to provide an indication signal indicating an amount of an output current of the switching device 103(i) (for example, an amount of DC current flowing from the input pin IN to the output pin BUS); and an ILIM pin (e.g., a switching current limit setting pin) that may be used to receive or set a current-limiting reference signal. In the example of FIG. 1, the controller 101 may enable or disable the switching device 103(i) based on the status of the switching device 103(i) (i=1, . . . , N; the illustrated embodiment shows an example of N=3). The controller 101 may be configured to receive status indication signals (for example, temperature indication signals, current sensing indication signals, fault indication signals, etc.) from the switching device 103(i) (i=1, . . . , N; the illustrated embodiment shows an example of N=3).
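The load-switch behavior described above, where the BUS pin follows the IN pin only while the internal power switch is on, can be sketched as a minimal software model. The class and attribute names below are hypothetical illustrations, not anything specified by the patent, and the model idealizes away the voltage drop across the power switch:

```python
# Minimal sketch (hypothetical model, not from the patent) of a load switch:
# when the power switch is on, the BUS pin follows the IN pin voltage;
# when off, the BUS pin is decoupled from the supply.

class LoadSwitch:
    def __init__(self, vin: float):
        self.vin = vin          # supply voltage at the IN pin (e.g., 5.0 V)
        self.enabled = False    # state of the internal power switch

    def turn_on(self):
        self.enabled = True

    def turn_off(self):
        self.enabled = False

    @property
    def vbus(self) -> float:
        # Idealized: no drop across the power switch when it is on.
        return self.vin if self.enabled else 0.0

sw = LoadSwitch(vin=5.0)
sw.turn_on()
on_voltage = sw.vbus    # BUS follows IN: 5.0 V
sw.turn_off()
off_voltage = sw.vbus   # decoupled: 0.0 V
```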
The controller 101 may be implemented by any general-purpose microprocessor or other single-chip processor, with integrated input/output pins, configurable firmware, and data acquisition or data processing functions. The microprocessor 101 being “general-purpose” may mean that the microprocessor does not need to be specifically designed for and to work with the switching device 103(i), but may include general-purpose microprocessor or microcontroller elements, such as processors and memory. One of the advantages may be that the switching device 103(i) can be controlled by a general-purpose microcontroller, and there is no need to specially design a dedicated external controller to provide an interface to realize the interaction between the switching device 103(i) and the microcontroller. The switching device 103(i) can be directly controlled by the general-purpose controller 101. In the example of FIG. 1, the controller 101 may be configured to receive the status indication signals provided by the switching device 103(i) (for each i=1, . . . , N; the illustrated embodiment shows an example of N=3) and to control operation of the switching device 103(i) based on these indication signals. More specifically, the IMON pin and the FAULT pin of the switching device 103(i) (for each i=1, . . . , N) may be coupled to the controller 101 to allow the controller 101 to receive the status indication signals from the aforementioned pins and process these indication signals. For instance, the controller 101 may receive a status indication signal (e.g., an output current indication signal or a switching current indication signal) from a pin (for example, the IMON pin) of the switching device 103(i), send the status indication signal to an analog-to-digital converter (ADC) to perform analog-to-digital conversion, and then send a digital equivalent of the status indication signal to a control and logic programming circuit for processing.
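The analog-to-digital step described above, converting an analog status-indication voltage into a digital equivalent for the control and logic programming circuit, can be sketched as follows. The resolution and reference voltage are assumptions for illustration; the patent does not specify them:

```python
def adc_convert(v_analog, v_ref=3.3, bits=12):
    """Quantize an analog status-indication voltage (e.g., from the IMON
    pin) to its digital equivalent, as a generic microcontroller ADC
    would (assumed 12-bit resolution, 3.3 V reference)."""
    full_scale = (1 << bits) - 1                      # 4095 for 12 bits
    code = int(round(v_analog / v_ref * full_scale))
    return max(0, min(code, full_scale))              # clamp to ADC range

zero_code = adc_convert(0.0)   # input at ground -> code 0
full_code = adc_convert(3.3)   # input at reference -> full-scale code 4095
mid_code = adc_convert(1.65)   # mid-scale input -> roughly half of full scale
```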
The controller 101 may also be configured and coupled to each switching device 103(i) to detect the power supply voltage VIN and an output voltage VBUS(i) of the switching device 103(i) in a similar manner, wherein i=1, . . . , N and the illustrated embodiment shows an example of N=3. In the example of FIG. 1, for each i from 1 to N (the illustrated embodiment shows an example of N=3), the FAULT pin of the switching device 103(i) may output a status indication signal in digital form, which may be received by a digital input pin of the controller 101 and then sent to the control and logic programming circuit for processing. In such an embodiment, the status indication signal output by the FAULT pin of the switching device 103(i) is a digital signal, and thus can be processed by the control and logic programming circuit without analog-to-digital conversion. In the example of FIG. 1, for each i from 1 to N (the illustrated embodiment shows an example of N=3), the controller 101 may have a FLT(i) pin that may be coupled to the FAULT pin of the switching device 103(i). The fault indication signal from the FAULT pin of the switching device 103(i) may indicate whether the switching device 103(i) is operating normally or is malfunctioning. The controller 101 receives and processes the fault indication signal from the FAULT pin of the switching device 103(i) to control the switching device 103(i). For example, when the fault indication signal indicates that the switching device 103(i) has a fault, the controller 101 may control the switching device 103(i) to enter a pull-down mode or disable the switching device 103(i). In the example of FIG. 1, for each i from 1 to N (the illustrated embodiment shows an example of N=3), the controller 101 may have an EN(i) pin that may be coupled to the EN pin of the switching device 103(i). The controller 101 may be configured to control the switching device 103(i) to be enabled by providing an enable signal to the EN pin of the switching device 103(i).
In one embodiment, when the enable signal is active at the EN pin of the switching device 103(i), the switching device 103(i) is enabled and can be operated to couple the input power supply VIN to the load. When the enable signal is not active at the EN pin of the switching device 103(i), the switching device 103(i) is disabled to disconnect the input power supply VIN from the load. In one embodiment, when the enable signal remains at a predetermined voltage level for a predetermined duration, the switching device 103(i) returns to the pull-down mode, in which the switching device 103(i) pulls the output voltage VBUS(i) down (e.g., to the reference ground potential). In the example of FIG. 1, for each i from 1 to N (the illustrated embodiment shows an example of N=3; this is expandable to the case where i traverses the integers from 1 to N, where N is a positive integer), the control and logic programming circuit may be configured to acquire or determine a current limit value of the switching device 103(i), including a soft-start current limit value and a current limit value during normal operation after startup (hereinafter referred to as an operation current limit value and labeled ICC). Generally, the soft-start current limit value may be set higher than the operation current limit value ICC. For example, in an embodiment, the soft-start current limit value may be twice the operation current limit value ICC. The following mainly discusses how to set and adjust the operation current limit value ICC. The control and logic programming circuit may further be configured to adjust the operation current limit value ICC at any time according to detected values of the power supply voltage VIN and the output voltage VBUS(i) and/or based on system load regulation requirements.
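The controller's fault-handling path described above, receiving each device's FAULT indication and driving the corresponding EN pin to disable a faulty device, might be sketched as follows. The function name and signal encoding (True = fault reported, True = device enabled) are hypothetical illustrations, not taken from the patent:

```python
def handle_faults(fault_flags):
    """Given per-device FAULT pin readings (True = fault reported),
    return the enable signal to drive each device's EN pin: a faulty
    device is disabled, a healthy one stays enabled."""
    return [not fault for fault in fault_flags]

# Device 2 reports a fault (e.g., over-temperature), so only it is disabled
# while devices 1 and 3 keep supplying their loads.
enables = handle_faults([False, True, False])
```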
The operation current limit value ICC may be converted into an analog current limit reference signal by, for example, a digital-to-analog converter, or converted into a digital current limit reference signal and then converted into an analog current limit reference signal by a discrete component. The controller 101 may output the current limit reference signal (in analog or digital form) at its ILIM(i) pin. The switching device 103(i) may be configured to correspondingly receive the current limit reference signal provided by the ILIM(i) pin of the controller 101 at the ILIM pin of the switching device 103(i). Optionally and alternatively, the operation current limit value ICC may be set or adjusted by coupling, for example, a resistive element having a resistance value of Rlim(i) to the ILIM pin of the switching device 103(i), as shown in the example of FIG. 6. In the example of FIG. 1, for each i from 1 to N (the illustrated embodiment shows an example of N=3), the controller 101 may be configured to receive a supply current indication signal from the switching device 103(i). The supply current indication signal may be a current sensing signal IS output by the IMON pin of the switching device 103(i). In an embodiment, the current sensing signal IS may be a current signal proportional to the output current of the switching device 103(i) or proportional to a switching current flowing through the power switch (e.g., the power switch 102 shown in FIG. 1) in the switching device 103(i), and may be used for current sharing and over-current protection control. The output current or the switching current flowing through the power switch may be sensed by coupling a current detection circuit to the power switch (e.g., the power switch 102 shown in FIG. 1) in the switching device 103(i) to provide the current sensing signal IS to the IMON pin.
The IMON pin of each switching device 103(i) may also be used to receive a system current indication signal indicative of a real-time total output current of the system 100 (or indicative of a sum of the switching currents flowing through the power switches in the switching devices 103(i), e.g., i=1, 2, 3 in the example of FIG. 1). In one embodiment, the system current indication signal may be a current signal ISUM, and the current signal ISUM may be a sum of the current sensing signals IS of all the switching devices 103(i) in the system 100. In another embodiment, the system current indication signal may be a voltage signal VMON, and the voltage signal VMON may be obtained by converting the sum of the current sensing signals IS of all the switching devices 103(i) in the system 100 into a voltage signal. For example, in the example of FIG. 1, the IMON pin of each switching device 103(i) may be coupled to a resistive device 104, which has a resistance value RMON, and the current sensing signals IS of the switching devices 103(i) applied to the resistive device 104 generate a voltage signal VMON that represents the total real-time output current of the system or the sum of the switching currents flowing through the power switches of all the switching devices 103(i) in the system 100. Hereinafter, the sum of the switching currents flowing through the power switches of all the switching devices 103(i) in the system 100 is referred to as a total switching current. In the example of FIG. 1, the voltage signal VMON at the IMON pin represents the total output current of all the three switching devices 103(i), i=1, 2, 3, or the total switching current of all the three switching devices 103(i), i=1, 2, 3, i.e., VMON=3*IS*RMON.
Those skilled in the art should understand that, in an example extended to the system 100 comprising N switching devices 103(i), where i traverses the integers from 1 to N and N is a positive integer, the voltage signal VMON at the IMON pin may be indicative of a total output current of the N switching devices 103(i), i=1, . . . , N, or a total switching current (i.e., a sum of the switching currents flowing through the power switches of all the N switching devices 103(i)), i.e., in this situation, VMON=N*IS*RMON. The current sensing signal IS can provide the controller 101 with real-time output current information of each switching device 103(i) or current monitoring information of the switching current flowing through the power switch in each switching device 103(i), and the voltage signal VMON can provide real-time monitoring information of the total output current or total switching current of a plurality of (for example, N) switching devices 103(i) in the system 100. In the exemplary embodiment of FIG. 1, for each i from 1 to N (the illustrated embodiment shows an example of N=3), the switching device 103(i) may further have a SYSLIM pin that may be configured to receive or set a system total current limit value ISYSLIM of the system 100 comprising the switching devices {103(i), i=1, . . . , N}. A total system current ISYS of the system 100 may refer to the total current provided to the system 100 by the system power supply bus SYSPOL, and the system total current limit value ISYSLIM may be set according to a current supply capacity of the system power supply bus SYSPOL. For example, if the current supply capacity of the system power supply bus SYSPOL can reach 8 A, the system total current limit value ISYSLIM of the system 100 may be set to 8 A. For each i from 1 to N (the illustrated embodiment shows an example of N=3), the SYSLIM pin of each switching device 103(i) may be coupled to an external circuit or component.
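The relationship VMON = N*IS*RMON given above can be checked numerically. The sense-current and resistor values below are arbitrary assumptions for illustration only; the patent does not specify them:

```python
def vmon(sense_currents, r_mon):
    """Voltage developed across the shared IMON resistor (RMON) by the
    summed current-sense signals IS of all switching devices:
    VMON = sum(IS) * RMON, which reduces to N*IS*RMON when all N
    devices source the same sense current."""
    return sum(sense_currents) * r_mon

# Three devices each sourcing an identical 100 uA sense current into a
# 10 kOhm RMON resistor (hypothetical values):
# VMON = 3 * 100e-6 * 10e3 = 3.0 V.
v = vmon([100e-6, 100e-6, 100e-6], r_mon=10e3)
```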
For example, in one embodiment, the SYSLIM pin of each switching device 103(i) may be coupled to the controller 101 (for example, coupled to a SYSLIM pin of the controller 101) to receive a system current limit signal to set the system total current limit value ISYSLIM. In an embodiment, the system current limit signal may be a voltage signal. In one embodiment, the system current limit signal may be a current signal. In one embodiment, the SYSLIM pin of the switching device 103(i) may not receive the system current limit signal from the controller 101, but may simply be coupled to a resistive device 105 for setting the system total current limit value ISYSLIM of the system 100, as shown in the example illustrated in FIG. 5. According to an embodiment of the present invention, each switching device 103(i) may be configured to adaptively adjust the operation current limit value ICC of the switching device 103(i) based on the voltage signal VMON at its IMON pin and the system total current limit value ISYSLIM received or set at the SYSLIM pin. In an alternative embodiment, the ILIM pin of the switching device 103(i) may not receive the current limit reference signal from the controller 101. The switching device 103(i) may be configured to adaptively adjust the operation current limit value ICC of the switching device 103(i) to change in the same direction as the system total current limit value ISYSLIM, namely: the operation current limit value ICC of the switching device 103(i) increases as the system total current limit value ISYSLIM increases, and decreases as the system total current limit value ISYSLIM decreases. The switching device 103(i) may further be configured to adaptively adjust the operation current limit value ICC of the switching device 103(i) to change in an opposite direction to the system current indication signal received at the IMON pin (e.g.,
the system current indication signal may, in an example, comprise the current signal ISUM or, in another example as shown in FIG. 1, comprise the voltage signal VMON), namely: the operation current limit value ICC of the switching device 103(i) increases as the system current indication signal decreases, and decreases as the system current indication signal increases. In another alternative embodiment, the current limit reference signal received by the ILIM pin of the switching device 103(i) may be used to set or determine a maximum operation current limit value ICCmax of the switching device 103(i); for example, the maximum operation current limit value ICCmax may be set based on a rated operation current of the switching device 103(i) or a maximum current allowed to flow through the switching device 103(i) without the switching device 103(i) being damaged. When the operation current limit value ICC adaptively adjusted by the switching device 103(i) reaches the maximum operation current limit value ICCmax, the switching device 103(i) may further be configured to stop adaptively increasing the operation current limit value ICC based on the voltage signal VMON at the IMON pin and the system total current limit value ISYSLIM at the SYSLIM pin.
That is, in this situation (i.e., when the operation current limit value ICC adaptively adjusted by the switching device 103(i) reaches the maximum operation current limit value ICCmax), if the system total current limit value ISYSLIM continues to increase and/or the voltage signal VMON at the IMON pin continues to decrease, the switching device 103(i) no longer increases the operation current limit value ICC but keeps it at the maximum operation current limit value ICCmax. However, the switching device 103(i) may still adaptively adjust the operation current limit value ICC to decrease if the system total current limit value ISYSLIM decreases and/or the voltage signal VMON at the IMON pin increases. In the exemplary embodiment of FIG. 1, for each i from 1 to N (the illustrated embodiment shows an example of N=3), the switching device 103(i) may be configured to receive/obtain the system current limit signal at its SYSLIM pin. The system current limit signal may be a voltage signal in one embodiment and may have a limiting voltage value VSYSLIM indicative of the system total current limit value ISYSLIM. The switching device 103(i) may comprise an adaptive adjustment module configured to adaptively adjust a current-limiting threshold voltage Vth, which is indicative of the operation current limit value ICC of the switching device 103(i), to change in the same direction as the limiting voltage value VSYSLIM, namely: the current-limiting threshold voltage Vth increases as the limiting voltage value VSYSLIM increases, and decreases as the limiting voltage value VSYSLIM decreases.
The switching device 103(i) may further be configured to adaptively adjust the current-limiting threshold voltage Vth, for example by the adaptive adjustment module, to change in the opposite direction to the voltage signal VMON at the IMON pin, that is: the current-limiting threshold voltage Vth decreases when the voltage signal VMON increases, and increases when the voltage signal VMON decreases. In this fashion, the switching device 103(i) can realize adaptive adjustment of the operation current limit value ICC to change in the same direction as the system total current limit value ISYSLIM and in the opposite direction to the voltage signal VMON at the IMON pin. In one embodiment, the current-limiting threshold voltage Vth may be proportional to the operation current limit value ICC of the switching device 103(i) with a predetermined first coefficient K1, namely: Vth=K1*ICC, wherein the predetermined first coefficient K1 is greater than zero. In one embodiment, the switching device 103(i) may be configured to adaptively adjust the current-limiting threshold voltage Vth to satisfy: Vth=K2*VSYSLIM−K3*VMON, wherein K2 is a predetermined second coefficient greater than zero, and K3 is a predetermined third coefficient greater than zero. In one embodiment, as shown in the exemplary system 200 illustrated in FIG. 2, the difference from the example shown in FIG. 1 is that the system current limit signal received at the SYSLIM pin of each switching device 103(i) may be a current signal having the system total current limit value ISYSLIM. In this situation, in one embodiment, a resistive device 105 having a resistance value of Rsyslim may be coupled between the SYSLIM pin and the reference ground GND to convert the system current limit signal into a voltage signal, which may have a limiting voltage value VSYSLIM indicative of the system total current limit value ISYSLIM, wherein VSYSLIM=ISYSLIM*Rsyslim.
Similar to the embodiment illustrated in FIG. 1, according to the embodiment illustrated in FIG. 2, for each i from 1 to N (the illustrated embodiment shows an example of N=3), each switching device 103(i) may be configured to adaptively adjust the current-limiting threshold voltage Vth, for example by the adaptive adjustment module, to change in the same direction as the limiting voltage value VSYSLIM, namely: the current-limiting threshold voltage Vth increases as the limiting voltage value VSYSLIM increases, and decreases as the limiting voltage value VSYSLIM decreases. The switching device 103(i) may further be configured to adaptively adjust the current-limiting threshold voltage Vth, for example by the adaptive adjustment module, to change inversely with the voltage signal VMON at the IMON pin, that is: the current-limiting threshold voltage Vth decreases when the voltage signal VMON increases, and increases when the voltage signal VMON decreases. In this fashion, the switching device 103(i) can realize adaptive adjustment of the operation current limit value ICC to change in the same direction as the system total current limit value ISYSLIM and in the opposite direction to the voltage signal VMON at the IMON pin. In one embodiment, the current-limiting threshold voltage Vth may be proportional to the operation current limit value ICC of the switching device 103(i) with a predetermined first coefficient K1, namely: Vth=K1*ICC, wherein the predetermined first coefficient K1 is greater than zero. In one embodiment, the switching device 103(i) may be configured to adaptively adjust the current-limiting threshold voltage Vth to satisfy: Vth=K2*VSYSLIM−K3*VMON, wherein K2 is a predetermined second coefficient greater than zero, and K3 is a predetermined third coefficient greater than zero.
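The adaptive adjustment described above reduces to two relations, Vth=K2*VSYSLIM−K3*VMON and Vth=K1*ICC, with the result held at ICCmax once the maximum is reached. The following is a minimal Python sketch of that arithmetic; the function name, default coefficient values, and the clamp at zero are illustrative assumptions, not part of the disclosure.

```python
# Hedged sketch of the adaptive current-limit adjustment: Vth tracks
# VSYSLIM (same direction) and VMON (opposite direction), and ICC is
# recovered through Vth = K1 * ICC. Coefficient defaults are invented.

def adjust_current_limit(v_syslim, v_mon, k1=1.0, k2=1.0, k3=1.0,
                         icc_max=None):
    """Return the operation current limit ICC of one switching device.

    Vth = K2*VSYSLIM - K3*VMON  and  Vth = K1*ICC, so ICC = Vth/K1.
    If icc_max is given, ICC may not rise above it, mirroring the
    behavior where adjustment stops increasing at ICCmax but may
    still decrease.
    """
    v_th = k2 * v_syslim - k3 * v_mon
    icc = v_th / k1
    if icc_max is not None:
        icc = min(icc, icc_max)   # never adjust past the maximum limit
    return max(icc, 0.0)          # a negative current limit is meaningless

print(adjust_current_limit(2.0, 0.5))               # 1.5
print(adjust_current_limit(5.0, 0.0, icc_max=3.0))  # 3.0 (clamped)
```

The same-direction/opposite-direction behavior falls out of the signs of K2 and K3, both positive as stated above.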
In an alternative embodiment, as shown in the exemplary system 300 illustrated in FIG. 3, the system current limit signal received at the SYSLIM pin of each switching device 103(i) may still be a current signal. The difference from the example of FIG. 2 is that, in the example of FIG. 3, the resistive device 105 may not be coupled to the SYSLIM pin of each switching device 103(i); the system current limit signal received by the SYSLIM pin of each switching device 103(i) may have the system total current limit value ISYSLIM, and consequently the controller 101 needs to provide, from a single SYSLIM pin of the controller 101, a current signal having a current value of N times the system total current limit value ISYSLIM (i.e., N*ISYSLIM) to be equally divided and distributed to the N switching devices 103(i), with N=3 in the example illustrated in FIG. 3. Then each switching device 103(i) may adjust its operation current limit value ICC to change in the same direction as the system total current limit value ISYSLIM and in the opposite direction to the voltage signal VMON at the IMON pin through, for example, an adaptive adjustment module. In one embodiment, the switching device 103(i) may be configured to adaptively adjust the operation current limit value ICC to satisfy: ICC=K4*ISYSLIM−K5*VMON, where K4 is a predetermined fourth coefficient greater than zero, and K5 is a predetermined fifth coefficient greater than zero. In one embodiment, as shown in the exemplary system 400 illustrated in FIG. 4, as an alternative example with slight modification to the example of FIG. 3, when the system current limit signal is a current signal, the controller 101 may have N pins SYSLIM(i) respectively connected to the SYSLIM pins of the N switching devices 103(i), where i traverses the integers from 1 to N (FIG. 4 illustrates an example of N=3).
In this situation, for each i from 1 to N, the SYSLIM(i) pin of the controller 101 may provide a system current limit signal having the system total current limit value ISYSLIM to the SYSLIM pin of the switching device 103(i). Then each switching device 103(i) may adjust its operation current limit value ICC to change in the same direction as the system total current limit value ISYSLIM and in the opposite direction to the voltage signal VMON at the IMON pin through, for example, the adaptive adjustment module. In one embodiment, the switching device 103(i) may be configured to adaptively adjust the operation current limit value ICC to satisfy: ICC=K4*ISYSLIM−K5*VMON, where K4 is a predetermined fourth coefficient greater than zero, and K5 is a predetermined fifth coefficient greater than zero. In one embodiment, as shown in the exemplary system 500 illustrated in FIG. 5, the difference from the examples of FIG. 1 to FIG. 4 is that, in the example of FIG. 5, for each i from 1 to N (the illustrated embodiment shows an example of N=3), the SYSLIM pin of each switching device 103(i) may not receive the system current limit signal from the controller 101, but may simply be coupled to a resistive device 105 having a resistance value of Rsyslim for setting the system total current limit value ISYSLIM. In one embodiment, an internal current source with a first predetermined current value Iint1 (for example, 20 μA in one example) may be used to drive the SYSLIM pin of the switching device 103(i); then Iint1*Rsyslim defines the limiting voltage value VSYSLIM (i.e., VSYSLIM=Iint1*Rsyslim in this situation), which is indicative of the system total current limit value ISYSLIM. The internal current source is "internal" in that it is integrated in the switching device 103(i).
In practical applications, the resistance value Rsyslim of the resistive device 105 may be selected according to the system total current limit value ISYSLIM required, for example, by application specifications. The resistance value Rsyslim of the resistive device 105 should satisfy: Rsyslim=RMON*ISYSLIM*ACS/Iint1, wherein ACS is the current sensing gain of the current detection circuit of each switching device 103(i). For example, if Iint1=20 μA, ACS=10 μA/A, RMON=20 kΩ, and the system total current limit value ISYSLIM needs to be set to 7 A, then Rsyslim should be selected as 70 kΩ. Similar to the embodiment illustrated in FIG. 1, in the embodiment illustrated in FIG. 5, for each i from 1 to N (the illustrated embodiment shows an example of N=3), each switching device 103(i) may be configured to adjust the current-limiting threshold voltage Vth, which is indicative of the operation current limit value ICC of the switching device 103(i), to change in the same direction as the limiting voltage value VSYSLIM and in the opposite direction to the voltage signal VMON at the IMON pin, for example, through the adaptive adjustment module. In one embodiment, the current-limiting threshold voltage Vth may be proportional to the operation current limit value ICC of the switching device 103(i) with a predetermined first coefficient K1, namely: Vth=K1*ICC, wherein the predetermined first coefficient K1 is greater than zero. In one embodiment, the switching device 103(i) may be configured to adaptively adjust the current-limiting threshold voltage Vth to satisfy: Vth=K2*VSYSLIM−K3*VMON, wherein K2 is a predetermined second coefficient greater than zero, and K3 is a predetermined third coefficient greater than zero.
In one embodiment, as shown in the exemplary system 600 illustrated in FIG. 6, the difference from the example of FIG. 5 is that, in the example of FIG. 6, for each i from 1 to N (the illustrated embodiment shows an example of N=3), the ILIM pin of each switching device 103(i) may not receive the current limit reference signal from the ILIM(i) pin of the controller 101, but may instead be coupled to an ith resistive element having a resistance value Rlim(i) to set or adjust the maximum operation current limit value ICCmax of the switching device 103(i). Although the example shown in FIG. 6 is based on the circuit architecture of FIG. 5, modified in the connection of the ILIM pin of the switching device 103(i), those skilled in the art should understand that this modification may be applied to other embodiments of the present invention. In one embodiment, each switching device 103(i) may comprise an internal current source having a second predetermined current value Iint2 (for example, 10 μA in one example) configured to drive the ILIM pin of the switching device 103(i); then Iint2*Rlim(i) defines a maximum operating current limit voltage value VLIM(i) (i.e., VLIM(i)=Iint2*Rlim(i) in this situation) that represents the maximum operation current limit value ICCmax of the switching device 103(i).
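The two programming-resistor relations (VSYSLIM=Iint1*Rsyslim and VLIM(i)=Iint2*Rlim(i)) can be turned into a small resistor-selection helper. The sketch below is illustrative only: select_rsyslim reproduces the worked 70 kΩ example, while select_rlim assumes VLIM(i) maps to ICCmax through the same Vth=K1*ICC scaling, which the text does not state explicitly.

```python
# Illustrative resistor-selection helpers; numeric values below mirror
# the worked example (Iint1 = 20 uA, ACS = 10 uA/A, RMON = 20 kOhm,
# ISYSLIM = 7 A -> Rsyslim = 70 kOhm).

def select_rsyslim(i_syslim, r_mon, a_cs, i_int1):
    """Rsyslim = RMON * ISYSLIM * ACS / Iint1 (so Iint1*Rsyslim = VSYSLIM)."""
    return r_mon * i_syslim * a_cs / i_int1

def select_rlim(icc_max, k1, i_int2):
    """Rlim(i) = VLIM(i) / Iint2, assuming VLIM(i) = K1*ICCmax (our assumption)."""
    return k1 * icc_max / i_int2

r = select_rsyslim(i_syslim=7.0, r_mon=20e3, a_cs=10e-6, i_int1=20e-6)
print(round(r))  # 70000 ohms, i.e. the 70 kOhm of the worked example
```
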
In accordance with an exemplary embodiment of the present invention, referring to the examples illustrated in FIG. 1 to FIG. 6, for each i from 1 to N (the illustrated embodiments show an example of N=3), the switching device 103(i) may further comprise a priority pin PRI, which may be configured or used to set a turn-off sequence/turn-off priority of the switching device 103(i) when an over-current occurs in the system (e.g., 100, 200, 300, 400, 500, or 600), in other words when the total output current of the N switching devices {103(i), i=1, …, N} or the sum of the switching currents flowing through the power switches of the N switching devices {103(i), i=1, …, N} reaches the system total current limit value ISYSLIM. For example, in one embodiment, the priority pin PRI of the switching device (e.g., the switching device 103(1) in the examples in FIGS. 1 to 6) which is to be turned off first when an over-current occurs in the system may be coupled to a first predetermined potential P(1) to set that switching device to have a first turn-off priority. The switching device which is set to have the first turn-off priority may also be referred to as the first turn-off priority switching device in the following. If an over-current occurs in the system, the switching device with the first turn-off priority (that is, the switching device whose PRI pin is coupled to the first predetermined potential P(1), e.g., the switching device 103(1) shown in the examples in FIGS. 1 to 6) is turned off first, and at the same time, the switching device with the first turn-off priority is further configured to set its FAULT pin at a second predetermined potential P(2) in response to its turn-off.
The second predetermined potential P(2) may be equal to or different from the first predetermined potential P(1). The exemplary embodiments in FIGS. 1 to 6 show the case where both the first predetermined potential P(1) and the second predetermined potential P(2) are set at the reference ground potential. The FAULT pin of the switching device with the first turn-off priority (e.g., the switching device 103(1) illustrated in the embodiments of FIGS. 1 to 6) may be coupled to the PRI pin of the switching device (shown as the switching device 103(2) in the examples of FIGS. 1 to 6 for ease of understanding) which is to be set to have a second turn-off priority, so as to transmit the second predetermined potential P(2) to the PRI pin of that switching device. The switching device which is set to have the second turn-off priority may be referred to as the second turn-off priority switching device in the following. In other words, after the switching device having the first turn-off priority (for example, the switching device 103(1)) is turned off, its turn-off priority is passed to the switching device having the second turn-off priority (for example, the switching device 103(2)). It can also be understood that a turn-off priority handshake between the first turn-off priority switching device and the second turn-off priority switching device may be established by coupling the FAULT pin of the first turn-off priority switching device to the PRI pin of the second turn-off priority switching device, so that the turn-off priority is passed to the second turn-off priority switching device when the first turn-off priority switching device is turned off.
Then, after the first turn-off priority switching device (for example, the switching device 103(1)) is turned off, if the over-current remains in the system or another over-current occurs (i.e., if the total output current of those switching devices among the N switching devices {103(i), i=1, …, N} that are not turned off, or the sum of the switching currents flowing through their power switches, reaches the system total current limit value ISYSLIM), then the switching device whose PRI pin has the second predetermined potential P(2) (that is, the switching device which is set to the second turn-off priority, for example, the switching device 103(2) in the examples of FIG. 1 to FIG. 6) will be turned off. That is to say, the turn-off sequence of the switching device having the second turn-off priority is one later than that of the switching device having the first turn-off priority, or its turn-off priority is one stage lower. The second turn-off priority switching device (for example, the switching device 103(2)) may further be configured to set its FAULT pin (that is, the fault indication pin) at a third predetermined potential P(3) in response to its turn-off. The third predetermined potential P(3) may be equal to or different from the first predetermined potential P(1) and/or the second predetermined potential P(2). The exemplary embodiments in FIGS. 1 to 6 show the case where the first predetermined potential P(1), the second predetermined potential P(2), and the third predetermined potential P(3) are all set to the reference ground potential.
The FAULT pin of the switching device with the second turn-off priority (e.g., the switching device 103(2) illustrated in the embodiments of FIGS. 1 to 6) may be coupled to the PRI pin of the switching device (shown as the switching device 103(3) in the examples of FIGS. 1 to 6 for ease of understanding) which is to be set to have a third turn-off priority, so as to transmit the third predetermined potential P(3) to the PRI pin of that switching device. The switching device which is set to have the third turn-off priority may also be referred to as the third turn-off priority switching device in the following. In other words, after the switching device having the second turn-off priority (for example, the switching device 103(2)) is turned off, its turn-off priority is passed to the switching device having the third turn-off priority (for example, the switching device 103(3)). It can also be understood that a turn-off priority handshake between the second turn-off priority switching device and the third turn-off priority switching device may be established by coupling the FAULT pin of the second turn-off priority switching device to the PRI pin of the third turn-off priority switching device, so that the turn-off priority is passed to the third turn-off priority switching device when the second turn-off priority switching device is turned off. By analogy, those skilled in the art should understand that this can be extended to the case where the system includes N switching devices 103(i), where i traverses the integers from 1 to N, and N is a positive integer.
The PRI pin of the switching device which is to be turned off first (referred to as the first turn-off priority switching device) when an over-current occurs in the system may be coupled to a first predetermined potential P(1) (e.g., the first predetermined potential P(1) is exemplarily set at the reference ground potential in the examples from FIG. 1 to FIG. 6). If an over-current occurs in the system, the switching device with the first turn-off priority is turned off first, and it sets its FAULT pin (that is, the fault indication pin) at a second predetermined potential P(2) in response to its turn-off. The second predetermined potential P(2) may be the same as or different from the first predetermined potential P(1); for example, the first predetermined potential P(1) and the second predetermined potential P(2) are both set to the reference ground potential in the examples from FIG. 1 to FIG. 6. If N is greater than 1, for each j=2, …, N, the PRI pin of the switching device which is to be turned off jth (referred to as the jth turn-off priority switching device) may be coupled to the FAULT pin of the switching device that is turned off (j−1)th (referred to as the (j−1)th turn-off priority switching device), and the jth turn-off priority switching device may be configured to set its FAULT pin at a (j+1)th predetermined potential P(j+1) in response to its turn-off (i.e., at the moment when the jth turn-off priority switching device is turned off). Therefore, the system may include N predetermined potentials {P(j), j=1, …, N}, and the N predetermined potentials P(j) (j traverses from 1 to N) may be equal to each other or different from each other.
It can also be understood that, for each j from 1 to (N−1), a turn-off priority handshake between the jth turn-off priority switching device and the (j+1)th turn-off priority switching device may be established by coupling the FAULT pin of the jth turn-off priority switching device to the PRI pin of the (j+1)th turn-off priority switching device, so that the turn-off priority is passed to the (j+1)th turn-off priority switching device when the jth turn-off priority switching device is turned off. The present disclosure provides single-chip integrated circuit switching devices and systems including monolithic (or single-chip) integrated circuit switching devices for current limiting and turn-off priority setting when an over-current occurs in the system. Although some embodiments of the present invention are described in detail, it should be understood that these embodiments are only used for exemplary description and are not used to limit the scope of the present invention. Various modifications may be made without departing from the technology. The advantages of the various embodiments of the present invention are not confined to those described above. These and other advantages of the various embodiments of the present invention will become more apparent upon reading the whole detailed description and studying the various figures of the drawings. Many of the elements of one embodiment may be combined with other embodiments in addition to or in lieu of the elements of the other embodiments. Accordingly, the present invention is not limited except as by the appended claims.
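The daisy-chained turn-off priority (the FAULT pin of the jth device feeding the PRI pin of the (j+1)th device) behaves like a token passed down a list. The toy model below is our abstraction of that behavior, not the devices' internal logic; the class and attribute names are invented.

```python
# Toy model of the turn-off priority daisy chain: on each over-current
# event, the device whose PRI pin is active turns off, asserts FAULT,
# and thereby activates the PRI pin of the next device in the chain.

class SwitchingDevice:
    def __init__(self, name):
        self.name = name
        self.pri_active = False  # PRI pin at the predetermined potential
        self.fault = False       # FAULT pin asserted on turn-off
        self.on = True

    def turn_off(self):
        self.on = False
        self.fault = True

def overcurrent_event(chain):
    """Turn off the device currently holding the turn-off priority."""
    for i, dev in enumerate(chain):
        if dev.on and dev.pri_active:
            dev.turn_off()
            if i + 1 < len(chain):
                chain[i + 1].pri_active = True  # FAULT -> next device's PRI
            return dev.name
    return None  # every device already turned off

chain = [SwitchingDevice("103(1)"), SwitchingDevice("103(2)"),
         SwitchingDevice("103(3)")]
chain[0].pri_active = True          # 103(1) has the first turn-off priority
print(overcurrent_event(chain))     # 103(1)
print(overcurrent_event(chain))     # 103(2)
print(overcurrent_event(chain))     # 103(3)
```

Three successive over-current events shed the devices strictly in priority order, which is the intended effect of the FAULT-to-PRI coupling.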
11860598

DESCRIPTION OF EMBODIMENTS

Hereinafter, embodiments will be described in detail with reference to the Drawings. It should be noted that each of the embodiments described below shows a general or specific example. The numerical values, shapes, materials, structural components, the arrangement and connection of the structural components, steps, the processing order of the steps, etc., shown in the following embodiments are mere examples, and therefore do not limit the scope of the present invention. Therefore, among the structural components in the following embodiments, those not recited in any one of the independent claims are described as optional structural components. It should be noted that the respective figures are schematic diagrams and are not necessarily precise illustrations. Furthermore, in the figures, structural components that are substantially the same are assigned the same reference sign, and overlapping description may be omitted or simplified.

Embodiment

Configuration of Control System

First, the configuration of a control system according to an embodiment will be described. FIG. 1 is a block diagram illustrating the functional configuration of the control system according to the embodiment. Control system 100 according to the embodiment is a system capable of controlling multiple devices 20 according to voice uttered by user U. Control system 100 is thus a voice-based device control system. As shown in FIG. 1, control system 100 includes control apparatus 10, devices 20, electricity meter 30, voice recognition server 40, device control server 50, and mobile terminal 60. Each of these components will be described below. First, control apparatus 10 will be described. Control apparatus 10 is, for example, a home energy management system (HEMS) controller having energy management functions. Located in building 80, control apparatus 10 manages the amount of electricity used (in other words, the amount of power consumed) by devices 20 installed in building 80.
Control apparatus 10 also controls, according to the user's voice, devices 20 installed in building 80 or in the grounds of building 80. Control apparatus 10 is not limited to a HEMS controller but may be some other home controller or gateway apparatus without energy management functions. Specifically, control apparatus 10 includes operation input interface 11, first communication unit 12, second communication unit 13, controller 14, storage 15, display 16, voice capturer 17, and voice output unit 18. Operation input interface 11 is a user interface device that receives hand-inputted operation inputs (hereinafter also referred to simply as operation inputs) of the user. Operation input interface 11 is implemented as a touch panel, for example, but may be implemented as hardware keys such as push buttons. First communication unit 12 is a communication module (communication circuit) for control apparatus 10 to communicate with components such as devices 20 and electricity meter 30 over a local area communication network. First communication unit 12 is a wireless communication circuit for performing wireless communication, for example, but may be a wired communication circuit for performing wired communication. First communication unit 12 communicates according to a communication standard such as, for example but not limited to, ECHONET Lite®. First communication unit 12 may communicate according to different communication standards for different devices. Second communication unit 13 is a communication circuit for control apparatus 10 to communicate with components such as voice recognition server 40 and device control server 50 over wide area communication network 70 such as the Internet. Second communication unit 13 is a wireless communication circuit for performing wireless communication, for example, but may be a wired communication circuit for performing wired communication. Second communication unit 13 may communicate according to any communication standard.
Controller 14 performs control related to control apparatus 10. Controller 14 is implemented as a microcomputer, for example, but may be implemented as a processor or a special-purpose circuit. Storage 15 is a storage device that stores items such as control programs executed by controller 14. Storage 15 is implemented as semiconductor memory, for example. Display 16 displays images based on control performed by controller 14. Display 16 is implemented as a display panel, for example a liquid crystal panel or an organic electroluminescent (EL) panel. Voice capturer 17 captures the user's voice. Voice capturer 17 is implemented as a microphone, for example. Voice output unit 18 outputs voice and/or sound based on control performed by controller 14. Voice output unit 18 is implemented as a speaker, for example. Devices 20 will now be described. Devices 20 are installed in building 80. Devices 20 are controlled according to control commands sent by first communication unit 12 in control apparatus 10. Devices 20 are target devices to be controlled in control system 100. Devices 20 include air conditioner 21, lighting 22, heater 23, electric shutter 24, hot-water supply system 25, electric lock 26, cooking device 27 (such as a range hood or an IH cooking heater), and exterior lighting 28. Electricity meter 30 will now be described. Electricity meter 30 measures electricity usage in building 80. The electricity usage measured by electricity meter 30 is managed by control apparatus 10. Electricity meter 30 is a distribution switchboard having the function of measuring the electricity usage for each branch circuit in building 80, but may be what is called a smart meter. Voice recognition server 40 will now be described. Voice recognition server 40 is a cloud server that performs voice recognition processing for a voice signal received from control apparatus 10. A provider of a voice recognition service uses voice recognition server 40 to provide the voice recognition service.
For example, voice recognition server 40 converts a voice signal received from control apparatus 10 into text information and sends the text information to device control server 50. Device control server 50 will now be described. Device control server 50 is a cloud server that generates control commands based on the text information received from the voice recognition server and sends the generated control commands to control apparatus 10. The control commands are received by devices 20 via control apparatus 10. Device control server 50 includes communication unit 51, information processor 52, and storage 53. Communication unit 51 is a communication module (communication circuit) for device control server 50 to communicate with components such as control apparatus 10, voice recognition server 40, and mobile terminal 60 over wide area communication network 70 such as the Internet. Communication unit 51 communicates through wire, for example, but may communicate wirelessly. Communication unit 51 may communicate according to any communication standard. Information processor 52 performs information processing related to control on devices 20. Information processor 52 is implemented as a microcomputer, for example, but may be implemented as a processor. Information processor 52 includes registration unit 54 and executor 55. Storage 53 is a storage device that stores sending history information on control commands, control programs executed by information processor 52, and other items. Storage 53 is implemented as a hard disk drive (HDD), for example. Mobile terminal 60 will now be described. Mobile terminal 60 may specifically be a smartphone or a tablet terminal. Mobile terminal 60 may be carried by a user living in building 80 or by a user living away from building 80.

Basic Operations

Basic operations of control system 100 will now be described. FIG. 2 is a sequence diagram of the basic operations of control system 100.
In response to the user uttering a voice intended to control a device, voice capturer 17 in control apparatus 10 captures the voice (S11). Controller 14 causes second communication unit 13 to send a voice signal of the captured voice to voice recognition server 40 (S12). Voice recognition server 40 receives the voice signal and performs voice recognition processing (S13). Specifically, voice recognition server 40 converts the received voice signal into text information and sends the text information to device control server 50 (S14). Communication unit 51 in device control server 50 receives the text information from voice recognition server 40. Based on the text information received by communication unit 51, information processor 52 generates a control command (S15). For example, if the text information indicates text "Turn on the air conditioner," information processor 52 generates a control command for turning on air conditioner 21. Information processor 52 causes communication unit 51 to send the generated control command to control apparatus 10 (S16). Second communication unit 13 in control apparatus 10 receives the control command from device control server 50. Controller 14 causes first communication unit 12 to send the received control command to device 20 (in this case, air conditioner 21) (S17). Device 20 receives the control command and operates (or stops) according to the control command (S18). Meanwhile, after the control command is sent from device control server 50 to control apparatus 10, information processor 52 in device control server 50 updates the sending history information on control commands in storage 53 (S19). FIG. 3 is a diagram illustrating an example of the sending history information on control commands. As shown in FIG. 3, the sending history information includes the sending time of each control command, the target device of the control command, and the content of control, which are associated with each other.
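Steps S15 and S19 on the device control server side (generating a control command from recognized text, then updating the sending history) can be sketched in a few lines. The phrase-to-command table, field names, and function below are hypothetical illustrations, not the server's actual implementation.

```python
# Hypothetical sketch of S15 (generate a control command from text) and
# S19 (append the sent command to the sending history). The phrase
# table is a made-up example.

from datetime import datetime

PHRASES = {
    "Turn on the air conditioner": ("air conditioner 21", "turn on"),
    "Turn on the light": ("lighting 22", "turn on"),
}

sending_history = []  # stands in for the history kept in storage 53

def handle_text(text, now=None):
    """Return a control command for recognized text, or None if unknown."""
    if text not in PHRASES:
        return None
    device, control = PHRASES[text]
    command = {"device": device, "control": control}
    # Record sending time, target device, and content of control (FIG. 3).
    sending_history.append({"time": now or datetime.now(),
                            "device": device, "control": control})
    return command

print(handle_text("Turn on the air conditioner"))
# {'device': 'air conditioner 21', 'control': 'turn on'}
```

Each handled phrase leaves one history row behind, which is the raw material for the standard-action registration described next.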
As described above, control system 100 can control device 20 by sending a control command according to the user's voice.

Operations of Registering Standard Action

Based on the sending history information as in FIG. 3, control system 100 can register a standard action for collectively operating two or more of devices 20. Operations of registering the standard action will be described below. FIG. 4 is a flowchart of the operations of registering the standard action. The standard action may also be referred to as scene control or a routine.

Based on the sending history information on control commands in storage 53, registration unit 54 in device control server 50 identifies concentrated control in which two or more devices are sequentially controlled at time intervals less than a specific period (e.g., five minutes) (S21). In the example in FIG. 3, three sequences of concentrated control A to C are identified.

Registration unit 54 determines whether the identified concentrated control sequences satisfy a consecutiveness condition (S22). Specifically, registration unit 54 determines whether the concentrated control sequences A to C were performed in the same time slot on at least a predetermined number of consecutive days (e.g., three consecutive days) and whether the concentrated control sequences A to C all have the same set of two or more control operations performed on at least the predetermined number of consecutive days. In the example in FIG. 3, three control operations, i.e., the control of turning on air conditioner 21, the control of turning on lighting 22, and the control of opening electric shutter 24, were performed in the time slot from 8:00 to 9:00 on three consecutive days. The concentrated control sequences A to C thus satisfy the condition that they were performed in the same time slot on at least a predetermined number of consecutive days and that they all have the same set of two or more control operations performed on at least the predetermined number of consecutive days.
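The identification at step S21 can be sketched as grouping the sending history into runs of commands separated by less than the specific period. This is illustrative only; the assumption here is that the history is a time-sorted list of (time, device, control) tuples, which is not a data shape the text prescribes:

```python
from datetime import datetime, timedelta

def identify_concentrated_control(history, gap=timedelta(minutes=5)):
    """Group time-sorted (time, device, control) records into concentrated
    control sequences: runs in which two or more devices are sequentially
    controlled at intervals shorter than `gap` (step S21, sketched)."""
    sequences, current = [], []
    for record in history:
        # A gap of `gap` or more closes the current run.
        if current and record[0] - current[-1][0] >= gap:
            if len({r[1] for r in current}) >= 2:  # two or more devices
                sequences.append(current)
            current = []
        current.append(record)
    if len({r[1] for r in current}) >= 2:
        sequences.append(current)
    return sequences
```

Registration unit 54 would then test the resulting sequences against the consecutiveness condition of step S22.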
If registration unit 54 determines that the identified concentrated control sequences A to C do not satisfy the consecutiveness condition (No at S22), the process terminates. If registration unit 54 determines that the identified concentrated control sequences A to C satisfy the consecutiveness condition (Yes at S22), registration unit 54 registers a standard action for collectively performing the control operations performed in all of the concentrated control sequences A to C on at least the predetermined number of consecutive days (S23). FIG. 5 is a diagram illustrating an example of registered information indicating the registered standard action. As shown in FIG. 5, executing the standard action registered based on the sending history information as in FIG. 3 allows collectively performing the three control operations performed on all of at least the predetermined number of consecutive days, i.e., the control of turning on air conditioner 21, the control of turning on lighting 22, and the control of opening electric shutter 24.

As shown in FIG. 5, the standard action is assigned a name. That is, registration unit 54 sets the name of the standard action (S24). Specifically, registration unit 54 refers to name candidate information stored in advance in storage 53 and sets, as the name of the standard action, a name associated with the time slot in which the concentrated control sequences were performed. FIG. 6 is a diagram illustrating an example of the name candidate information. In the example in FIG. 6, the time slot from 8:00 to 9:00 in which the concentrated control sequences were performed has two name candidates associated therewith: "good-morning mode" and "morning mode". Registration unit 54 therefore sets either "good-morning mode" or "morning mode" as the name of the standard action. At this point, registration unit 54 may ask the user which name to set.
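The name lookup at step S24 can be sketched against a table like the one in FIG. 6. The slot boundaries and entries below are assumptions for illustration only:

```python
# Hypothetical name candidate information keyed by (start hour, end hour);
# the 8:00-9:00 entry mirrors the FIG. 6 example.
NAME_CANDIDATES = {
    (8, 9): ["good-morning mode", "morning mode"],
    (17, 19): ["evening mode"],
}

def name_candidates_for(hour: int):
    """Return candidate names for a standard action whose concentrated
    control sequences fall in the time slot containing `hour` (step S24)."""
    for (start, end), names in NAME_CANDIDATES.items():
        if start <= hour < end:
            return names
    return []
```

When more than one candidate is returned, registration unit 54 may present the candidates and let the user choose, as described next.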
For this purpose, registration unit 54 may cause communication unit 51 to send inquiry information to control apparatus 10, which may then ask the user which name to employ by displaying video on display 16 of control apparatus 10 or by outputting voice from voice output unit 18. The answer to the inquiry can be identified from the user's operation input to operation input interface 11 or from the user's voice input to voice capturer 17.

The name of the standard action may also be set based on the type of a device to be controlled in the standard action. For example, the name of the standard action may be set based on whether the devices to be controlled in the standard action include a particular device. In the example in FIG. 6, if the devices to be controlled in the standard action include cooking device 27, the name "cooking mode" may be set.

The name of the standard action may also be set based on whether the control involved in the standard action includes a particular control operation on a particular device. In the example in FIG. 6, if the control involved in the standard action includes the control of filling a bathtub with hot water by hot-water supply system 25, the name "bath mode" may be set. If the control involved in the standard action includes the control of unlocking electric lock 26, the name "return-home mode" may be set. If the control involved in the standard action includes the control of turning off exterior lighting 28, the name "morning mode" may be set. If the control involved in the standard action includes the control of turning on exterior lighting 28, the name "evening mode" may be set.

Registration unit 54 notifies the user of the completion of the standard action registration and the name of the registered standard action (S25). Specifically, registration unit 54 causes communication unit 51 to send notification information to control apparatus 10.
Control apparatus 10 receives the notification information and causes display 16 to display a notification screen as in FIG. 7. FIG. 7 is a diagram illustrating an example of the notification screen for providing a notification of the completion of the standard action registration. Control apparatus 10 may also cause voice output unit 18 to output voice to notify the user of the completion of the standard action registration and the name of the registered standard action.

As described above, control system 100 can automatically register the standard action based on the sending history information on control commands, without receiving operation input or voice input aimed at registering the standard action from the user.

One of the determining factors in the above standard action registration is whether the concentrated control sequences were performed in the same time slot on at least a predetermined number of consecutive days. However, the time slot does not need to be used as a determining factor. For example, registration unit 54 may register the standard action based on the control sequences irrespective of time slots. FIG. 8 is a diagram illustrating another example of the sending history information on control commands. In the example in FIG. 8, the control of unlocking electric lock 26 on each of three consecutive days was followed by three sequential control operations: the control of turning on lighting 22, the control of turning on air conditioner 21, and the control of opening electric shutter 24. However, the sequences of these three control operations were performed in different time slots on the respective days. In this case, registration unit 54 may register, irrespective of the time slots in which the control operations were performed, a standard action for collectively performing the control of turning on lighting 22, the control of turning on air conditioner 21, and the control of opening electric shutter 24.
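The time-slot-independent registration just described can be sketched as checking that the same set of follow-up controls appears after the trigger device on enough consecutive days. The per-day log shape and the control identifiers are assumptions, not the patent's data model:

```python
def register_trigger_based_action(daily_logs, trigger, min_days=3):
    """If the same set of controls followed control of `trigger` on at least
    `min_days` days, return that set as the standard action to register;
    otherwise return None (illustrative of the FIG. 8 example)."""
    followups = []
    for log in daily_logs:  # each log: ordered list of control operations
        if trigger in log:
            i = log.index(trigger)
            followups.append(frozenset(log[i + 1:]))
    if len(followups) >= min_days and len(set(followups)) == 1 and followups[0]:
        return set(followups[0])
    return None
```

In the FIG. 8 example the trigger would be the unlocking of electric lock 26, and the returned set would contain the three follow-up control operations.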
Operation Example 1 of Executing Standard Action

Now, operation example 1 of executing the standard action in control system 100 will be described. FIG. 9 is a sequence diagram of operation example 1 of executing the standard action in control system 100.

In response to the user uttering a voice instructing to execute the standard action, voice capturer 17 in control apparatus 10 captures the voice (S31). For example, the user says "Good-morning mode," which is then captured by voice capturer 17. Controller 14 causes second communication unit 13 to send a voice signal of the captured voice to voice recognition server 40 (S32). Voice recognition server 40 receives the voice signal and performs voice recognition processing (S33). Specifically, voice recognition server 40 converts the received voice signal into text information and sends the text information to device control server 50 (S34).

Communication unit 51 in device control server 50 receives the text information from voice recognition server 40. Based on the text information received by communication unit 51, executor 55 generates control commands for executing the standard action (S35). For example, if the standard action is "good-morning mode," executor 55 generates a control command for turning on air conditioner 21, a control command for turning on lighting 22, and a control command for opening electric shutter 24. Executor 55 causes communication unit 51 to send the generated control commands to control apparatus 10 (S36).

Second communication unit 13 in control apparatus 10 receives the control commands from device control server 50. Controller 14 causes first communication unit 12 to send the received control commands to devices 20 (in this case, air conditioner 21, lighting 22, and electric shutter 24) (S37). Devices 20 receive the respective control commands and operate (or stop) according to the control commands (S38).
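Step S35 amounts to expanding the uttered standard-action name into the registered list of control commands. A minimal sketch, with an assumed registered-information shape mirroring the FIG. 5 example:

```python
# Hypothetical registered information; the "good-morning mode" entry mirrors
# the three control operations registered in the FIG. 5 example.
REGISTERED_ACTIONS = {
    "good-morning mode": [
        {"device": "air_conditioner_21", "action": "power_on"},
        {"device": "lighting_22", "action": "power_on"},
        {"device": "electric_shutter_24", "action": "open"},
    ],
}

def commands_for_utterance(text_information: str):
    """Expand an uttered standard-action name into its registered control
    commands (step S35, sketched); unknown names yield no commands."""
    key = text_information.strip().rstrip(".").lower()
    return REGISTERED_ACTIONS.get(key, [])
```

The resulting list would then be sent to control apparatus 10 in a single exchange (S36), which is what allows two or more devices to be controlled from one voice signal.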
Meanwhile, after the control commands are sent from device control server 50 to control apparatus 10, executor 55 in device control server 50 updates the sending history information on control commands in storage 53 (S39). FIG. 10 is a diagram illustrating an example of the sending history information on control commands including a standard action execution history.

As described above, control system 100 can use the standard action to collectively control two or more devices. In the foregoing basic operations, controlling two or more devices requires sending two or more voice signals from control apparatus 10 to voice recognition server 40. By contrast, the standard action enables controlling two or more devices by sending a single voice signal. This can reduce voice-signal data traffic.

Operation Example 2 of Executing Standard Action

Control system 100 may execute the standard action after modifying the content of the standard action according to the utterer of the voice. Operation example 2 of executing the standard action in such a case will be described below. FIG. 11 is a sequence diagram of operation example 2 of executing the standard action in control system 100.

Processing at steps S31 to S34 is the same as in operation example 1. After step S34, voice recognition server 40 estimates, based on the voice signal, the age of the user who has uttered the voice instructing to execute the standard action. Voice recognition server 40 sends estimated age information indicating the user's estimated age to device control server 50 (S41). Communication unit 51 in device control server 50 receives the text information and the estimated age information from voice recognition server 40. Based on the estimated age information, executor 55 determines whether the standard action determined from the text information can be executed without modification.
For example, if the user is a child, it may be undesirable, from the viewpoint of safety, to operate a heat-generating device such as heater 23 or a rotationally driven device such as a dryer (not shown). As such, if the age indicated by the estimated age information is less than or equal to a predetermined age (e.g., 12), executor 55 determines that the user does not satisfy a predetermined condition (in this case, the age condition) (S42). Executor 55 then modifies the content of the standard action (S43). Specifically, if the standard action involves control such as operating heater 23 or a dryer, executor 55 excludes such control from the operations to be performed. If the standard action does not involve such control, there are cases where the content of the standard action is not modified even if the user does not satisfy the predetermined condition. It is to be noted that the processing at step S43 does not modify the registered content of the standard action, but rather modifies, as an exception at the time of executing the standard action, the operations to be performed.

Executor 55 then generates control commands for executing the modified standard action (S44). Subsequent processing is the same as in operation example 1. As described above, before executing the standard action, control system 100 can modify the content of the standard action in view of safety.

Although operation example 2 estimates the user's age based on the user's voice, this age estimation is merely exemplary. For example, control apparatus 10 may include a camera (not shown) to estimate the user's age based on the user's face image taken by the camera. In this case, the user's age is estimated by control apparatus 10, which then sends the estimated age information to device control server 50. Alternatively, control apparatus 10 may send the face image to device control server 50, which may then estimate the user's age based on the face image.
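The exception applied at step S43 can be sketched as filtering the command list before it is sent. The safety-sensitive device set and the age threshold below are assumptions drawn from the examples in the text (heater 23, a dryer, age 12):

```python
# Assumed set of heat-generating or rotationally driven devices that should
# not be operated when a child instructs execution of the standard action.
SAFETY_SENSITIVE = {"heater_23", "dryer"}
AGE_LIMIT = 12  # predetermined age from the text's example

def filter_for_user(commands, estimated_age):
    """Step S43, sketched: drop commands for safety-sensitive devices when
    the utterer's estimated age is at or below AGE_LIMIT; otherwise the
    command list is returned unmodified."""
    if estimated_age <= AGE_LIMIT:
        return [c for c in commands if c["device"] not in SAFETY_SENSITIVE]
    return list(commands)
```

Note that, as the text states, this filters only the commands generated for this execution; the registered content of the standard action itself is left untouched.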
The basis for determining whether to modify the content of the standard action is not limited to whether the user satisfies the age condition. For example, whether to modify the content of the standard action may be determined based on whether information on the user's other attributes (such as sex) satisfies a predetermined condition. In operation example 2, instead of modifying the content of the standard action, executor 55 may ask (confirm with) the user whether the user really wants to execute the standard action.

Operation Example 3 of Executing Standard Action

Control system 100 may execute the standard action after modifying the content of the standard action according to the electricity usage in building 80. Operation example 3 of executing the standard action in such a case will be described below. FIG. 12 is a sequence diagram of operation example 3 of executing the standard action in control system 100.

As mentioned above, control apparatus 10 manages the electricity usage for each branch circuit (in other words, for each device 20) measured by electricity meter 30. Control apparatus 10 periodically sends, to device control server 50, electricity usage information indicating the amount of electricity used by each device 20 (S51). Once communication unit 51 in device control server 50 receives the electricity usage information, the electricity usage information in storage 53 is updated (S52).

While device control server 50 manages the electricity usage in building 80 in the above manner, the processing at steps S31 to S34 is performed as in operation example 1. After step S34, based on the electricity usage information, executor 55 in device control server 50 determines whether executing, without modification, the standard action determined from the text information would cause the electricity usage in building 80 to exceed a predetermined value.
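One possible sketch of this determination and of the modification at steps S53 and S54: project the usage that each command would add and keep only the commands that fit the budget. The greedy keep-cheapest strategy and the expected-increase map are assumptions, not the patent's prescribed method:

```python
def modify_for_power_budget(commands, current_usage, expected_increase, limit):
    """If executing all commands would push the electricity usage in the
    building over `limit`, drop the costliest commands until the budget is
    met (steps S53-S54, sketched). `expected_increase` maps each device to
    its expected additional usage, e.g. in watts (an assumed data shape)."""
    kept, projected = [], current_usage
    # Consider the cheapest commands first so that as many as possible fit.
    for cmd in sorted(commands, key=lambda c: expected_increase.get(c["device"], 0)):
        inc = expected_increase.get(cmd["device"], 0)
        if projected + inc <= limit:
            kept.append(cmd)
            projected += inc
    return kept
```

As an alternative to dropping commands, the parameters of a command (e.g., a set temperature) could be adjusted instead, which the text also permits.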
Specifically, executor 55 refers to the current electricity usage determined based on the latest electricity usage information, and to the increase in electricity usage expected to result from executing the standard action, determined based on the past electricity usage information. Executor 55 can then determine whether executing the standard action without modification would cause the electricity usage in building 80 to exceed a predetermined value. The predetermined value may be manually set by the user or may be determined based on a request to reduce electricity usage (such as a demand response signal) provided from a server of an electric power company.

If executor 55 determines that the electricity usage would exceed the predetermined value (S53), executor 55 modifies the content of the standard action (S54). Specifically, executor 55 excludes at least one of the control operations involved in the standard action. Alternatively, executor 55 may modify the content of at least one of the control operations involved in the standard action (e.g., the set temperature of air conditioner 21) so that the electricity usage is reduced. Executor 55 then generates control commands for executing the modified standard action (S55). Subsequent processing is the same as in operation example 1. As described above, before executing the standard action, control system 100 can modify the content of the standard action in view of the electricity usage.

Operations of Modifying Registered Content of Standard Action

The sending history information in FIG. 10 above indicates that the execution of the standard action "good-morning mode" is always followed by changing the set temperature of air conditioner 21.
If, as in this case, a control command is sent during the execution of the standard action on at least a predetermined number of consecutive days, registration unit 54 may modify the registered content of the standard action based on that control command. FIG. 13 is a flowchart of operations of modifying the registered content of the standard action.

Registration unit 54 in device control server 50 refers to the sending history information in storage 53 to identify a control command sent during the execution of the standard action (S61). Registration unit 54 determines whether the identified control command has been sent in the same manner on at least a predetermined number of consecutive days (e.g., three consecutive days) (S62). If registration unit 54 determines that the identified control command has not been sent in the same manner on at least the predetermined number of consecutive days (No at S62), registration unit 54 does not modify the registered content of the standard action. If registration unit 54 determines that the identified control command has been sent in the same manner on at least the predetermined number of consecutive days (Yes at S62), registration unit 54 modifies the registered content of the standard action (S63). That is, registration unit 54 rewrites the registered information (such as the one in FIG. 5) in storage 53.

For example, a change command may be sent during the execution of the standard action, as shown in FIG. 10. The change command is a control command for changing the set temperature of air conditioner 21 among the target devices of the standard action. In this case, at step S63, the set temperature in the control command to be sent to air conditioner 21 in response to an instruction to execute the standard action is changed from the default value to the temperature specified by the change command (27° C. in FIG. 10).
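The rewriting at step S63 can be sketched as applying the consistently observed follow-up command to the registered information. The three command kinds here (change, add, stop) correspond to the examples in FIGS. 10, 14, and 15; the data shapes and kind names are assumptions:

```python
def apply_followup(registered, followup):
    """Step S63, sketched: update a registered standard action (a mapping of
    device -> command parameters) based on a command consistently observed
    during execution. `followup` is a (kind, device, params) tuple where
    kind is 'change', 'add', or 'stop' (assumed naming)."""
    updated = dict(registered)  # leave the original registered info intact
    kind, device, params = followup
    if kind == "change" and device in updated:
        updated[device] = {**updated[device], **params}  # e.g. new set temperature
    elif kind == "add":
        updated[device] = params  # e.g. exterior lighting 28 added (FIG. 14)
    elif kind == "stop":
        updated.pop(device, None)  # e.g. air conditioner 21 removed (FIG. 15)
    return updated
```

Registration unit 54 would then write the updated mapping back to storage 53 as the new registered information.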
Registration unit 54 thus modifies, based on the change command, the content of the control command to be sent to air conditioner 21 in response to an instruction to execute the standard action.

FIG. 14 is a diagram illustrating another example of the sending history information including the standard action execution history. The sending history information in FIG. 14 indicates that the execution of the standard action "good-morning mode" is always followed by turning on exterior lighting 28. In this case, at step S63, exterior lighting 28 is added to the devices to which control commands are to be sent in response to an instruction to execute the standard action.

FIG. 15 is a diagram illustrating yet another example of the sending history information including the standard action execution history. The sending history information in FIG. 15 indicates that the execution of the standard action "good-morning mode" is always followed by turning off air conditioner 21, which was turned on by the execution of the standard action. In this case, at step S63, air conditioner 21 is excluded from the devices to which control commands are to be sent in response to an instruction to execute the standard action.

As described above, control system 100 can automatically change the registered content of the standard action based on the sending history information on control commands, without receiving operation input or voice input aimed at changing the registered content of the standard action from the user.

Operations for Standard Action not Executed as Usual

The sending history information in FIG. 10 indicates that the standard action "good-morning mode" is executed in the same time slot every day. For such a routine standard action, further consideration may be given to processing that could be performed by control system 100 when the standard action is not executed as usual.
Operations including such processing will be described below. FIG. 16 is a flowchart of operations including processing performed for the standard action not executed as usual.

Executor 55 in device control server 50 refers to the sending history information in storage 53 to identify the time slot in which the standard action was executed in the past (S71). Based on the sending history information as in FIG. 10, executor 55 can determine that the standard action was usually executed in the time slot from 8:00 to 8:30 in the past. Executor 55 determines whether the standard action has been executed today in the time slot from 8:00 to 8:30 (i.e., the identified time slot) (S72).

If executor 55 determines that the standard action has not been executed in the identified time slot (No at S72), executor 55 causes communication unit 51 to send confirmation information to control apparatus 10 (S73). The confirmation information is information for confirming whether the standard action needs to be executed. Control apparatus 10 receives the confirmation information and causes display 16 to display a confirmation screen for confirming whether the standard action needs to be executed. FIG. 17 is a diagram illustrating an example of the confirmation screen for confirming whether the standard action needs to be executed.

Executor 55 then causes communication unit 51 to send notification information to mobile terminal 60 (S74). The notification information is information for notifying a user outside building 80 (specifically, the user of mobile terminal 60) that the standard action was not executed. Mobile terminal 60 receives the notification information and causes its display to show a notification screen for providing a notification that the standard action was not executed. FIG. 18 is a diagram illustrating an example of the notification screen for providing a notification that the standard action was not executed.
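The check at steps S71 and S72, and the decision to send the confirmation and notification information, can be sketched as follows. The time representation is an assumption:

```python
from datetime import time

def needs_followup(usual_slot, executions_today, now):
    """Return True when the usual execution slot has passed today without the
    standard action being executed, in which case confirmation information
    (S73) and notification information (S74) should be sent."""
    start, end = usual_slot
    executed = any(start <= t <= end for t in executions_today)
    return now > end and not executed
```

For the FIG. 10 example, `usual_slot` would be (8:00, 8:30), and a check made after 8:30 with no execution recorded would trigger both the confirmation screen and the notification to mobile terminal 60.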
For example, if the user living in building 80 is an elderly person, a failure to execute the standard action as usual may suggest the occurrence of an abnormality in the user. A notification may then be provided to mobile terminal 60 carried by the user's relative living apart from the user, so that the relative can address the user's abnormality. If executor 55 determines that the standard action has been executed in the identified time slot (Yes at S72), executor 55 sends no confirmation information or notification information.

As described above, for the standard action not executed as usual, control system 100 can ask the user in building 80 whether the standard action does not need to be executed. In addition, for the standard action not executed as usual, control system 100 can notify a user outside building 80 that the standard action was not executed as usual. It is to be noted that, for the standard action not executed as usual, both the notification information and the confirmation information do not need to be sent; rather, at least one of the notification information and the confirmation information may be sent.

Variations

In the above embodiment, devices 20 are controlled according to the user's voice inputs, and the standard action is registered based on the sending history information on control commands resulting from the user's voice inputs. Alternatively, devices 20 may be controlled according to the user's hand-inputted operation inputs to operation input interface 11, and the standard action may be registered based on sending history information resulting from the user's hand-inputted operation inputs to operation input interface 11. Further, devices 20 may be controlled according to the user's voice inputs or hand-inputted operation inputs, and the standard action may be registered based on sending history information resulting from both the user's hand-inputted operation inputs and the user's voice inputs.
The sending history information is a mere example of control history information on devices 20. Control system 100 may also register the standard action based on control history information other than the sending history information. For example, control system 100 may register the standard action based on the consecutiveness of control on devices 20 determined from history information on operation inputs for controlling devices 20 received by operation input interface 11. Control system 100 may also register the standard action based on the consecutiveness of control on devices 20 determined from history information on voice inputs for controlling devices 20 captured by voice capturer 17. The control history information may thus include the sending history information on control commands, the history information on operation inputs for controlling devices 20, and the history information on voice inputs for controlling devices 20.

In the above embodiment, voice capturer 17 provided in control apparatus 10 is used as a voice input user interface. Alternatively, a device separate from control apparatus 10 may be used as a voice input user interface. For example, a smart speaker or a mobile terminal may be used as a voice input user interface.

The functions of the voice recognition server and the device control server described in the above embodiment may be integrated into control apparatus 10. For example, controller 14 in control apparatus 10 may have the voice recognition function, the functions of registration unit 54, and the functions of executor 55. This allows control apparatus 10 to control devices 20 and register the standard action without communicating over wide area communication network 70.
Advantageous Effects, Etc.

As described above, control system 100 includes: communication unit 51 that sends, to each of a plurality of devices 20, a control command for controlling that device 20; registration unit 54 that registers a standard action for collectively controlling at least two devices included in the plurality of devices 20, based on the consecutiveness of control on devices 20 determined from control history information of the plurality of devices 20; and executor 55 that causes communication unit 51 to send a control command to each of the at least two devices when execution of the registered standard action is instructed. In the above-described embodiment, the control command is sent from communication unit 51 to each of the at least two devices via control apparatus 10. Such a control system 100 can automatically register the standard action without receiving operation input or voice input aimed at registering the standard action from the user.

Furthermore, for example, registration unit 54 registers the standard action if, based on the control history information, the at least two devices were controlled in a specific time slot on at least a predetermined number of consecutive days. Such a control system 100 can automatically register the standard action based on the consecutiveness of control on devices 20 which takes the time slot into consideration.

Furthermore, for example, registration unit 54 registers the standard action if, based on the control history information, the at least two devices were controlled within a predetermined period after a specific device included in the plurality of devices 20 was controlled, on at least a predetermined number of consecutive days. Such a control system 100 can automatically register the standard action based on the consecutiveness of control on devices 20 which is determined according to the operating sequence of devices 20.
Furthermore, for example, registration unit 54 sets the name of the registered standard action based on the specific time slot. Such a control system 100 can set the name of the standard action based on the time slot in which the at least two devices are to be controlled.

Furthermore, for example, registration unit 54 sets the name of the registered standard action based on the types of the at least two devices. Such a control system 100 can set the name of the standard action based on the types of the at least two devices (i.e., the kinds of devices 20 that are included in the at least two devices).

Furthermore, for example, when a control command is sent to each of the at least two devices during execution of the standard action, registration unit 54 modifies the registered content of the standard action based on the control command. Such a control system 100 can automatically change the registered content of the standard action without receiving operation input or voice input aimed at changing the registered content of the standard action from the user.

Furthermore, for example, in the modifying of the registered content, when a change command, which is a control command for changing an operation state of a first device among the at least two devices, is sent during execution of the standard action, the content of the control command to be sent to the first device when execution of the standard action is instructed is modified based on the change command. Such a control system 100 can change the device control content in the standard action without receiving operation input or voice input aimed at changing the registered content of the standard action from the user.
Furthermore, for example, in the modifying of the registered content, when a control command is sent to a second device, which is included in the plurality of devices 20 other than the at least two devices, during execution of the standard action, the second device is included among the devices to which a control command is to be sent when execution of the standard action is instructed. Such a control system 100 can add a device to be controlled in the standard action without receiving operation input or voice input aimed at changing the registered content of the standard action from the user.

Furthermore, for example, in the modifying of the registered content, when a stop command, which is a control command for stopping operation of a third device among the at least two devices, is sent during execution of the standard action, the third device is excluded, based on the stop command, from the devices to which a control command is to be sent when execution of the standard action is instructed. Such a control system 100 can remove a device to be controlled in the standard action without receiving operation input or voice input aimed at changing the registered content of the standard action from the user.

Furthermore, for example, when execution of the standard action is not instructed in a time slot in which the standard action was executed in the past, executor 55 further causes communication unit 51 to send confirmation information for confirming whether the standard action needs to be executed. Such a control system 100 can, when the standard action is not being performed as usual, confirm with a user inside building 80 whether the standard action need not be executed.

Furthermore, for example, when execution of the standard action is not instructed in a time slot in which the standard action was executed in the past, executor 55 further causes communication unit 51 to send notification information for notifying that the standard action is not being executed.
Such a control system 100 can, when the standard action is not being performed as usual, notify a user outside building 80 that the standard action is not being performed as usual.

Furthermore, for example, when it is determined that a user instructing execution of the registered standard action does not satisfy a predetermined condition, executor 55 (i) modifies the content of the standard action to prevent communication unit 51 from sending a control command to a predetermined device among the at least two devices and (ii) executes the standard action. Such a control system 100 can, when executing the standard action, change the content of the standard action according to an attribute, etc., of the user instructing the execution of the standard action. For example, when the user is a child, control system 100 can, taking safety into consideration, prevent some of the devices from operating.

Furthermore, for example, the execution of the registered standard action is instructed by the user's voice, and whether or not the user satisfies the predetermined condition is determined based on the voice of the user. Such a control system 100 can, when executing the standard action, change the content of the standard action according to the voice of the user instructing the execution of the standard action.

Furthermore, for example, when it is determined that the amount of electricity used in a building in which the plurality of devices 20 are installed will exceed a predetermined amount if the registered standard action is executed, executor 55 modifies the content of the standard action and executes the standard action. Such a control system 100 can, when executing the standard action, change the content of the standard action according to the amount of electricity used in building 80. For example, control system 100 can prevent some of the devices from operating in order to make the amount of electricity used in building 80 less than or equal to a predetermined value.
A control method executed by a computer such as control system 100 includes: sending, to each of a plurality of devices 20, a control command for controlling such device 20; registering a standard action for collectively controlling at least two devices included in the plurality of devices 20, based on consecutiveness of control of the devices 20, which is determined from control history information of the plurality of devices 20; and sending a control command to each of the at least two devices when execution of the standard action registered is instructed. Such a control method can automatically register the standard action without receiving operation input or voice input aimed at registering the standard action from the user.

Other Embodiments

Although an embodiment has been described thus far, the present invention is not limited to the foregoing embodiment. For example, the control system is implemented by a plurality of devices in the foregoing embodiment but may be implemented by a single device. When the control system is implemented by a plurality of devices, the structural components included in the respective systems may be allocated to the devices in any way.

Furthermore, for example, there is no particular limitation as to the communication method between devices in the foregoing embodiment. Furthermore, the communication between devices may involve intervention by a relay device that is not shown in the figures. Moreover, the transmission path of information described in the foregoing embodiment is not limited to the transmission path shown in the sequence diagrams.

Furthermore, in the foregoing embodiment, a process executed by a particular processing unit may be executed by a different processing unit. Moreover, the order of multiple processes may be changed, and multiple processes may be executed in parallel. Furthermore, in the foregoing embodiment, respective structural components may be realized by executing a software program suited to such structural components.
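The registration step of this control method, deriving a standard action from control consecutiveness in the history, can be sketched as follows. The windowing heuristic (commands within a fixed number of seconds of one another, repeated a minimum number of times) is an assumption for illustration; the description does not prescribe a particular algorithm.

```python
from collections import Counter
from itertools import combinations

def find_standard_actions(history, window=300, min_count=3):
    """Scan a control history of (timestamp_seconds, device_id) events
    and propose device groups that are repeatedly controlled
    consecutively, i.e. within `window` seconds of one another on at
    least `min_count` occasions. Parameters are illustrative."""
    pair_counts = Counter()
    events = sorted(history)
    for i, (t_i, dev_i) in enumerate(events):
        group = {dev_i}
        for t_j, dev_j in events[i + 1:]:
            if t_j - t_i > window:
                break  # events are sorted, so no later event qualifies
            group.add(dev_j)
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1
    # Device pairs seen consecutively often enough become candidates
    # for registration as a standard action.
    return [set(pair) for pair, n in pair_counts.items() if n >= min_count]
```

A real implementation would likely also consider time slots, command types, and user confirmation before registering, but the core idea of mining co-occurring control commands from the history is the same.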
The respective structural components may be implemented by a program executor such as a CPU or a processor reading out and executing the software program recorded in a recording medium such as a hard disk or a semiconductor memory. Furthermore, the respective structural components may be realized by hardware. For example, the respective structural components may be a circuit (or an integrated circuit). These circuits as a whole may compose a single circuit or may be individual circuits. Moreover, each of the circuits may be implemented by a general-purpose circuit or a dedicated circuit.

Furthermore, general or specific aspects of the present invention may be implemented as a system, an apparatus, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or may be implemented as any combination of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium. For example, the present invention may be implemented as a control apparatus according to the foregoing embodiment or a control system corresponding thereto. Furthermore, the present invention may be implemented as a control method executed by a computer of the control system, or the like, and may be implemented as a program for causing a computer to execute such a method. The present invention may be implemented as a non-transitory computer-readable recording medium having such a program recorded thereon.

Forms obtained through various modifications to the respective embodiments conceived by those skilled in the art, as well as forms obtained by any combination of structural components and functions in the respective embodiments within the essence of the present invention, are included in the present invention.

REFERENCE SIGNS LIST

20 device
51 communication unit
54 registration unit
55 executor
80 building
100 control system
11860599

DETAILED DESCRIPTION

One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions are made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.

When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.

The present disclosure is generally directed towards industrial automation-related systems and methods that use redundant, concurrently operating power supplies to feed multiple, adjustable backplane power rails that supply downstream devices with power. Use of concurrent and redundant power supplies may provide the industrial automation system with an increased capability to respond to changes in component-level operations and electrical faults, which may result in improved reliability and higher system availabilities.
High availability may be manifested as, for example, a high mean time to failure (MTTF) combined with a low mean time to repair (MTTR). To elaborate, redundant power supplies may power two or more power conditioners. Each power conditioner may supply respective amounts of power to different backplanes that distribute the power to a shared load, such as one or more input/output (IO) modules. A load may couple to at least two backplanes via power ORing circuitry. For example, a secondary power conditioner may back up a primary power conditioner, such that the secondary power conditioner and the primary power conditioner may concurrently supply the shared load. Since the power conditioners provide voltage to the shared load at the same time, if the primary power supply and/or primary power conditioner were to go offline, then the load may be switched to being powered by the secondary power conditioner without causing interruption to operation of the load. Indeed, if the primary power supply and/or the primary power conditioner become unavailable, the secondary power supply and the secondary power conditioner become the power source of the load. When coupled to the backplanes via power ORing circuitry, the load may lose power when both power supplies and power conditioners are unavailable but may remain powered on if the primary power supply and/or the primary power conditioner is lost. Consequently, common cause misoperation may occur less since there is less opportunity for a single fault to make both power supplies unavailable during distribution to the load when power ORing occurs at the load. Furthermore, since the secondary power supply provides power to the load concurrently with the primary power supply, even if the primary power supply were to become unavailable, the load would not have to be switched to the secondary power supply to receive power. 
Thus, the load may experience fewer signal transient noises than typically introduced during the switching process. Additional redundancy systems are also described herein to further improve industrial automation reliability and availability, which may further reduce a likelihood of the load going offline.

Although redundant power conditioners may improve operation, it may be difficult or undesirable to perform physical tests on redundant power conditioners concurrently providing power to the same load to test the outputs. Since both of the power conditioners continuously provide the same voltage amount to the shared load, the load may be seamlessly switched between the primary power supply and secondary power supply without causing any interruption. This may also make it difficult or undesirable to take either power supply offline for validation testing. Indeed, taking one power supply offline compromises availability since the remaining power supply may fail while the other is offline. To remedy this, systems and methods described herein relate to online validation testing that does not disrupt concurrent power supply operations. To verify that a primary power conditioner is capable of supplying a full load current to a load, the primary power conditioner may adjust (e.g., increase, decrease) its voltage output provided to the load, such that the load may be fully supplied by the primary power conditioner, thereby confirming that the primary power conditioner is capable of supplying the full load in the event of the secondary power conditioner being unavailable, such as during a future normal operation.

Independent operation of different portions of the industrial automation system, including the power distribution system, may enhance diagnostic capabilities of the industrial automation system. System-wide diagnostic operations may be performed based on the monitored operation of the different individual IO modules (e.g., loads) of the industrial automation system.
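The online validation test described above can be sketched as follows. The callable-based interface (set_voltage, measure_current), the step size, and the tolerance are all illustrative assumptions; the point is only that the primary conditioner's rail is raised so the ORing stage draws the entire load from it, the full-load current is checked, and normal concurrent sharing is then restored.

```python
def validate_primary(set_voltage, measure_current, nominal_v, full_load_a,
                     step_v=0.2, tolerance=0.05):
    """Online validation sketch: temporarily raise the primary
    conditioner's output above nominal so the power ORing circuitry
    sources the whole load from it, then verify that the primary
    actually carries (nearly) the full load current."""
    set_voltage(nominal_v + step_v)          # primary now wins the OR
    try:
        ok = measure_current() >= (1.0 - tolerance) * full_load_a
    finally:
        set_voltage(nominal_v)               # restore concurrent sharing
    return ok
```

Because the secondary conditioner keeps its output at nominal throughout, the load never loses power during the test, which is the property that makes the validation "online."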
The power ORing circuitry may also include sensing circuitry to obtain measurements of current, voltage, temperature, or any other metric, of portions of the power conditioners, of the load, or the like for diagnostic operations. The measurements may enhance in-field control determinations. For example, a local control system of one device may adjust its operation based on a measurement made in another device without intervention of a system-wide control system. Health monitoring provided by the diagnostic capabilities may enable detection of backplane, power conditioner, and/or power supply inability to support a full load in the event that one of the power supplies is unavailable. Health monitoring at the IO module level (e.g., load module level) may also enable detection of a load module that is unable to support its own load in the event of the failure of one of the power rails.

Although the following example environment in which the present embodiments may be implemented is described in terms of a petrochemical application, it should be understood that concurrent and redundant power supplying and distribution systems may also improve operations in other applications. For example, burner management applications, gas production applications, mining applications, and/or other heavy industrial applications may benefit from power ORing circuitry embodiments described herein, as well as any systems in which improved reliability and efficiency (e.g., less down time) is desired.

By way of introduction, FIG. 1 is a diagrammatic representation of a petrochemical-related process in which embodiments described below may be implemented. In particular, illustrated is an example reactor system 10, such as a polymerization reactor capable of processing olefin monomers, like ethylene or hexene, to produce homopolymers or co-polymers as products 12.
Any suitable reactor may be used, including batch, slurry, gas-phase, solution, high pressure, tubular or autoclave reactors, or any combination thereof. For ease of discussion, FIG. 1 refers to a loop reactor 14 for polymerization. However, it should be noted that the discussion set forth below is intended to be applicable, as appropriate, to any petrochemical process, industrial process, manufacturing process, or the like, as a way to provide context to the following discussion of FIGS. 2-9.

Production processes, like the polymerization reactor process shown in FIG. 1, may occur on an ongoing basis as part of a continuous operation to generate products (e.g., product 12). Sometimes a variety of both continuous and batch systems may be employed throughout a production process. Various suppliers may provide reactor feedstocks 16 to the reactor system 10 via pipelines, trucks, cylinders, drums, and so forth. The suppliers may include off-site and/or on-site facilities, including olefin plants, refineries, catalyst plants, on or off-site laboratories, and the like. Examples of possible feedstocks 16 include olefin monomers 18, diluents or diluting agents 20, catalysts 22, and/or other additives. The other feed components, additional raw materials 24, may also be provided to the reactor 14. Feedstocks 16 may change when using different manufacturing processes and/or when manufacturing a different final product. The feedstocks 16 may be stored or processed in any suitable vessel or process, such as in monomer storage and feed tanks, diluent vessels, catalyst tanks, co-catalyst cylinders and tanks, treatment beds like molecular sieve beds and/or aluminum packing, and so forth, prior to or after being received at the reactor system 10. The reactor system 10 may include one type of reactor in a system or multiple reactors of the same or different type, and desired processing conditions in one of the reactors may be different from the operating conditions of the other reactors.
The product 12 may be moved from the reactor system 10 for additional processing, such as to form polymer pellets from the product 12. In general, the product 12, or processed product (e.g., pellets), may be transported to a product load-out area for storage, blending with other products or processed products, and/or loading into railcars, trucks, bags, ships, and so forth, for distribution to customers.

Processes, like the reactor system 10, may receive or process feedstocks 16 at relatively high pressures and/or relatively high temperatures. For example, a hydrogen feedstock may be handled by the reactor system 10 via pipeline at approximately 900-1000 pounds per square inch gauge (psig) at 90-110° F. Furthermore, some products may be generated using highly reactive, unstable, corrosive, or otherwise toxic materials as the feedstock 16 or as intermediate products, such as hydrogen sulfide, pure oxygen, or the like. Heat, pressure, and other operating parameters may be employed appropriately to obtain appropriate reaction conditions, which may increase a reactivity, instability, or corrosive nature of the feedstock 16. These materials may be desired to be processed and transported using reliable and highly available systems, for example, to reduce a likelihood of a release event from occurring.

Each of the feedstocks 16, sub-reactor 26, and/or feed system 32 may use different operating parameters to create suitable output intermediate products for use in subsequent reactions or as a product output. Operating parameters of the reactor system 10 may include temperature, pressure, flow rate, mechanical agitation, product takeoff, component concentrations, polymer production rate, and so forth, and one or more may be selected to achieve the desired polymer properties. Controlling temperature may include using a gas burner, an electrical heating conduit, a heat exchange device 28, or the like, to increase or reduce the temperature of intermediate products of the reactor system 10.
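Temperature control of the kind described here can be as simple as an on/off (hysteresis) command for a cooling-fluid valve. The sketch below assumes an illustrative 150-250° F. operating band and is not the control scheme of any particular reactor system:

```python
def cooling_command(temp_f, cooling_on, low=150.0, high=250.0):
    """Hysteresis (on/off) control sketch for a cooling-fluid valve.
    Open the valve above the high limit, close it below the low limit,
    and otherwise hold the previous actuator state so the valve does
    not chatter inside the operating band."""
    if temp_f > high:
        return True    # open valve: circulate cooling fluid
    if temp_f < low:
        return False   # close valve: let reaction heat raise temperature
    return cooling_on  # inside the band: keep current state
```

Real reactor temperature control would typically use a continuous (e.g., PID) controller driving a modulating valve, but the hysteresis form shows the basic open/close actuator signaling in a few lines.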
As an example, during operation, a cooling fluid may be circulated within the cooling jackets of the heat exchange devices 68 as needed to remove the generated heat and to maintain the temperature within the desired range, such as between approximately 150° F. to 250° F. (65° C. to 121° C.) for polyethylene. Feedstock 16 flow rates, control of operating parameters, and the like, may be managed by a control system (e.g., like the control system shown in FIG. 2). The control system may generate control signals, for example, control signals that are transmitted to one or more actuators 30 to cause the actuator to open or close (or partially open or partially close) as a way to control operating parameters of the feedstock 16, control of other operating parameters, and the like.

Care may be taken when adjusting operating parameters since petrochemical manufacturing processing may be highly sensitive to erroneous operation. For example, fractions of a percentage of reliability change in a control system of the reactor system 10 may make the difference between a process being taken offline and a process working as expected. With the foregoing in mind, the components of the reactor system 10 may be connected to power supplies, power supply conditioners, and other systems that enable the components to be highly available. Moreover, it should be noted that the present embodiments described herein may be implemented in a variety of industrial environments and should not be limited to the reactor system 10 described above.

Referring now to FIG. 2, FIG. 2 is an illustration of an example industrial automation system 46 that includes a distributed control system 48 (e.g., a “DCS”). The industrial automation system 46 may include the reactor system 10 from FIG. 1 and/or any number of industrial automation components.
Industrial automation components may include a user interface, the distributed control system 48, a motor drive, a motor, a conveyor, specialized original equipment manufacturer machines, a fire suppressant system, and any other device that may enable production or manufacture of products or processing of certain materials. In addition to the aforementioned types of industrial automation components, the industrial automation components may also include controllers, input/output (IO) modules, motor control centers, motors, human-machine interfaces (HMIs), user interfaces, contactors, starters, sensors, drives, relays, protection devices, switchgear, compressors, network switches (e.g., Ethernet switches, modular-managed, fixed-managed, service-router, industrial, unmanaged), and the like. The industrial automation components may also be related to various industrial equipment such as mixers, machine conveyors, tanks, skids, specialized original equipment manufacturer machines, and the like. The industrial automation components may also be associated with devices used in conjunction with the equipment such as scanners, gauges, valves, and the like. In one embodiment, every aspect of the industrial automation component may be controlled or operated by a single controller (e.g., control system). In another embodiment, the control and operation of each aspect of the industrial automation components may be distributed via multiple controllers (e.g., control systems).

The industrial automation system 46 may be divided logically and physically into different units 50 corresponding to cells, areas, factories, subsystems, or the like of the industrial automation system 46. The industrial automation components (e.g., load components, processing components) may be used within a unit 50 to perform various operations for the unit 50. The industrial automation components may be logically and/or physically divided into the units 50 as well to control performance of the various operations for the unit 50.
The distributed control system 48 may include computing devices with communication abilities, processing abilities, and the like. For example, the distributed control system 48 may include processing modules, a control system, a programmable logic controller (PLC), a programmable automation controller (PAC), or any other controller that may monitor, control, and operate an industrial automation device or component. The distributed control system 48 may be incorporated into any physical device (e.g., the industrial automation components) or may be implemented as a stand-alone computing device (e.g., general purpose computer), such as a desktop computer, a laptop computer, a tablet computer, a mobile computing device, or the like. For example, the distributed control system 48 may include many processing devices logically arranged in a hierarchy to implement control operations by disseminating control signals, monitoring operations of the industrial automation system 46, logging data as part of historical tracking operations, and so on.

In an example distributed control system 48, different hierarchical levels of devices may correspond to different operations. A first level 52 may include input/output communication modules (IO modules) to interface with industrial automation components in the unit 50. A second level 54 may include control systems that control components of the first level and/or enable intercommunication between components of the first level 52, even if not communicatively coupled in the first level 52. A third level 56 may include network components, such as network switches, that support availability of a mode of electronic communication between industrial automation components. A fourth level 58 may include server components, such as application servers, data servers, human-machine interface servers, or the like. The server components may store data as part of these servers that enable industrial automation operations to be monitored and adjusted over time.
A fifth level 60 may include computing devices, such as virtual computing devices operated from a server to enable human-machine interaction via an HMI presented via a computing device. It should be understood that levels of the hierarchy are not exhaustive and nonexclusive, and thus devices described in any of the levels may be included in any of the other levels. For example, any of the levels may include some variation of an HMI.

One or more of the levels or components of the distributed control system 48 may use and/or include one or more processing components, including microprocessors (e.g., field programmable gate arrays, digital signal processors, application specific instruction set processors, programmable logic devices, programmable logic controllers) and tangible, non-transitory, machine-readable media (e.g., memory such as non-volatile memory, random access memory (RAM), read-only memory (ROM)), and so forth. The machine-readable media may collectively store one or more sets of instructions (e.g., algorithms) in computer-readable code form, and may be grouped into applications depending on the type of control performed by the distributed control system 48. In this way, the distributed control system 48 may be application-specific, or general purpose.

Furthermore, portions of the distributed control system 48 may be, or be a part of, a closed loop control system (e.g., uses feedback for control), an open loop control system (e.g., does not use feedback for control), or may include a combination of both open and closed system components and/or algorithms. Further, in some embodiments, the distributed control system 48 may utilize feed forward inputs. For example, depending on information relating to the feedstocks 16 (e.g., compositional information relating to the catalyst 22 and/or the additional raw material 24), the distributed control system 48 may control the flow of any one or a combination of the feedstocks 16 into the sub-reactor 26, the reactor 14, or the like.
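A combined feed-forward/feedback adjustment of a feedstock flow can be sketched as below. The gains and the use of a single catalyst-activity figure are invented for illustration; a real distributed control system 48 would use process-specific models:

```python
def valve_setpoint(target_flow, measured_flow, catalyst_activity,
                   kp=0.5, ff_gain=0.8):
    """Combined control sketch (illustrative gains): a feed-forward
    term scales the base command using known catalyst activity, while
    a proportional feedback term corrects the remaining measured
    flow error."""
    feedforward = ff_gain * target_flow / max(catalyst_activity, 1e-6)
    feedback = kp * (target_flow - measured_flow)
    return feedforward + feedback
```

The feed-forward term acts before any error appears (open-loop, using known feedstock information), while the feedback term closes the loop on the measured flow, mirroring the combination of open and closed loop components described above.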
Each of the levels 52, 54, 56, 58, 60 may include component redundancies, which may help provide a high availability control system. For example, within the first level, redundant and concurrently operating backplanes may provide power to each of the IO modules.

To elaborate, FIG. 3 is an illustration of a distributed modular IO system 72 associated with the distributed control system 48 of FIG. 2. The distributed modular IO system 72 may include IO devices 74, an IO module 76, and backplanes 78 (78A, 78B). The distributed modular IO system 72 may include a network adapter 80 having two or more adapter modules 86 (86A, 86B). The network adapter 80 may be coupled to at least one industrial automation network N1, N2. The first and second redundant industrial automation networks N1, N2 may be Parallel Redundancy Protocol (PRP) LAN networks, Ethernet/IP networks, or other industrial automation networks so that the network adapter 80 may receive data from, transmit data to, and otherwise communicate with one or more industrial control modules, control systems, processing circuitry, or the like, such as one or more programmable logic controllers (PLC), microprocessors, and/or other electronic processors for machine and/or process control.

The network adapter 80 may include a base 82 mounted to the support rail 84 or other support structure. The network adapter 80 may include first and second identical or otherwise redundant adapter modules 86 (86A, 86B) operating in parallel with each other. The redundant adapter modules 86 may each be releasably connected to the adapter base 88. Each of the adapter modules 86 may be operably connected to both the first and second networks N1, N2 by connections in the adapter base 88. The adapter modules 86 may also include electronic circuitry to communicate data with circuitry coupled to the networks N1, N2, with IO devices 74, or with other interconnected components.
The network adapter 80 may include first and second media landing modules 90 (90A, 90B) removably coupled to the first and second adapter modules 86 through the adapter base 88. The media landing boards 90 may each include at least two network connectors NC, such as RJ45 connectors, Small Form-Factor Pluggable (SFP) connectors, optical fiber connectors, or the like. The industrial networks N1, N2 may be coupled to the media landing boards 90 via the network connectors NC, and thus be connected to the adapter modules 86 through the media landing boards 90.

The network adapter 80 may also include redundant power conditioning and supplying IO modules (power conditioners) 92 (92A, 92B), which may be coupled to the adapter base 88 and may include a power input terminal PT. The power input terminal PT may be used when connecting with at least one source of electrical power, such that the power conditioners 92 may supply system electrical power to the network adapter 80 via the adapter base 88, as well as to other components coupled to the backplane 78. As shown herein, the power input terminals PT are removably connected to the adapter base 88 and are operably connected to the power conditioners 92 through the adapter base 88.

The IO device 74 may include a base 94 also mounted to the support rail 84 or another support structure. The base 94 may be located adjacent to base 82. The base 94 may be operably, physically, and/or electrically connected to the base 82 via multi-contact electrical connectors 96 such that the backplane 78 may carry power and communications between the network adapter 80, the IO devices 74, the industrial networks N1, N2, and the like. FIG. 3 shows the backplane 78 as being external to the IO device 74, but those of ordinary skill in the art will recognize that the backplane 78 circuit or network is physically and electrically constructed within and extends through printed circuit boards and other circuitry located in the bases 88 and bases 94 via the electrical connectors 96. The IO device 74 may include IO processing modules (IO modules) 98 (98A, 98B, 98C, 98D).
The IO modules 98 may be removably connected to the base 94 in respective mounting slots via electrical connections, such that each of the IO modules 98 may be operatively coupled to the backplane 78. The IO modules 98 may use the backplane 78 to communicate with the network adapter 80, the other IO (sub)modules 98, 100, and the like. In one embodiment, at least two of the IO modules 98 are identical to each other and operated in parallel with each other to provide a redundancy with respect to each other. The base 94 may include at least one terminal block 102, which may include cage clamps, spring clamps, screw terminals, or other wiring connectors 104 that are adapted to be connected to field cables or field wiring 110 that are each associated with a field device 106. The field device 106 may be an analog or digital device such as a sensor, flow meter, switch, probe, thermocouple, RTD, encoder, or the like, and the field device 106 may receive input data or transmit output data via the terminal blocks 102.

The network adapter 80 may include independent “adapter” Ethernet switches 112 (112A, 112B), which may be operably connected to, form part of, and establish the backplane 78. Similarly, the IO devices 74 may include independent IO module Ethernet switches 114 (114A, 114B) that may be operably connected to, form part of, and establish the backplane 78. The switches 112 and the switches 114 may be identical but are numbered differently to facilitate description of their operation. The switches 112, 114 may perform a packet switching operation to direct data communication of any suitable backplane network/protocol.

The IO module 76 may be a single-channel IO device that includes one or more removable and replaceable single-channel IO submodules 100. The IO module 76 may include a base 118 adapted for mounting on a support rail 84 or another support structure. The base 118 may include multi-contact electrical connectors 96 to form a portion of the backplanes 78.
The IO module 76 may include a terminal block 120 connected to the base 116. The terminal block 120 may include wiring connectors 122 that couple the terminal block 120 to other industrial automation components. The IO module 76 may include Ethernet switches 124 (124A, 124B), each operably coupled to, forming part of, and establishing the backplane 78. The IO module 76 may include at least two configurable IO modules 126 (126A, 126B). The configurable IO modules 126 may each be defined by and include separate IO segments or IO submodules 100, which may each be selectively installed on and removable from the base 116. The configurable IO modules 126 may define a group of the IO submodules 100. In this example, the configurable IO modules 126 each include eight single-channel IO submodules 100, and thus include eight individual IO data channels.

The IO submodules 100 and/or the IO modules 108 may include electronic circuitry to perform a particular type of data input/output (IO) operation, such as a direct current (DC) input, DC output, alternating current (AC) input, AC output, safety input/output, highway addressable remote transducer protocol (HART) input/output, resistance temperature detector (RTD) and/or thermocouple input and/or output, or other analog or digital input/output for data and signals. Each IO submodule 100 and/or IO module 108 may be respectively used for a different type of data communication. Furthermore, each IO submodule 100 and/or IO module 108 may be associated with a single, dedicated IO data channel operably coupled to a group of one or more wiring connectors 122 of the terminal block 120 (e.g., a column of wiring connectors 122). In this way, a field device coupled to the one or more wiring connectors 122 may be associated with a particular IO data channel and may be operably connected to the corresponding IO submodule 100 associated with the same IO data channel.
As noted above, the IO submodules 100 and/or the IO modules 108 may be selected to be the appropriate IO type (e.g., analog, digital, AC input, AC output, DC input, DC output) as required for the particular field device connected to its associated IO data channel.

As described above, the industrial automation system 46 may transport and process materials that may be classified as hazardous by chemical regulatory agencies and that may be used to produce products worth millions of dollars cumulatively. Consequently, the industrial automation system 46 may be desired to be not only highly reliable but also highly available, such as to satisfy a minimum level of availability for petrochemical applications of 99.999% service availability. To do so, short and long term monitoring operations of the industrial automation system 46 may be used to perform predictive maintenance operations, as well as reactive or other maintenance activities. For example, by monitoring operations of individual components of the industrial automation system 46 alone or in combination with system-wide or unit-wide monitoring operations, maintenance issues may be predicted prior to a component, a unit, and/or the system going offline. Predictive monitoring may improve availability due to, for example, being able to schedule outages of a component, a unit, and/or the system when the monitoring operations have flagged a repair or replacement of the portion of the industrial automation system 46. Short and long term monitoring operations of the industrial automation system 46 may be managed by components of the distributed control system 48, for example, by commands to perform specific operations (and resulting data) being propagated throughout the various levels of the distributed control system 48.
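The 99.999% availability target can be related to MTTF and MTTR with the standard steady-state formulas, which also show why concurrent redundant supplies help. The numbers below (10,000-hour MTTF, 8-hour MTTR) are illustrative only:

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability of a single repairable unit:
    A = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def redundant_availability(unit_availability, n=2):
    """1-out-of-n availability for independent, concurrently operating
    units, where any single unit can carry the full load."""
    return 1.0 - (1.0 - unit_availability) ** n

a_single = availability(10_000, 8)          # one supply: ~99.92%
a_pair = redundant_availability(a_single)   # two concurrent supplies
```

With these illustrative figures, a single supply falls well short of five nines, while the redundant pair exceeds it, provided each unit really can carry the full load alone, which is exactly what the online validation test checks.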
An example group of components inFIG.4may represent a portion of the distributed modular IO system72ofFIG.3associated with the distributed control system48but used to control loads at a more local level, such as in the unit50or in a nearby unit50. The distributed modular IO system72may be used in combination with power ORing (OR IO) circuitry to predictively maintain power distribution throughout a portion of the industrial automation system46, thereby reducing a likelihood of an unplanned outage from occurring. To elaborate,FIG.4is a block diagram of some of the components of the distributed modular IO system72ofFIG.3. The power conditioners92may couple respectively to power supplies138(138A,138B) and may include power converting circuitry to transform an electrical property of signals received from the power supplies138before outputting electrical signals to the backplanes78. In some cases, the field devices106may include loads140(140A,140B,140C) and sensing devices142(142A,142B,142C). The loads140may additionally or alternatively couple to the IO submodules100and/or the IO modules108ofFIG.3via the backplanes78at power ORing input/output (OR IO) circuitry144(e.g., ORing circuits). The OR IO circuitry144may include power architectures that provide redundancy through the selection of multiple power sources. The OR IO circuitry144may be implemented using a variety of architectures that involve diodes, metal-oxide-semiconductor field-effect transistors (MOSFETs), and the like. Generally, the OR IO circuitry144may include multiple power or voltage inputs that may be ORed to a common output. As such, in the event that one input supply becomes unavailable, the OR IO circuitry144protects the connected load by providing power via a redundant source. Additional communication paths may be included between the loads140and the power conditioners92.
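The ORing behavior described above, in which multiple inputs are ORed to a common output so the load keeps power if one supply fails, can be illustrated with a minimal model. This is an illustrative sketch, not circuitry from the disclosure; the function name and the `drop` parameter (modeling the forward voltage drop of a diode stage) are assumptions.

```python
def or_io_output(input_voltages, drop=0.0):
    """Model of an ideal power-ORing stage: the common output follows
    the highest available input, so losing one supply does not
    interrupt the connected load. A nonzero `drop` models the forward
    voltage drop of a diode-based stage; MOSFET-based ORing stages
    have a near-zero drop. Offline inputs are passed as 0.0."""
    live = [v - drop for v in input_voltages if v > drop]
    return max(live) if live else 0.0

# Both backplane rails present: the output tracks the higher rail.
print(or_io_output([15.0, 15.0], drop=0.7))
# One rail lost: the redundant source still powers the load.
print(or_io_output([0.0, 15.0], drop=0.7))
```

A MOSFET-based stage behaves like `drop=0.0` while still blocking reverse current, which is one reason transistor-based architectures may be preferred over plain diodes.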
The OR IO circuitry144may automatically switch between backplanes78in response to detecting that a power supply is providing less than a threshold amount of power or voltage on one of the backplanes78. The loads140may each receive analog or digital signals from the backplane78A, from the backplane78B, or both at the OR IO circuitry144. Signals sent via the backplanes78may change individual operation of the loads140. For example, a load140may be a system interface for a downstream IO module coupled to the load140, and thus may supply power from the power conditioners92to the system side of downstream components coupled to the load140. IO circuitry146of the power conditioners92may be used to provide feedback, such as via DC signals transmitted between the IO module98(control) and the power conditioners92A,92B. The feedback may relate to a status of the load140and/or of components coupled to the load140, such as whether one or both connections to the backplane78are lost, or other suitable control operation related statuses or data. The IO module98(control) or the IO module98(system) may be programmed, and operated, as a control system of the distributed modular IO system72, and/or other control circuitry may be included. For ease of description, the IO module98(control) is described herein as a primary control system of at least this portion of the distributed modular IO system72. In some cases, the IO module98(system) may coordinate received signals and/or operations of the IO module98(control) with larger system operations by communicating with other portions of the distributed modular IO system72. The sensing devices142may sense operating parameters (e.g., speed, current, output voltage) of the load140and/or of ambient conditions150that may affect operation of the industrial automation system46.
The sensing devices142may acquire the sensed data and may output the sensed data to other control circuitry via input/output (IO) circuitry146, such as control and processing circuitry described inFIG.3. The sensed data may be of any suitable format, and thus may include one or more analog electrical signals, digital data signals, pulse-width-modulated data signals, or the like. After the IO module98(system) receives the sensed data from the sensing devices142, the IO module98(system) may transmit the sensed data to the IO module98(control). The IO module98(control) may analyze the sensed data to determine one or more outputs to send to the loads140. In some cases, this includes provision of a command to one or more of the power conditioners92to change how current is output from the power conditioners92coupled to the loads140as a redundant pair. The power conditioners92may provide current concurrently to the loads140. As such, to test output currents from one of the power conditioners92, one of the power conditioners92may provide power to the load140, while the other power conditioner92does not. As a result, the loads140may connect to the power conditioners92and avoid an increased risk of failure if one of the connected power conditioners92is unable to provide the appropriate power. With this in mind, the present embodiments described below include output balancing operations and diagnostic operations of the power conditioners92, which may help to maintain balanced outputs from the redundant power conditioners92, trigger an intentional imbalance to diagnose whether both of the power conditioners92are able to individually supply the load140in the event that one goes offline, and detect when one of the power conditioners92is offline. Communication pathways with the one or more loads140may also enable the power conditioners92or other upstream control circuitry to diagnose or monitor one or more loads140.
In the event that one of the backplanes78is offline, control circuitry of the power conditioners92and/or of the loads140may continue to receive power from the other of the backplanes78, thereby avoiding operational interruptions from losing power and from having to switch the components over to the other backplane78. Components, including the control circuitry of the power conditioners92and/or of the loads140, may be coupled to the two or more backplanes78using power ORing circuitry (e.g., OR IO circuitry144).FIG.5provides an example of how the two or more backplanes78and communication pathways may be interconnected to the OR IO circuitry144, the power conditioners92, and/or the loads140. To elaborate,FIG.5is a block diagram of the power conditioners92, the loads140, and the OR IO circuitry144(144A,144B,144C,144D,144E,144F,144G,144H,144I,144J). The power conditioners92may include power conditioner control systems176(176A,176B). The loads140may include load control systems182(182A,182B,182C). The power conditioners92may be coupled to the backplanes78via the OR IO circuitry144G-J, such as to power the power conditioner control systems176and the load control systems182. Each of the loads140may couple to each backplane78via the OR IO circuitry144A-F. Although described herein as a two-backplane redundant system, it should be understood that any number of backplanes greater than two may be used to provide additional redundancy to the distributed control system48. The power conditioners92may include a dual power feed for powering each of the power conditioner control systems176A,176B of the power conditioners92. Some loads140may not include load control systems182. The backplane78A may be fed internally in the power conditioner92A, and the backplane78B may be fed from the partner power conditioner92B. Each power conditioner92may include one of power converters178(178A,178B) to independently convert input power into output power.
Power conditioner control systems176A,176B may control respective outputs from the power conditioners92. For example, the power conditioner control system176A may instruct a change in output from the power converter178A, which changes the output from the power conditioner92A to the backplane78A. The power converters178may be any suitable type of power converter, such as an AC-to-DC converter, a DC-to-DC converter, an AC-to-AC converter, a DC-to-AC converter, a diode rectifier, one or more static switches, or the like. The power converter178as shown is a DC-to-DC converter that steps down 24V input to 15V output on the backplanes78. The input power may be received from power supplies138coupled upstream from the power conditioner92. The power supplies138may couple to the power conditioner92via respective converters to further step down or convert power supplied to the power conditioner92. These converters may be additional to the power converters178included in the power conditioner92. The power converters178may be thought of as local power output control that may be tailored to operation of loads140coupled to the backplane, while a converter coupled external to the power conditioner92and downstream of a power supply138may prepare supply signals for use by the power conditioner92. Each power conditioner92may be independently coupled to respective power supplies138to reduce a likelihood of a common cause fault taking downstream system components offline. In some cases, the OR IO circuitry144is also included between the power supplies138and the power conditioner92to provide each power conditioner92with concurrent and redundant sources for power. By using the OR IO circuitry144, the power conditioners92may provide redundant and concurrent amounts of power to the loads140via the backplane78. The OR IO circuitry144may include circuitry to power-OR supply inputs or outputs to or from the associated device.
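The 24V-to-15V step-down mentioned above follows the ideal buck-converter relation Vout = D × Vin, where D is the duty cycle. A minimal sketch under a lossless assumption; the function name is illustrative:

```python
def buck_duty_cycle(v_in, v_out):
    """Duty cycle D for an ideal (lossless) buck converter, from the
    relation V_out = D * V_in. Real converters regulate D with a
    feedback control loop to hold the output at its setpoint."""
    if not 0.0 < v_out <= v_in:
        raise ValueError("a buck converter can only step voltage down")
    return v_out / v_in

# Stepping the 24V input down to the 15V backplane rail:
print(buck_duty_cycle(24.0, 15.0))  # 0.625
```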
Indeed, merely paralleling power supplies without additional protection may lead to interrupted operation when a fault occurs, when another device is removed, or the like. Thus, the OR IO circuitry144may include devices that enable inrush current and/or fault current protection. Some OR IO circuitry144may include semiconductor diodes, and some OR IO circuitry144may include transistor-based current protections. Generally, the internal circuitry of the OR IO circuitry144prevents backflow of current against the direction of intended delivery (e.g., current transmitted toward the loads140from the power conditioner92). Example OR IO circuitry144is described later with reference to at leastFIG.9. Communication paths180may also couple the power conditioner92to the loads140. The loads140are shown as interconnected using a daisy chain communication pathway, in which each subsequent load of the loads140connects back to the IO module98(control) via an adjacent load of the loads140. It is noted that any suitable communication architecture may be used to interconnect the loads140with upstream control circuitry of the distributed control system48. Moreover, in some cases, the backplane78may include a structure able to carry both power supply paths and the communication paths180. For instance, a communication path180A may couple the power conditioner92A to the load140A, which may be an adapter module. A communication path180B may couple the power conditioner92B to the load140A. The load140A may couple to downstream loads140via daisy chained communication paths180. For example, the load140A is coupled to the load140B via the communication path180C, and the load140B is coupled to the load140C via the communication path180D. The loads140may periodically send data to the load140A, acting as the adapter, to enable the load control system182A to monitor operating conditions and/or statuses of each of the loads140.
The load140A may use the data from one or more of the loads140to diagnose a particular operation as isolated to one load140or as instead affecting multiple loads140. This data and diagnostic capability may enable the upstream control circuitry, like the adapter modules86, to diagnose a particular operation as isolated to one backplane78or as instead affecting multiple backplanes78. Input currents for the backplanes78may also be used during a load share diagnostic operation to determine whether an input current shifts to the proper rail when the power conditioners92force a voltage imbalance between the backplane78A and the backplane78B. Keeping the above descriptions in mind, the power conditioners92may themselves operate to balance the output of electrical signals to the backplanes78. In other words, the power conditioner control system176A may balance a first power output from the power converter178A with a second power output from the power converter178B provided to each of one or more loads140(e.g., load components) electrically coupled to the first backplane78A and the second backplane78B. Likewise, when one of the power conditioners92detects an imbalance between electrical signals provided to the backplanes78, the power conditioners92may sometimes adjust operations of the power converter178A, the power converter178B, or both to balance the outputs such that each power converter178contributes 50% of the load current to the loads140. Imbalance and/or balance conditions may be identified by the power conditioners92comparing sensed data to threshold amounts, or expected sensed data values. The balancing operation may involve changing an output voltage from one or more of the power converters178to trigger a change in current that results over time in a 50/50 sharing (e.g., equal proportion) of the current used to power the loads140. It should be understood that other target proportions may be used when balancing or sharing the load.
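The sharing targets described above (50/50 by default, or an alternative proportion) can be checked by comparing each conditioner's contribution against its target share. A hypothetical helper, not part of the disclosure:

```python
def share_error(i_a, i_b, target_a=0.5):
    """Deviation of conditioner A's contribution from its target share
    of the total load current (0.5 for 50/50 sharing). Positive means
    A is carrying more than its target; zero means balanced."""
    total = i_a + i_b
    if total == 0.0:
        return 0.0
    return i_a / total - target_a

print(share_error(4.0, 4.0))                # 0.0: balanced at 50/50
print(share_error(6.0, 4.0))                # ~0.1: A carries 60% against a 50/50 target
print(share_error(6.0, 4.0, target_a=0.6))  # 0.0: balanced at a 60/40 target
```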
For example, some systems may use a 60/40 or 40/60 sharing target of power consumption of the loads140, amongst the power conditioners92, or any suitable target proportion. To help explain this balancing operation,FIG.6is a flow diagram of a process200for operating the primary power conditioner92to control a power output based on a power output from the secondary power conditioner92. The power conditioners92may independently monitor and share power output information with each other. When a diagnostic operation is not also running, each power conditioner92adjusts its respective backplane voltage linearly, cross-comparing with the partner module, until their contributions to the total system power are equal. The process200is described as being performed by the primary power conditioner92, and it should be understood that substantially similar operations are able to be performed by the secondary power conditioner92, another control system associated with the distributed control system48(e.g., IO module98(control)), or the like. These operations may be performed in response to processing circuitry of the power conditioner control system176A executing instructions stored in a tangible, non-transitory, computer-readable medium, such as a memory of the power conditioner control system176A, or another suitable memory. Moreover, the operations of the process200are shown in a particular order; however, some of the operations may be performed in a different order than what is presented or omitted altogether. Certain voltage and current values are described herein, but it should be understood that these are example values and example ranges, which may be adjusted for specific systems and implementations. At block202, the power conditioner92A may receive a first indication of a first voltage and a first current output that may be present on the backplane78A.
The power conditioner92A may include sensing devices that measure the first voltage and the first current output of the backplane78A and provide the power conditioner92A with the sensed data. At block204, the power conditioner92A may receive a second indication of a second voltage and a second current output on the backplane78B from the power conditioner92B. The power conditioner92B may include sensing devices that measure the second voltage and the second current output of the backplane78B. The power conditioner92B may transmit the sensed data to the power conditioner92A via the communication paths180E. At block206, the power conditioner92A may determine a difference between a power supplied to the backplane78A and a power supplied to the backplane78B. The power conditioner92A may determine the power supplied based on the received indications of voltage and currents from the sensing devices and the power conditioner92B. In some cases, the power conditioner92B may determine the power supplied to the backplane78B and transmit an indication of the power to the power conditioner92A. At block208, the power conditioner92A may determine whether the difference determined at block206is greater than or equal to a threshold value. The threshold value may indicate a suitable amount of deviation or imbalance between the amounts of power supplied to the different backplanes78. For example, a 2V difference between the outputs may be suitable while a 10V difference may be unsuitable. Any difference may be used based on the particular application. When the difference is not greater than the threshold, the power conditioner92A may deem the power supplied as balanced and may determine to not make an adjustment. Thus, at block202, the power conditioner92A may receive a new indication of voltages and currents supplied via the backplanes78.
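Blocks202-208above reduce to computing the power supplied to each backplane from the reported voltage and current indications (P = V × I) and comparing the difference to a threshold. A sketch with hypothetical names and example values:

```python
def power_imbalance(v_a, i_a, v_b, i_b):
    """Power supplied to each backplane (P = V * I) and the absolute
    difference between the two, per blocks 202-206 of process 200."""
    p_a = v_a * i_a
    p_b = v_b * i_b
    return p_a, p_b, abs(p_a - p_b)

# Backplane A at 15V/8A, backplane B at 15V/6A:
p_a, p_b, diff = power_imbalance(15.0, 8.0, 15.0, 6.0)
print(p_a, p_b, diff)  # 120.0 90.0 30.0
# Block 208: compare the difference against a threshold value.
print(diff >= 25.0)    # True -> deemed imbalanced, adjust at block 210
```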
When the difference is greater than the threshold, the power conditioner92A may deem the power supplied as imbalanced and make an adjustment to try to rebalance the outputs. Thus, at block210, the power conditioner92A may generate a control signal to cause the power converter178A to adjust the first voltage output and/or to cause the power converter178B to adjust the second voltage output based on the difference. The power conditioner92A may trigger a larger adjustment in proportion to the amount by which the difference exceeds the threshold value. The adjustment made may span multiple days, months, or even years, so as not to introduce transient signal spikes into the distributed control system48. For example, the overall balancing adjustment may be relatively slow when compared to a voltage regulation control loop used by the converter178, and smaller balancing power output targets may be used to shift operation toward balanced outputs incrementally over time. The voltage regulation control loop may control voltage output from the power converter178at a rate on the order of microseconds per cycle. In some cases, power used by the loads140may be dynamic, so it may be more efficient to maintain average power over a relatively long period of time (e.g., hours, days, months, years). By the time the incremental adjustment is applied over the timespan, the outputs from the power conditioners92may be balanced, with each power conditioner92supplying equal amounts of average power. Having multiple power conditioners92also may enable diagnostic testing to detect presence and location of a fault, such as a fault affecting one or both of the backplanes78. In some cases, the power conditioners92may receive sensing data (e.g., sensor data) from the OR IO circuitry144of the loads140to further diagnose locations of faults or operations of the backplanes78.
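The slow, incremental rebalancing described above can be sketched as a single step of the control loop: when the imbalance exceeds the threshold, the setpoint is nudged by a small increment rather than corrected all at once, avoiding transients. The function name, step size, and units are illustrative assumptions, not values from the disclosure:

```python
def balance_step(p_a, p_b, v_setpoint_a, threshold=2.0, step_v=0.01):
    """One pass of the balancing loop of process 200 (sketch): nudge
    conditioner A's output-voltage setpoint by a small increment so
    the power shares converge gradually over many passes, instead of
    applying one large, transient-inducing correction."""
    if abs(p_a - p_b) < threshold:
        return v_setpoint_a            # block 208: balanced, no change
    if p_a < p_b:
        return v_setpoint_a + step_v   # A under-contributing: raise setpoint
    return v_setpoint_a - step_v       # A over-contributing: lower setpoint

# A supplies less power than B, so its setpoint creeps upward.
print(balance_step(100.0, 110.0, 15.0))  # ~15.01
```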
For example, a load140receiving a lower amount of current than an expected amount of current may indicate a partial fault or some other misoperation to one or more of the power conditioners92, which may transmit an indication to upstream control circuitry to further debug or alert an operator to the issue. To elaborate on fault detection operations,FIG.7is a flow diagram of a process220for operating the power conditioner92A to detect operational statuses of the respective backplanes78. The process220is described as being performed by the power conditioner92A (e.g., power conditioner control system176A of the power conditioner92A), and it should be understood that substantially similar operations are able to be performed by the power conditioner92B, another control system associated with the distributed control system48(e.g., IO module98(control)), or the like. These operations may be performed in response to processing circuitry of the power conditioner control system176A executing instructions stored in a tangible, non-transitory, computer-readable medium, such as a memory of the power conditioner control system176A, or another suitable memory. Moreover, the operations of the process220are shown in a particular order; however, some of the operations may be performed in a different order than what is presented or omitted altogether. Certain voltage and current values are described herein, but it should be understood that these are example values and example ranges, which may be adjusted for specific systems and implementations. At block222, the power conditioner92A may receive a first current measurement from a first sensing device. The first sensing device may be disposed at an input to the backplane78A, at one of the loads140, or at another downstream device coupled to the backplane78A. The first current measurement may indicate an output current on the backplane78A. 
At block224, the power conditioner92A may receive a second current measurement from the power conditioner92B. The second current measurement may indicate an output current on the backplane78B. The power conditioner92B may receive the second current measurement from a second sensing device. The second sensing device may be disposed at an input to the backplane78B, at one of the loads140, or at another downstream device coupled to the backplane78B. At block226, the power conditioner92A may compare the first current measurement and the second current measurement, respectively, to an expected current measurement. The expected current measurement may indicate an amount of current expected during normal operation of each of the backplanes78. At block228, the power conditioner92A may determine whether the first current measurement equals zero or is within a threshold range from zero and whether the second current measurement equals the expected current or is within a threshold range from the expected current. When the first current measurement equals zero and the second current measurement equals the expected current, at block230, the power conditioner92A may generate a backplane A fault alert that indicates that the backplane78A may be or is experiencing a fault. For example, the power conditioner92A may transmit a control signal via the distributed control system48to trigger an alert to be generated for an operator, to trigger an update to an HMI, to perform a shutdown or startup operation, or the like. Otherwise, the power conditioner92A continues to, at block232, determine whether the first current measurement and the second current measurement equal zero or are within a threshold range from zero. When the first current measurement and the second current measurement equal zero, at block234, the power conditioner92A may generate a system-wide fault alert that indicates that multiple backplanes78may be or are experiencing a fault.
Otherwise, the power conditioner92A continues to, at block236, determine whether the second current measurement equals zero or is within a threshold range from zero and whether the first current measurement equals the expected current or is within a threshold range from the expected current. When the second current measurement equals zero and the first current measurement equals the expected current, at block238, the power conditioner92A may generate a backplane B fault alert that indicates that the backplane78B may be or is experiencing a fault. When none of the conditions at block228, block232, or block236are met, the power conditioner92A may, at block240, generate a normal operation alert that indicates that the backplanes78are operating as expected. At block242, the power conditioner92A may perform an operation based on the generated alert. For example, the power conditioner92A may transmit a control signal via the distributed control system48to trigger an alert to be generated for an operator, to trigger an update to an HMI, to perform a shutdown or startup operation, or the like. In some cases, at blocks228,232, and236, the power conditioner92A may determine, in addition to checking whether the first current measurement and/or the second current measurement equal zero, whether the first current measurement and/or the second current measurement are within a threshold range from zero and/or are outside a threshold range from an expected current. The threshold range from the expected current used for fault detection may be set greater than, equal to, or less than the threshold range used to verify correct operation of the backplanes78. Different threshold levels than the ones described herein may be used to trigger different types of alerts.
Comparing, at blocks228,232, and/or236, the first current measurement to zero, to a threshold range from zero, and/or to a threshold range from an expected current may enable the power conditioner92A to determine whether the first current measurement indicates a fault condition on the backplane A corresponding to a fault in the current sense circuitry (e.g., when the first current measurement equals zero) or an over voltage or misoperation of the current sense circuitry (e.g., when the first current measurement is outside the first threshold range from the expected current). Comparing, at blocks228,232, and/or236, the second current measurement to zero, to a threshold range from zero, and/or to a threshold range from an expected current may enable the power conditioner92A to determine whether the second current measurement indicates a fault condition on the backplane B corresponding to a fault in the current sense circuitry (e.g., when the second current measurement equals zero) or an over voltage or misoperation of the current sense circuitry (e.g., when the second current measurement is outside the first threshold range from the expected current). The power conditioner92A may reuse the results from the determination at block228at the determination operations of blocks232,236, which may improve efficiency of the determinations by not repeating the same comparison. Similar to the determinations of block228, the power conditioner92A may then determine whether the second current measurement equals zero, is within a threshold range from zero, or is outside a first threshold range from the expected current. Similar operations may be performed by the power conditioner92B. In some cases, separate determinations are performed for both determining whether the first current measurement equals zero and determining whether the first current measurement is outside the first threshold range, and the separate determinations may result in separate alerts being generated.
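The decision logic of blocks228-240can be summarized as a classification over the two current measurements. A sketch with assumed names; the single tolerance stands in for the threshold ranges described above:

```python
def classify_backplanes(i_a, i_b, expected, tol=0.1):
    """Classify backplane status per process 220 (sketch): a rail near
    zero while its partner carries the expected current indicates a
    fault on that rail (blocks 228-230, 236-238); both rails near zero
    indicates a system-wide fault (blocks 232-234); otherwise normal
    operation (block 240). `tol` models the threshold ranges."""
    zero_a = abs(i_a) <= tol
    zero_b = abs(i_b) <= tol
    expected_a = abs(i_a - expected) <= tol
    expected_b = abs(i_b - expected) <= tol
    if zero_a and expected_b:
        return "backplane A fault"
    if zero_a and zero_b:
        return "system-wide fault"
    if zero_b and expected_a:
        return "backplane B fault"
    return "normal operation"

print(classify_backplanes(0.0, 5.0, expected=5.0))  # backplane A fault
print(classify_backplanes(0.0, 0.0, expected=5.0))  # system-wide fault
print(classify_backplanes(5.0, 5.0, expected=5.0))  # normal operation
```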
Using separate determinations may enable the power conditioner92to generate different alerts that particularly indicate whether it was a fault in the current sense circuitry, an over voltage of the current sense circuitry, or a misoperation of the current sense circuitry that triggered the generation of the alert. Having multiple power conditioners92may enable diagnostic testing to detect whether one or both of the power conditioners92are capable of supplying 100% of current to the loads140, such as to predict before one of the power conditioners92goes offline whether the other power conditioner92is capable of supplying the loads140. Without testing load supplying capabilities of both of the power conditioners92, when it comes time to call on either power conditioner92to supply the full load current, the power conditioner92may be unable to output the full load current. By testing using operations described inFIG.8, one of the power conditioners92may adjust (e.g., increase, decrease) its output current to enable the operational verification of the other power conditioner92output without turning off or disconnecting either power conditioner92. This may improve operation of the industrial automation system46by enabling predictive maintenance operations to occur without introducing additional switching transients to do so. To do so, the power conditioner control system176A may cause a first output power and a second output power provided to the loads140to be imbalanced in response to determining that the first difference is less than the threshold amount. Thus, the power conditioner control system176A may incrementally shift the supply of the load140more to the power conditioner92B for testing until reaching the threshold amount of difference. To elaborate,FIG.8is a flow diagram of a process252for operating the power conditioners92to perform a diagnostic operation.
The power conditioners92may independently monitor and share power output information with each other when operating outside of performing the diagnostic operation. At some time, the power conditioners92may start a diagnostic in which one power conditioner92raises the output voltage (e.g., by adjusting a digital-to-analog converter (DAC) voltage from the microprocessor of the power converter178) while monitoring an output current or voltage. Using feedback from on-board current and voltage measurements from sensing devices (e.g., measurements taken by one or more sensing devices disposed such as to measure signals at points 1-7 inFIG.9) as well as that of the partner power conditioner92, the distributed modular IO system72settles in a position where one power conditioner92is totally handling the loads140until the diagnostic is terminated. The process252is described as being performed by the power conditioner92A and it should be understood that substantially similar operations are able to be performed by either power conditioner92, another control system associated with the distributed control system48(e.g., IO module98(control)), or the like. These operations may be performed in response to processing circuitry of the power conditioner control system176A executing instructions stored in a tangible, non-transitory, computer-readable medium, such as a memory of the power conditioner control system176A, or another suitable memory. Moreover, the operations of the process252are shown in a particular order; however, some of the operations may be performed in a different order than what is presented or omitted altogether. Certain voltage and current values are described herein, but it should be understood that these are example values and example ranges, which may be adjusted for specific systems and implementations. At block254, the power conditioner92A may receive an instruction to start a diagnostic operation on the power conditioner92B. 
An IO module98may generate the instruction in response to a condition being met to trigger the diagnostic operation, in response to user input, in response to a duration of time passing, or the like. The diagnostic operation may involve verifying that the power conditioner92B can generate enough current to support the loads140in the event that the power conditioner92A goes offline. Other diagnostic operations may benefit from load shifting between the backplanes78and thus may additionally or alternatively be initiated with this process as well. The instruction may indicate a request to verify that the power converter178B is capable of supplying the one or more loads140and the power conditioner control systems176with a target current value while the first current on the first backplane78A and the second current on the second backplane78B are imbalanced, as may occur when the first current on the first backplane78A is negligibly small or 0 amps [A] and the second current on the second backplane78B increases in value to compensate for current draw of the load components (e.g., one or more loads140and the power conditioner control systems176). In response to the instruction, at block256, the power conditioner92A may generate a control signal to transmit to the power converter178A that adjusts the first voltage output in response to the control signal. The power converter178A may decrease the first voltage output in response to the control signal to decrease the proportion of the loads140powered by the power conditioner92A. At block258, the power conditioner92A may measure a current output from the power converter178A to the backplane78A. At block260, the power conditioner92A may transmit an indication of the power output via the backplane78A and/or indications of a voltage and current output via the backplane78A to the power conditioner92B.
The voltage output to the backplane78A may match the voltage adjusted to in block256, and thus the power conditioner92A may not repeat sensing of the voltage output. At block262, the power conditioner92A may determine a difference between the current output from the power conditioner92A and a desired load current to be output by the power conditioner92A during the diagnostic. At block264, the power conditioner92A may determine whether the difference is greater than or equal to a threshold value. The threshold value may correspond to a current difference to maintain while the diagnostic test is being performed. For example, the threshold value may equal a value corresponding to the power conditioner92B wholly powering the load140and the power conditioner92A not powering the load140. Until the difference is greater than or equal to the threshold value, and thus until the power conditioner92A has passed the load140onto the power conditioner92B, the power conditioner92A may continue, at block256, to generate a control signal to transmit to the power converter178A, which in turn continues to reduce the current output on the backplane78A in response to the control signal. In this way, the power conditioner control system176A of the power conditioner92A may first determine that the difference at block264is less than the threshold, repeat instructing the power converter178A to increase the output voltage, resulting in a further increased output voltage to the backplane78A, and repeat the difference determination operations of blocks262,264to determine whether that most recent adjustment was sufficient to increase the output current from the power conditioner92A to the desired load current. When one or more additional increases are made to the output voltage and the output current to the backplane78A reaches the threshold value, the backplane78B may take over supplying current to the loads140and the process252may continue to block268.
When the difference is greater than or equal to the threshold value, the power conditioner92A may, at block268, generate a control signal to perform the diagnostic or to instruct the power conditioner92B to perform the diagnostic. The diagnostic operation may involve testing a current output from the power conditioner92B to the backplane78B to verify that the power conditioner92B is able to supply a desired amount of power to a load. At conclusion of the diagnostic operation, the power conditioner92A may generate a control signal to instruct the power converter178A to return to the higher level of current output, such that the loads140on the backplanes78are equally supplied current from both power conditioners92. In some cases, the power conditioner control system176A of the power conditioner92A may generate and transmit an alert to notify the distributed control system48that the power conditioner92B is capable of operating the power converter178B to output the target current output used to supply the loads140with 100% of a load current. In some cases, OR IO circuitry144is included in the load140A as well as in the power conditioners92.FIG.9shows example circuitry of the OR IO circuitry144that may be used in one or more of the loads140, one or more of the power conditioners92, or in other components coupled to the one or more backplanes78. It is noted thatFIG.9is an example of components and more or fewer components may be included in the OR IO circuitry144. To elaborate,FIG.9is a block diagram of the OR IO circuitry144A,144B of the load140A. This OR IO circuitry144is described as being part of the load140but could be part of any power conditioner92, any of the loads140, or any other component connected to either of the backplanes78. The OR IO circuitry144A,144B may include electronic fuse(s)280(e.g., "efuse"), drivers282, one or more semiconductor devices, such as MOSFETs or diodes284, switches286, and current sensing amplifiers288.
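The adjust-measure-compare loop of blocks256,258,262, and264can be sketched in code. This is a hypothetical illustration only, not the patented implementation: the function names, the fixed voltage step, the iteration budget, and the simple relationship assumed between voltage and current in the usage example are all invented for the sketch.

```python
def shift_load_for_diagnostic(set_output_voltage, read_output_current,
                              desired_load_current, threshold,
                              start_voltage=24.0, step=0.1, max_iters=1000):
    """Illustrative sketch of blocks 256-264 of process 252: adjust one
    power conditioner's output voltage, measure the resulting backplane
    current, and repeat until the difference between the measured current
    and the desired load current meets the threshold (hypothetical names)."""
    voltage = start_voltage
    for _ in range(max_iters):
        set_output_voltage(voltage)        # block 256: control signal to the power converter
        current = read_output_current()    # block 258: on-board current measurement
        difference = abs(current - desired_load_current)
        if difference >= threshold:        # block 264: load has been passed over
            return voltage                 # process would continue to block 268
        voltage += step                    # otherwise adjust and re-check
    raise RuntimeError("load did not shift within the iteration budget")
```

The two callbacks stand in for the DAC adjustment and the on-board sensing devices described above; in a real system, the step size and termination condition would be tuned to the hardware.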
By including the OR IO circuitry144A,144B between the supplying backplanes78and downstream components of the load140A, a likelihood of the load140losing power and going offline is reduced. For example, the load140A receives power when both the backplanes78are on, when just the backplane78A is on, and when just the backplane78B is on. The load140A may only lose supply from the backplanes78when both backplanes78are off. Currents may be prevented from back transmitting via the backplanes78based at least in part on the relative biasing of the diodes284(e.g., a forward-biased diode284conducts from its anode side to its cathode side and blocks current in the reverse direction). Capacitors290may be included to help smooth a switching off or on of the output from the OR IO circuitry144A,144B. Fuses292(292A,292B) may couple between the OR IO circuitry144and the backplane78input to the load140A to provide additional overcurrent protection. The fuses292and the electronic fuses280may provide redundant protection, such as redundant over-current protection, such that if one misoperates, the other automatically protects. The electronic fuses280and the diodes284may provide redundancy in the power ORing in case one of the diodes284misoperates. The electronic fuse280may include one or more fuses on a local computing circuit. This may permit dynamic, real-time reprogramming of the computing circuit based on the positioning, and repositioning over time, of the one or more fuses. By using the electronic fuse280, the operation of the driver282may be changed while the load140A is in operation. For example, the driver282may perform undervoltage and/or overvoltage protection operations, overcurrent protection operations, and/or soft start control operations based on the state of the fuses of the electronic fuse280.
Undervoltage and/or overvoltage protection operations may involve the electronic fuse280changing state in response to a sensed voltage condition and, responsive to the change of state, opening the switches286. The switches286may be any suitable type of switch, such as a metal-oxide-semiconductor field-effect transistor (MOSFET) switch. Different thresholds may be used to detect the undervoltage condition and the overvoltage condition. When the sensed voltage is greater than the threshold value corresponding to the overvoltage condition, the switches286may be opened. When the sensed voltage is less than the threshold value corresponding to the undervoltage condition, the switches286may be opened. Opening the switches286may stop current from transmitting to the downstream devices of the load140. Overcurrent protection operations may involve fuses292and/or the electronic fuse280changing state in response to a sensed current condition exceeding a threshold amount of current and, responsive to the change of state, opening the switches286. Soft start control operations may enable steady switching operations according to a soft start profile to slowly and carefully bring components of the load140online while minimizing an introduction of switching transients into the supplied power. Soft start control operations may be used in combination with many different types of devices, such as silicon carbide circuit breakers or control circuitry, which may be structurally advantageous to use for soft start control operations from the incorporation of solid state switching technologies. In some systems, the power conditioner92A may increase the voltage output to shift the load wholly onto the power conditioner92A. Similar operations apply for diagnosing full load capabilities of the power conditioner92B.
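The protection rules above reduce to threshold comparisons that decide whether the switches286should be opened. A minimal sketch, assuming hypothetical threshold values and that a single predicate governs the switch state:

```python
def switches_should_open(voltage, current,
                         uv_threshold, ov_threshold, oc_threshold):
    """Illustrative predicate for the OR IO circuitry protection rules:
    open the switches on overvoltage, undervoltage, or overcurrent.
    Threshold values are assumptions, not from the disclosure."""
    if voltage > ov_threshold:   # overvoltage condition
        return True
    if voltage < uv_threshold:   # undervoltage condition
        return True
    if current > oc_threshold:   # overcurrent condition
        return True
    return False                 # normal operation: switches stay closed
```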
That is, either of the power conditioners92may diagnose whether its circuitry is capable of supplying the whole load with current in the event that the other power conditioner92were to go offline. Indeed, either of the power converters178may raise its output to a current and/or voltage set point and then verify that the power has shifted to the corresponding power conditioner92. Either set point may correspond to a maximum rated output voltage and/or maximum rated output current for the specific power converter178. For example, the power converter178A may increase the power output to the backplane78A to a rated power output level (e.g., maximum power output) and consequently cause the loads140to shift to being wholly supplied by the power conditioner92A via the backplane78A. To elaborate,FIG.10is a flow diagram of a process304for operating the power conditioners92to perform a diagnostic operation. The power conditioners92may independently monitor and share power output information with each other when operating outside of performing the diagnostic operation. At some time, the power conditioners92may start a diagnostic in which one power conditioner92raises the output voltage (e.g., by adjusting a DAC voltage from a microprocessor of the power converter178) to a maximum output voltage level for the corresponding power converter178to shift the loads140to that power conditioner92. If the difference between the power output from that power conditioner92and a target power output used to supply the loads140is within tolerance, then that power conditioner92may be deemed to be capable of wholly supplying the loads140. It is noted that although process304is described relative to a target power output, in some cases, operation of the power conditioners92may be verified relative to a target current output, a target voltage output, or the like to confirm that each power conditioner92is able to supply the full load without the other.
The process304is described as being performed by the power conditioner92A and it should be understood that substantially similar operations are able to be performed by either power conditioner92, another control system associated with the distributed control system48(e.g., IO module98(control)), or the like. These operations may be performed in response to processing circuitry of the power conditioner control system176A executing instructions stored in a tangible, non-transitory, computer-readable medium, such as a memory of the power conditioner control system176A, or another suitable memory. Moreover, the operations of the process304are shown in a particular order; however, some of the operations may be performed in a different order than what is presented or omitted altogether. Certain voltage values, current values and/or power values are described herein, but it should be understood that these are example values and example ranges, which may be adjusted for specific systems and implementations. At block306, the power conditioner92A may receive an instruction to start a diagnostic operation on itself. An IO module98may generate the instruction in response to a condition being met to trigger the diagnostic operation, in response to user input, in response to a duration of time passing, or the like. The diagnostic operation may involve verifying that the power conditioner92A can generate enough current to support the loads140in the event that the power conditioner92B goes offline. In response to the instruction, at block308, the power conditioner92A may generate and transmit a control signal to the power converter178A that adjusts the first voltage output in response to the control signal, such as to a maximum available voltage output level. The power converter178A may increase the first voltage output in response to the control signal to increase the proportion of the loads140powered by the power conditioner92A. 
At block310, the power conditioner92A may measure a power output from the power converter178A to the backplane78A. To do so, the voltage output to the backplane78A may match the voltage adjusted to in block308, and thus the power conditioner92A may not repeat sensing of the voltage output and instead measure the output current to the backplane78A. At block312, the power conditioner92A may determine a difference between the power output from the power conditioner92A and a desired power output to be output by the power conditioner92A during the diagnostic. The desired power output may correspond to a power output to be used to suitably supply the loads140without contribution from the power conditioner92B. At block314, the power conditioner92A may determine whether the difference is less than or equal to a threshold value. The threshold value may correspond to a power difference tolerance between the desired power output to be maintained to shift the load140to the backplane78A while the diagnostic test is being performed and the actual power output. For example, it may be permitted to not have an exact match between the desired power output and the actual power output, and the value of the threshold may represent a tolerated range of values. If the difference is less than or equal to the threshold, at block316, the power conditioner92A may generate an alert to indicate that the power has shifted to the power conditioner92A, and thus that the power conditioner92A is capable of supplying the loads140if the power conditioner92B were to go offline. However, if the difference is not less than or equal to the threshold (e.g., is greater than the threshold), at block318, the power conditioner92A may generate an alert to indicate that the power has not shifted to the power conditioner92A, and thus that the power conditioner92A is incapable of supplying the loads140if the power conditioner92B were to go offline.
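Blocks312,314,316, and318amount to a tolerance check on the measured power output. A minimal sketch (the function name and the string return values are assumptions for illustration only):

```python
def full_load_diagnostic_result(measured_power, desired_power, tolerance):
    """Illustrative sketch of blocks 312-318 of process 304: compare the
    measured power output against the desired full-load power output and
    report whether the load shifted wholly to the conditioner under test."""
    difference = abs(measured_power - desired_power)  # block 312
    if difference <= tolerance:                       # block 314
        return "shifted"        # block 316: can carry the full load alone
    return "not_shifted"        # block 318: cannot carry the full load alone
```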
Either of the alerts generated at block316or at block318may be transmitted to control circuitry, such as the adapter modules86, upstream control circuitry, circuitry of the distributed control system48, or the like. Indeed, either alert generated may cause another measurement to be performed and/or an updating of a graphic on an HMI to convey successful completion of the diagnostic operation and whether or not the power conditioner92under test is able to supply the loads140alone. The alerts may trigger generation of an alarm, such as an audible alert or a graphic alarm on the HMI. For example, the alert generated at block318may trigger an alarm to draw attention of an operator to the determination that the power conditioner92A was unable to wholly supply the loads140. As another example, the alert generated may trigger a subsequent sensing operation to further debug or verify suitable operation of the backplanes78, the power converters178, and/or the power conditioners92. The alert being generated may cause the power conditioner92A to receive one or more current values from one or more current sensors associated with one or more load components, such as by the alert being associated with a control signal being generated to trigger a sensing operation. The loads140may sometimes include a load control system182, as shown inFIG.5andFIG.9. For example, when the load140is an IO module (e.g., one of the IO (sub)modules76,98,100,108,126), that IO module may include a load control system182. The load control system182may include processing and memory circuitry to provide local diagnostic monitoring and processing capabilities to the IO module. For example, the load control system182A may communicate with sensing devices that measure voltage, current, temperature, pressure, or other operating parameters at points 1-7 marked onFIG.9.
The load140A may include sensing devices (e.g., one or more sensors) disposed at one or more locations within the load140to take measurements at points 1-7. In this way, the load140A may use OR IO circuitry144A that includes one or more integrated sensors. When being installed within the industrial automation system46, the OR IO circuitry144A may be included in the load140A such that the sensors need not be separately installed to sense electrical parameters of at least the points 1-7. The integrated sensors of the IO circuitry144A may be included within the same physical housing as at least the IO circuitry144A. The load control system182A may receive voltage data indicative of local voltage feedback corresponding to the points 1-5. The load control system182A may process the voltage data to track voltages experienced at each of the corresponding portions of the load140being measured at points 1-5. The load control system182A may receive current data from the current sensing amplifier288A,288B indicative of local current supply corresponding to the points 6 and 7. The load control system182A may process the current data to track currents on both of the backplanes78. Some or all of the same sensing data may also be sent to control circuitry, such as adapter modules86and/or another suitable control component. The control circuitry may sense a fault condition occurring or about to occur based on sensing data received from one or more of the loads140(e.g., more than one IO module) even when each respective load140(e.g., respective IO module) may be unable to detect the pattern of the fault in its own locally sensed data.
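The feedback gathering described above — voltages at points 1-5 and backplane currents at points 6 and 7 — might be organized as below. The reader callbacks and the dictionary layout are assumptions for this sketch, not part of the disclosure:

```python
def collect_load_feedback(read_voltage, read_current):
    """Illustrative collection of the local feedback a load control system
    might track: voltage feedback at points 1-5 and backplane current
    supply at points 6 and 7 (callback signatures are assumed)."""
    voltages = {point: read_voltage(point) for point in range(1, 6)}
    currents = {point: read_current(point) for point in (6, 7)}
    return {"voltages": voltages, "currents": currents}
```

A load control system could forward this record, or a subset of it, to upstream control circuitry for the cross-load fault-pattern detection described above.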
The load control system182A may generate control signals for the drivers282and/or the electronic fuses280to implement control operations instructed by upstream control circuitry (e.g., power conditioner control systems176A,176B, IO module98(control), other components of the distributed control system48), to implement local control operations based on the sensed data of any of the points 1-7, or the like. It is noted that some of the components described herein may be implemented using wireless communication technology, wired communication technology, or a combination of the two. For example, the IO circuitry146may be outfitted for wireless communication in addition to or instead of wired communication. Thus, the sensed data may sometimes be transmitted via wireless and/or radio frequency signals, even if power signals are transmitted via hardwired connections. Technical effects of the systems and methods described herein include an industrial automation system that uses a shared power backplane to coordinate load sharing between two power IO modules (e.g., power conditioners) and to perform periodic diagnostics to check redundancy of power IO modules. The industrial automation system may use power ORing circuitry (OR IO circuitry) at loads and IO modules within a distributed control system to improve redundancy and availability of an industrial automation system while reducing occurrences of transients associated with typical redundant power supply operations and switching. By using these power ORing systems and methods and the IO modules described herein, the industrial automation system operation may improve by being able to switch between primary and secondary power supplies without abrupt or sudden switching operations, which can introduce switching transients and reduce life of industrial automation components. 
Diagnostic capabilities from the concurrent and redundant power supplies may further improve industrial automation system operation by making it more likely that a misoperation may be detected early and making it easier to detect locations of faults using internal sensing circuitry of the power ORing circuitry. Furthermore, IO modules with a universal IO configuration are also described as being used with the power ORing circuitry, and may provide additional benefits from being able to be used with the diagnostic operations to provide further monitoring capabilities, such as providing a distributed control system that may be less complex to maintain. While the present disclosure may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the present disclosure is not intended to be limited to the particular forms disclosed. Rather, the present disclosure is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the following appended claims. The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f). | 81,906 |
11860600 | DESCRIPTION OF EMBODIMENTS Embodiment 1 <Configuration of Programmable Display Device1> FIG.1is a block diagram illustrating a configuration of a programmable display device1in accordance with Embodiment 1 of the present invention. As shown inFIG.1, the programmable display device1includes a control section10, a display section20, a touch panel30, a user memory40, a speaker50, and an interface section60. The programmable display device1is connected to a programmable logic controller (PLC)2via a communication cable, so as to be communicable with the PLC2. The programmable display device1is a dedicated computer configured to display a graphics screen for operation and display to realize an operation function and a display function specific to the programmable display device. The programmable display device1is used as a human machine interface (HMI). The control section10identifies an action of displaying the status of a device3connected to the PLC2or an action of controlling the status of the device3in accordance with the operation carried out on the touch panel30. The control section10controls each section of the programmable display device1. The control section10includes a display control section110, an obtaining section130, a setting accepting section140, and a process executing section150. The display control section110causes the display section20to display a screen. The obtaining section130obtains, from the PLC2, data related to the device3. The setting accepting section140accepts a process setting concerning the data obtained by the obtaining section130. The process executing section150executes a process in accordance with the process setting accepted by the setting accepting section140. Here, the process setting refers to configuring a setting concerning (a) the content of a process to be executed for the data obtained by the obtaining section130and (b) information for executing the process. 
To be more specific, the process setting includes configuring a new setting and reconfiguring the setting to change (edit) the already-set matter(s). The display section20displays a screen. The touch panel30accepts input operation that an end user has carried out on the display section20. The user memory40stores information therein. The speaker50outputs audio to the outside of the programmable display device1. Hereinafter, the end user may simply be referred to as a “user”, occasionally. Note that the programmable display device1may incorporate an interface alone in place of the speaker50. If the programmable display device1incorporates the interface alone, the speaker50is connected to the programmable display device1as an external device via the interface. The interface section60is a communication section via which the programmable display device1communicates with the PLC2. The PLC2is a control device configured to read the status of the device3and/or provide a control instruction to the device3at predetermined scanning times in accordance with the sequential program prepared by a user. The device3may be a device controlled by the PLC2, a device configured to output a value detected by a sensor, and/or the like. There are a plurality of such devices3. The programmable display device1displays, via e.g. a screen and/or parts, the status of the device3obtained by the PLC2. <Indication of Display Screen P1by Programmable Display Device1> FIG.2is a view illustrating a display screen P1displayed on the display section20, which is included in the programmable display device1shown inFIG.1. The display screen P1is shown as one example of the display screen displayed on the display section20. The process related to the display screen P1described herein is also described as one example. The display control section110causes the display section20to display the display screen P1shown inFIG.2. 
The display screen P1includes a switch SW1, a numerical value indicator ND1, and a trend graph TG1. The switch SW1is an object corresponding to one of the devices3, and accepts operation for turning-ON/OFF, for example. The numerical value indicator ND1is an object corresponding to one of the devices3, and displays the numerical value of the latest data obtained by the obtaining section130. The trend graph TG1is an object corresponding to one of the devices3, and indicates a graph of the values of the data obtained by the obtaining section130and accumulated in the user memory40. The display control section110causes the display section20to display the switch SW1, the numerical value indicator ND1, and the trend graph TG1as objects in this manner. Each object has a device address and/or a variable(s) specifying a place for storing data that the obtaining section130is to obtain from a device memory of the PLC2. The user memory40stores therein data of plural display screens including the display screen P1. The user memory40accumulates therein data that the obtaining section130has obtained from the PLC2for the purpose of displaying the trend graph TG1. When the user carries out predetermined operation such as pressing and holding the numerical value indicator ND1on the display screen P1, the display control section110causes the display section20to display a setting item list L1concerning the numerical value indicator ND1. The setting item list L1includes switching buttons B1, B2, and B3and setting buttons B4, B5, and B6. The switching button B1is a button used to switch between enabling and disabling of a logging function that is to be executed by the process executing section150. The switching button B2is a button used to switch between enabling and disabling of a notification function that is to be executed by the process executing section150. 
The switching button B3is a button used to switch between enabling and disabling of data analysis that is to be executed by the process executing section150. The logging, notification, and data analysis that are to be executed by the process executing section150are examples of the content of the process to be executed for the data obtained by the obtaining section130. The setting button B4is a button used to cause the display section20to display a setting screen for configuring a setting concerning the logging function that is to be executed by the process executing section150. The setting button B5is a button used to cause the display section20to display a setting screen for configuring a setting concerning the notification function that is to be executed by the process executing section150. The setting button B6is a button used to cause the display section20to display a setting screen for configuring a setting concerning the data analysis that is to be executed by the process executing section150. <Indication of Setting Screen SP1by Programmable Display Device1> FIG.3is a view illustrating a setting screen SP1displayed on the display section20, which is included in the programmable display device1shown inFIG.1. The setting screen SP1is one example of the setting screen displayed on the display section20. The process concerning the setting screen SP1described herein is also one example. In response to user operation of touching the setting button B4on the display screen P1, the display control section110causes the display section20to display the setting screen SP1for configuring a setting concerning the logging function that is to be executed by the process executing section150. The setting screen SP1is a screen for accepting a setting concerning the logging of data related to the numerical value indicator ND1. Hereinafter, data indicating a numerical value concerning the numerical value indicator ND1will be referred to as numerical data. 
The setting screen SP1includes setting items SA1to SA8, a cancel button C1, and an enter button E1. The setting item SA1is an item for setting a cycle of executing logging of numerical data to the user memory40. The setting item SA1is provided based on the assumption that the process executing section150executes logging of numerical data to the user memory40in response to a trigger issued at a certain cycle, in a state where the user memory40can store the numerical data therein. The setting item SA2is an item for setting the number of logs of numerical data that the process executing section150logs to the user memory40. The setting item SA3is an item for enabling or disabling the function in which the process executing section150overwrites the logs of the numerical data in the user memory40so as to keep updating the logs of the numerical data logged to the user memory40. In a case where the function that can be set via setting item SA3is disabled, the process executing section150stops logging of the numerical data to the user memory40at the point of time when the number of logs of the numerical data logged to the user memory40reaches the number of logs having been set via the setting item SA2. The setting item SA4is an item for setting the current time and for displaying the current time so that it can be referred to at the time of setting a starting date and time and an ending date and time. The setting item SA5is an item for setting the date and time to start the logging of the numerical data that is to be carried out by the process executing section150with respect to the user memory40. The setting item SA6is an item for setting the date and time to end the logging of the numerical data that is to be carried out by the process executing section150with respect to the user memory40. The setting item SA7is an item for setting a file name of a file which is created by the process executing section150and in which the numerical data is accumulated. 
The process executing section150stores this file in the user memory40. The setting item SA8is an item for accepting a selection as to which of a first setting and a second setting is to be selected. In the first setting, only numerical data obtained by the obtaining section130while the numerical value indicator ND1is displayed on the display section20is specified as target data, which is numerical data that is to be subjected to the logging executed by the process executing section150. In the second setting, numerical data obtained by the obtaining section130while the obtaining section130is executing the obtaining of the numerical data is specified as the target data. When the user carries out operation of touching the enter button E1on the setting screen SP1, the setting accepting section140detects the operation from the touch panel30. The setting accepting section140accepts, via the setting screen SP1, a process setting configured via the setting items SA1to SA8, which are included in the setting screen SP1. The process executing section150causes the process setting accepted by the setting accepting section140to be stored in the user memory40such that the process setting is associated with the numerical value indicator ND1. The display control section110causes the display section20to stop displaying the setting screen SP1, and causes the display section20to display the display screen P1. Meanwhile, when the user carries out operation of touching the cancel button C1on the setting screen SP1, the display control section110causes the display section20to stop displaying the setting screen SP1, and causes the display section20to display the display screen P1. 
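The settings gathered via setting items SA1to SA8could be carried in a single in-memory record once the enter button E1is accepted. The field names and the `should_stop` helper below are invented for illustration; only the mapping of fields to setting items follows the description:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LoggingSetting:
    """One possible in-memory shape for the process setting configured via
    setting items SA1-SA8 (hypothetical field names)."""
    cycle_seconds: int          # SA1: cycle of executing logging
    max_log_count: int          # SA2: number of logs to keep
    overwrite_oldest: bool      # SA3: overwrite to keep updating the logs
    start: datetime             # SA5: date and time to start logging
    end: datetime               # SA6: date and time to end logging
    file_name: str              # SA7: file in which data is accumulated
    only_while_displayed: bool  # SA8: True = first setting, False = second

    def should_stop(self, log_count: int) -> bool:
        # Per the SA3 description: with overwriting disabled, logging stops
        # once the configured number of logs has been reached.
        return (not self.overwrite_oldest) and log_count >= self.max_log_count
```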
As described above, the setting accepting section 140 accepts, via the setting screen SP1 displayed in response to predetermined operation, a process setting concerning (a) the content of a process to be executed for the numerical data obtained by the obtaining section 130 and (b) information for executing the process. The setting accepting section 140 accepts the process setting in a state where the obtaining section 130 is executing the obtaining of the numerical data. The process executing section 150 executes the process including the content on the basis of the above information and in accordance with the process setting accepted by the setting accepting section 140. The process setting configured via the setting screen SP1 is accepted in a state where the obtaining section 130 is executing the obtaining of the numerical data. Consequently, the end user can easily configure the process setting via the setting screen SP1 by himself/herself, without the need to pay attention to a variable name and/or a device address. This can reduce the cost and time taken for the process setting. Furthermore, the above configuration enables the end user to configure the process setting by himself/herself, thereby eliminating the need to request an outside manufacturer or the like to configure the process setting. In addition, the above configuration eliminates the need for the outside manufacturer or the like to identify, among plural pieces of data, a piece(s) of data requiring a process setting before delivering the programmable display device 1 to the end user. Consequently, the outside manufacturer or the like can deliver the programmable display device 1 to the end user in a simple manner without various kinds of advance preparations. The configuration shown in FIG. 3 will be further described.
Specifically, the setting accepting section 140 accepts, via the setting screen SP1, accumulation of the numerical data obtained by the obtaining section 130, the accumulation of the numerical data being accepted as the content of the process to be executed for the numerical data obtained by the obtaining section 130. The process executing section 150 executes, as the process to be executed for the numerical data, the accumulation of the numerical data accepted by the setting accepting section 140. Consequently, the end user can easily configure the process setting concerning the accumulation of the numerical data via the setting screen SP1. In addition, the setting accepting section 140 accepts, via the setting screen SP1, the selection as to which of the first setting and the second setting is to be selected, the selection being accepted as the process setting accepted by the setting accepting section 140. In a case where the first setting is accepted by the setting accepting section 140, the process executing section 150 executes logging of only the numerical data obtained by the obtaining section 130 while the numerical value indicator ND1 is displayed on the display section 20. In a case where the second setting is accepted by the setting accepting section 140, the process executing section 150 executes logging of the numerical data obtained by the obtaining section 130 while the obtaining section 130 is executing the obtaining of the numerical data. In the case where the first setting is accepted by the setting accepting section 140, the amount of the numerical data that is to be subjected to the logging by the process executing section 150 is smaller than that of the case where the second setting is accepted, and thus the burden on the programmable display device 1 can be reduced.
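The distinction between the first setting and the second setting (item SA8) amounts to a filter on the obtained samples. A minimal sketch, with the hypothetical helper `is_target` standing in for the selection logic:

```python
def is_target(first_setting: bool, indicator_visible: bool) -> bool:
    """Decide whether a freshly obtained sample becomes target data.

    first_setting=True  -> log only while the indicator (ND1) is displayed;
    first_setting=False -> log everything obtained while obtaining runs.
    """
    return indicator_visible if first_setting else True

# samples tagged with whether ND1 was on screen when each was obtained
samples = [(5, True), (6, False), (7, True)]
first  = [v for v, vis in samples if is_target(True, vis)]
second = [v for v, vis in samples if is_target(False, vis)]
```

Under the first setting only two of the three samples survive, which is why the logging burden on the device is smaller in that mode.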
In addition, since the above configuration enables the end user to switch between the first setting and the second setting, the amount of data to be collected can be changed according to the period of time to execute the logging.

<Indication of Setting Screen SP2 by Programmable Display Device 1>

FIG. 4 is a view illustrating a setting screen SP2 different from the setting screen SP1 shown in FIG. 3. In response to user operation of touching the setting button B5 on the display screen P1 shown in FIG. 2, the display control section 110 causes the display section 20 to display a setting screen SP2 used to configure a setting concerning the notification function that is to be executed by the process executing section 150, as indicated by the reference sign 101 in FIG. 4. The setting screen SP2 accepts a notification setting concerning the numerical value indicator ND1. The setting screen SP2 includes setting items SB1 to SB3, a reference button RB1, a cancel button C2, and an enter button E2. The setting item SB1 is an item for setting a predetermined condition. The process executing section 150 executes notification concerning the numerical value indicator ND1 if the predetermined condition on the numerical data concerning the numerical value indicator ND1 is satisfied. The notification may be an indication of an abnormality of the numerical value indicator ND1, for example. Via the setting item SB1, it is possible to set the predetermined condition. The predetermined condition is used as a condition for monitoring the numerical value indicator ND1. The setting item SB1 includes a plus button SBB, which is a button used to set a new condition in addition to the condition(s) having been set via the setting item SB1. The user can add a new condition by carrying out operation of touching the plus button SBB. Via the setting item SB1, it is possible to set AND, OR, and the like. The setting item SB2 is an item for setting a way to execute the notification concerning the numerical value indicator ND1.
The way that can be set via the setting item SB2 may be an e-mail, audio output from the speaker 50, a buzzer, and/or color changing or flashing of the display screen P1, for example. The setting item SB3 is concerned with a setting content similar to that of the setting item SA8. However, the setting item SB3 is related to notification, rather than to the logging of the setting item SA8. The reference button RB1 is a button used to cause the display section 20 to display a result of data analysis executed by the process executing section 150. Upon user operation of touching the reference button RB1, the display control section 110 causes the display section 20 to display a table T1, which indicates the result of the data analysis executed by the process executing section 150, as indicated by the reference sign 102 in FIG. 4. The table T1 is one example of the result of the process having been executed by the process executing section 150. The table T1 includes, as examples of representative values of the numerical data calculated by the process executing section 150, a maximum value, a minimum value, and an average value of the numerical values of the logs of the numerical data as well as a maximum value, a minimum value, and an average value of the intervals between the logs of the numerical data. That is, the table T1 includes the information indicating the changes in the numerical values of the numerical data that occurred until the process setting is configured via the setting screen SP2, as well as the information indicating the time intervals between the changes in the numerical values. The reference sign 102 in FIG. 4 indicates the table T1 showing the result of the data analysis. However, this is not limitative. Alternatively, the table T1 may indicate a result obtained by any of other calculation methods and other analysis methods involving use of the numerical data accumulated in the user memory 40.
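The representative values listed in table T1 (maximum, minimum, and average of the logged values, and of the intervals between consecutive logs) can be reproduced with a short sketch; `analyze_logs` is a hypothetical name, not from the patent.

```python
def analyze_logs(timestamps, values):
    """Compute the representative values shown in table T1:
    max/min/average of the logged values and of the intervals
    between consecutive logs."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    stats = lambda xs: {"max": max(xs), "min": min(xs),
                        "avg": sum(xs) / len(xs)}
    return {"values": stats(values), "intervals": stats(intervals)}

# four logs taken at t = 0, 1, 3, 6 seconds
t1 = analyze_logs([0.0, 1.0, 3.0, 6.0], [10, 30, 20, 40])
```

The resulting dictionary mirrors the two halves of table T1: the value statistics describe how the numerical values changed, and the interval statistics describe the time between those changes.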
Upon user operation of touching an omission button RB2, the display control section 110 causes the display section 20 to stop displaying the result of the data analysis. When the user carries out operation of touching the enter button E2 on the setting screen SP2, the setting accepting section 140 accepts, via the setting screen SP2, the process setting configured via the setting items SB1 to SB3, which are included in the setting screen SP2. The process executing section 150 causes the user memory 40 to store therein the process setting accepted by the setting accepting section 140. The display control section 110 causes the display section 20 to stop displaying the setting screen SP2, and then causes the display section 20 to display the display screen P1. Meanwhile, when the user carries out operation of touching the cancel button C2 on the setting screen SP2, the display control section 110 causes the display section 20 to stop displaying the setting screen SP2, and then causes the display section 20 to display the display screen P1. As described above, the setting accepting section 140 accepts, as the content of the process to be executed for the numerical data obtained by the obtaining section 130, notification that is to be executed when a predetermined condition on the numerical data is satisfied. The setting accepting section 140 accepts the predetermined condition as the information for executing the process. In addition, the process executing section 150 executes the notification as the process to be executed for the numerical data obtained by the obtaining section 130, if the predetermined condition on the numerical data is satisfied. Consequently, the end user can easily configure the setting concerning the predetermined condition for the notification via the setting screen SP2. Furthermore, the process executing section 150 executes, as the process to be executed for the numerical data, the data analysis of analyzing the numerical data obtained by the obtaining section 130.
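The monitoring behavior set via item SB1, including the AND/OR combination of multiple conditions, can be illustrated roughly as below. The function name `check_conditions` and the (operator, threshold) encoding are assumptions for illustration only.

```python
def check_conditions(value, conditions, combine="AND"):
    """Evaluate the monitoring conditions set via item SB1.

    conditions: list of (operator, threshold) pairs, e.g. (">", 100).
    combine: "AND" requires all conditions to hold, "OR" requires any.
    """
    ops = {">":  lambda v, t: v > t,
           "<":  lambda v, t: v < t,
           ">=": lambda v, t: v >= t,
           "<=": lambda v, t: v <= t,
           "==": lambda v, t: v == t}
    results = [ops[op](value, threshold) for op, threshold in conditions]
    return all(results) if combine == "AND" else any(results)

conds = [(">", 100), ("<", 200)]          # a band monitored on ND1's value
in_band  = check_conditions(150, conds, "AND")
out_band = check_conditions(250, conds, "AND")
```

When the combined condition is satisfied, a notification would be dispatched by whatever means item SB2 selects (e-mail, buzzer, screen flashing, and so on).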
The display control section 110 causes the display section 20 to display the setting screen SP2 in response to predetermined operation, and also causes the display section 20 to display the result of the process having been executed by the process executing section 150. In other words, the display control section 110 causes the display section 20 to display the result of the data analysis having been executed by the process executing section 150. Consequently, the setting screen SP2 is displayed on the display section 20, and the result of the data analysis is also displayed on the display section 20. Therefore, the end user can easily configure the process setting via the setting screen SP2 while referring to the result of the data analysis.

<Indication of Setting Screen SP3 by Programmable Display Device 1>

FIG. 5 is a view illustrating a setting screen SP3 different from the setting screen SP1 shown in FIG. 3 and the setting screen SP2 shown in FIG. 4. In response to user operation of touching the setting button B6 on the display screen P1 shown in FIG. 2, the display control section 110 causes the display section 20 to display a setting screen SP3 used to configure a setting concerning the data analysis that is to be executed by the process executing section 150, as shown in FIG. 5. The setting screen SP3 is a screen for accepting a setting concerning data analysis of numerical data related to the numerical value indicator ND1. The setting screen SP3 includes a setting item SC1, a table T2, a reference button RB3, a reset button RS1, a cancel button C3, and an enter button E3. The setting item SC1 is concerned with a setting content similar to that of the setting item SA8. However, the setting item SC1 is related to data analysis, rather than to the logging of the setting item SA8. Via the table T2, a kind of a representative value that is to be set as the target data can be selected from among the kinds of the representative values of the numerical data included in the table T1.
Upon user operation of touching one(s) of the kinds of the representative values included in the table T2, the display control section 110 causes the display section 20 to display a sign(s) indicating the selected one(s) of the kinds of the representative values. The process executing section 150 calculates only a representative value(s) of the kind(s) having been selected via the table T2, and the display control section 110 causes the display section 20 to display only the representative value(s) having been calculated by the process executing section 150. Similarly to the reference button RB1, the reference button RB3 is a button used to cause the display section 20 to display a result of data analysis having been executed by the process executing section 150. Upon user operation of touching the reference button RB3, the display control section 110 causes the display section 20 to display the table T1, which is described above. The reset button RS1 is a button used to reset the representative value(s) of the numerical data having been calculated by the process executing section 150 to zero. When the user carries out operation of touching the reset button RS1, the process executing section 150 detects the operation from the touch panel 30, and resets the calculated representative value(s) of the numerical data to zero. When the user carries out operation of touching the enter button E3 on the setting screen SP3, the setting accepting section 140 accepts, via the setting screen SP3, the process setting configured via the setting item SC1, the table T2, and the reset button RS1 included in the setting screen SP3. The process executing section 150 causes the process setting accepted by the setting accepting section 140 to be stored in the user memory 40 such that the process setting is associated with the numerical value indicator ND1.
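Calculating only the representative-value kinds selected via table T2 might look like the following sketch; the names and the string encoding of the kinds are illustrative, not from the patent.

```python
def selected_representatives(values, kinds):
    """Calculate only the representative kinds selected via table T2;
    unselected kinds are never computed or displayed."""
    available = {"max": max,
                 "min": min,
                 "avg": lambda xs: sum(xs) / len(xs)}
    return {kind: available[kind](values) for kind in kinds}

# the user selected only "max" and "avg" in table T2
shown = selected_representatives([10, 30, 20], ["max", "avg"])
```

Restricting the computation to the selected kinds is what lets the display section show only the representative values the end user actually needs.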
The display control section 110 causes the display section 20 to stop displaying the setting screen SP3, and then causes the display section 20 to display the display screen P1. Meanwhile, when the user carries out operation of touching the cancel button C3 on the setting screen SP3, the display control section 110 causes the display section 20 to stop displaying the setting screen SP3, and then causes the display section 20 to display the display screen P1. As described above, the setting accepting section 140 accepts, via the setting screen SP3, data analysis of analyzing the numerical data obtained by the obtaining section 130, the data analysis being accepted as the content of the process to be executed for the numerical data obtained by the obtaining section 130. The process executing section 150 executes, as the process to be executed for the numerical data, the data analysis accepted by the setting accepting section 140. Furthermore, the setting accepting section 140 accepts the data analysis as the content of the process to be executed for the numerical data obtained by the obtaining section 130. Specifically, the setting accepting section 140 accepts, via the setting screen SP3, at least one kind of the representative values of the numerical data as the information for executing the process. The process executing section 150 calculates only the representative value of the kind accepted by the setting accepting section 140, and the display control section 110 causes the display section 20 to display only the representative value having been calculated by the process executing section 150. Consequently, the end user can cause the display section 20 to display only the necessary representative value by setting the kind of the representative value via the setting screen SP3.

Embodiment 2

The following will describe Embodiment 2 of the present invention.
For convenience of description, members having functions identical to those described in Embodiment 1 are assigned identical reference numerals, and their descriptions are omitted here. FIG. 6 is a view illustrating a display screen P2 displayed on a display section 20 included in a programmable display device 1 in accordance with Embodiment 2 of the present invention. The following will explain a case where, in response to user input operation carried out on the display section 20 of the programmable display device 1, a process executing section 150 refers to authentication information stored in a user memory 40 and executes an authentication process of authenticating the user. In this case, if the user is authenticated by the process executing section 150, a display control section 110 causes the display section 20 to display the display screen P2 shown in FIG. 6. The display screen P2 includes a user name UN and an authority level AL of the user authenticated by the process executing section 150. In the user memory 40, the user name, the authority level of the user, and the authentication information are preliminarily stored in association with each other. A setting accepting section 140 refers to the authority level of the user and the authentication information stored in the user memory 40, and determines, in accordance with the authority level of the user, whether to permit or inhibit a transition to the setting screen. Specifically, the setting accepting section 140 permits a transition to the setting screen if it determines that the authority level of the user is equal to or higher than a predetermined level. Meanwhile, the setting accepting section 140 inhibits a transition to the setting screen if it determines that the authority level of the user is lower than the predetermined level. The setting screen described herein refers to any of the setting screens SP1 to SP3 of Embodiment 1.
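The authority-level gate described above reduces to a simple comparison; the threshold value below is a hypothetical stand-in for the predetermined level, and the function name is illustrative.

```python
REQUIRED_LEVEL = 2  # hypothetical "predetermined level" for setting access

def may_open_setting_screen(user_level: int) -> bool:
    """Permit the transition to a setting screen (SP1 to SP3) only when
    the authenticated user's authority level is at or above the
    predetermined level."""
    return user_level >= REQUIRED_LEVEL

maintainer_allowed = may_open_setting_screen(3)
visitor_allowed = may_open_setting_screen(1)
```

A rejected user would then see an operation-error indication (e.g. a pop-up) rather than the setting item list.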
The following will explain a case where the setting accepting section 140 determines that the authority level AL of the user in FIG. 6 is lower than the predetermined level. In this case, even when the user carries out predetermined operation such as pressing and holding the numerical value indicator ND1 on the display screen P2, the setting accepting section 140 inhibits a transition to the setting screen, and therefore the display control section 110 does not allow the display section 20 to display the setting item list L1. In this case, the display control section 110 may cause the display section 20 to display, by e.g. a pop-up screen, information indicating an operation error. This configuration makes it possible to prevent an erroneous operation and an erroneous setting made by a user who is not familiar with the device and equipment and does not have the required authority.

Embodiment 3

The following will describe Embodiment 3 of the present invention. For convenience of description, members having functions identical to those described in Embodiment 1 are assigned identical reference numerals, and their descriptions are omitted here. FIG. 7 is a view illustrating a setting screen SP4 displayed on a display section 20 included in a programmable display device 1 in accordance with Embodiment 3 of the present invention. If predetermined operation is carried out on the display screen P1 shown in FIG. 2 after an obtaining section 130 has started obtaining of data, the display control section 110 causes the display section 20 to display a setting screen SP4. The setting screen SP4 includes a table T3. The table T3 includes items that can be set via the setting screens SP1 to SP3 of Embodiment 1. These items are listed in the table T3. In response to user operation of touching the items in the table T3, the setting accepting section 140 switches between activation and deactivation of the data logging, notification, and data analysis.
The table T3 shows whether the data logging, notification, and data analysis are activated or deactivated. Consequently, the user can easily check, in a list, whether each of the data logging, notification, and data analysis is activated or deactivated by referring to the table T3. As a result, it is possible to avoid performance degradation that might otherwise be caused by unnecessary operation. In addition, in response to user operation of touching one of the items in the table T3, the display control section 110 may cause the display section 20 to display a setting screen for the touched one of the items. Furthermore, in place of the table T3, icons indicating the items may be included in the setting screen SP4. When the user carries out operation of touching the enter button E4 on the setting screen SP4, the setting accepting section 140 accepts, via the setting screen SP4, the process setting configured via the table T3 included in the setting screen SP4. The process executing section 150 causes the user memory 40 to store therein the process setting accepted by the setting accepting section 140. The display control section 110 causes the display section 20 to stop displaying the setting screen SP4, and then causes the display section 20 to display the display screen P1. Meanwhile, when the user carries out operation of touching the cancel button C4 on the setting screen SP4, the display control section 110 causes the display section 20 to stop displaying the setting screen SP4, and then causes the display section 20 to display the display screen P1. Note that, after the obtaining section 130 has started obtaining of the numerical data, the process executing section 150 may execute a process in which all of the process settings accepted by the setting accepting section 140 are stored in a project (a file including a series of the display screens) downloaded to the programmable display device 1.
Alternatively, the process executing section 150 may execute a process in which, among the process settings, only a setting(s) not already stored in the project is stored in the project. In the case where the above process is carried out, the setting accepting section 140 may accept, via the setting screen, a selection as to whether (a) all of the accepted process settings are to be stored in the project or (b) among the accepted process settings, only the setting(s) not already stored in the project is to be stored in the project. Thanks to this configuration, the amount of the information of the process setting(s) to be stored in the project may be reduced. This can save time. The project described above is downloaded to the programmable display device 1 from an external device such as a personal computer (PC), and is then stored in the user memory 40. The following will explain a case where the process setting accepted by the setting accepting section 140 is applied also to another programmable display device 1. By storing the process setting in the project and storing the project in a storage medium or the like, it is possible to easily reflect the process setting also to another programmable display device 1. This may sometimes reduce the amount of the information of the process settings. In such a case, it is possible to save the time taken for such work. That is, if there exist devices of the same type and pieces of equipment of the same type, it is possible to manage the process settings as settings specific to one of the devices or pieces of equipment, or to manage the process settings as common settings for the devices and the pieces of equipment and duplicate the process settings.
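The choice between storing all accepted settings and storing only those not already present in the project can be sketched as a dictionary merge. `merge_into_project` and the dict representation of a project are assumptions made for this illustration, not the patent's data format.

```python
def merge_into_project(project: dict, accepted: dict, only_new: bool) -> dict:
    """Store accepted process settings in the project.

    only_new=False: all accepted settings are written (overwriting);
    only_new=True : only settings whose keys are not already stored
                    in the project are added.
    """
    if only_new:
        return {**project,
                **{k: v for k, v in accepted.items() if k not in project}}
    return {**project, **accepted}

project = {"logging": "old"}
accepted = {"logging": "new", "notify": "on"}
all_saved = merge_into_project(project, accepted, only_new=False)
new_saved = merge_into_project(project, accepted, only_new=True)
```

The `only_new=True` branch is what reduces the amount of setting information written into the project, and hence the time taken when the project is copied to another device.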
[Software Implementation Example]

Control blocks (the control section 10, specifically, the display control section 110, the obtaining section 130, the setting accepting section 140, and the process executing section 150) of the programmable display device 1 can be realized by a logic circuit (hardware) provided in an integrated circuit (IC chip) or the like, or can alternatively be realized by software. In the latter case, the programmable display device 1 includes a computer that executes instructions of a program that is software realizing the foregoing functions. The computer, for example, includes at least one processor and at least one computer-readable storage medium storing the program. An object of the present invention can be achieved by the processor of the computer reading and executing the program stored in the storage medium. Examples of the processor encompass a central processing unit (CPU). Examples of the storage medium encompass a "non-transitory tangible medium" such as a read only memory (ROM), a tape, a disk, a card, a semiconductor memory, and a programmable logic circuit. The computer may further include a random access memory (RAM) or the like in which the program is loaded. Further, the program may be supplied to or made available to the computer via any transmission medium (such as a communication network and a broadcast wave) which allows the program to be transmitted. Note that an aspect of the present invention can also be achieved in the form of a computer data signal in which the program is embodied via electronic transmission and which is embedded in a carrier wave. The present invention is not limited to the embodiments, but can be altered by a skilled person in the art within the scope of the claims. The present invention also encompasses, in its technical scope, any embodiment derived by combining technical means disclosed in differing embodiments.
Aspects of the present invention can also be expressed as follows: A programmable display device in accordance with an aspect of the present invention includes: an obtaining section configured to obtain data; a setting accepting section configured to accept a process setting via a setting screen displayed in response to predetermined operation, the process setting concerning (a) a content of a process to be executed for the data obtained by the obtaining section and (b) information for executing the process; and a process executing section configured to execute the process including the content, on the basis of the information and in accordance with the process setting accepted by the setting accepting section, the setting accepting section being further configured to accept the process setting in a state where the obtaining section is executing the obtaining of the data. The process setting configured via the setting screen is accepted in a state where the obtaining section is executing the obtaining of the data. Consequently, an end user can easily configure the process setting via the setting screen by himself/herself, without the need to pay attention to a variable name and/or a device address. This can reduce the cost and time taken for the process setting. The setting accepting section may be further configured to accept, via the setting screen, accumulation of the data obtained by the obtaining section, the accumulation of the data being accepted as the content; and the process executing section may be further configured to execute, as the process, the accumulation of the data accepted by the setting accepting section. Consequently, the end user can easily configure, via the setting screen, the process setting for the accumulation of the data. 
The setting accepting section may be further configured to accept, as the content, notification that is to be executed in a case where a predetermined condition on the data is satisfied, and the setting accepting section may be further configured to accept the predetermined condition as the information; and the process executing section may be further configured to execute the notification as the process, in a case where the predetermined condition on the data is satisfied. Consequently, the end user can easily configure, via the setting screen, the setting concerning the predetermined condition for the notification. The programmable display device may further include a display control section configured to cause a display section to display the setting screen in response to the predetermined operation, the display control section being further configured to cause the display section to display a result of the process having been executed by the process executing section, wherein: the setting accepting section may be further configured to accept, via the setting screen, data analysis of analyzing the data obtained by the obtaining section, the data analysis being accepted as the content; the process executing section may be further configured to execute, as the process, the data analysis having been accepted by the setting accepting section; and the display control section may be further configured to cause the display section to display a result of the data analysis having been executed by the process executing section. Since the setting screen is displayed on the display section and the result of the data analysis is also displayed on the display section, the end user can easily configure the process setting via the setting screen while referring to the result of the data analysis. 
The setting accepting section may be further configured to accept, via the setting screen, at least one kind of one or more representative values of the data as the information, in the case where the setting accepting section accepts the data analysis as the content; the process executing section may be further configured to calculate only a representative value of said at least one kind having been accepted by the setting accepting section; and the display control section is further configured to cause the display section to display only the representative value having been calculated by the process executing section. The end user can cause the display section to display only a necessary representative value by setting a kind of a representative value via the setting screen. The programmable display device may further include a display control section configured to cause a display section to display the setting screen in response to the predetermined operation, the display control section being further configured to cause the display section to display an object related to the data, wherein: the setting accepting section is further configured to accept, via the setting screen, a selection as to which of a first setting or a second setting is to be selected, the selection being accepted as the process setting, the first setting specifying, as target data, only data obtained by the obtaining section while the object is displayed on the display section, the target data being the data that is to be subjected to the process executed by the process executing section, the second setting specifying, as the target data, data obtained by the obtaining section while the obtaining section is executing the obtaining of the data; and the process executing section is further configured to: execute the process only for the data obtained by the obtaining section while the object is displayed on the display section, in a case where the first setting is accepted by the setting 
accepting section; and to execute the process for the data obtained by the obtaining section while the obtaining section is executing the obtaining of the data, in a case where the second setting is accepted by the setting accepting section. In the case where the first setting is accepted by the setting accepting section, the amount of the data that is to be subjected to the process to be carried out by the process executing section is smaller than that of the case where the second setting is accepted, and thus the burden on the programmable display device can be reduced. In addition, since the above configuration enables the end user to switch between the first setting and the second setting, the amount of data to be collected can be changed according to the period of time to execute the process.

REFERENCE SIGNS LIST

1 Programmable display device
20 Display section
110 Display control section
130 Obtaining section
140 Setting accepting section
150 Process executing section
ND1 Numerical value indicator (object)
SW1 Switch (object)
TG1 Trend graph (object)
SP1 to SP4 Setting screen
11860601

When practical, similar reference numbers denote similar structures, features, or elements.

DETAILED DESCRIPTION

The details of one or more variations of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features and advantages of the subject matter described herein will be apparent from the description and drawings, and from the claims. While certain features of the currently disclosed subject matter may be described for illustrative purposes in relation to using machine-vision for aiding automated manufacturing processes (e.g. a CNC process), it should be readily understood that such features are not intended to be limiting. As used herein, the term "cutting" can generally refer to altering the appearance, properties, and/or state of a material. Cutting can include, for example, making a through-cut, engraving, bleaching, curing, burning, etc. Engraving, when specifically referred to herein, indicates a process by which a CNC machine modifies the appearance of the material without fully penetrating it. For example, in the context of a laser cutter, it can mean removing some of the material from the surface, or discoloring the material, e.g. through an application of focused electromagnetic radiation delivering electromagnetic energy as described below. As used herein, the term "laser" includes any electromagnetic radiation or focused or coherent energy source that (in the context of being a cutting tool) uses photons to modify a substrate or cause some change or alteration upon a material impacted by the photons. Lasers (whether cutting tools or diagnostic) can be of any desired wavelength, including, for example, microwave lasers, infrared lasers, visible lasers, UV lasers, X-ray lasers, gamma-ray lasers, or the like.
Also, as used herein, “cameras” includes, for example, visible light cameras, black and white cameras, IR or UV sensitive cameras, individual brightness sensors such as photodiodes, sensitive photon detectors such as a photomultiplier tube or avalanche photodiodes, detectors of infrared radiation far from the visible spectrum such as microwaves, X-rays, or gamma rays, optically filtered detectors, spectrometers, and other detectors that can include sources providing electromagnetic radiation for illumination to assist with acquisition, for example, flashes, UV lighting, etc. Also, as used herein, reference to “real-time” actions includes some degree of delay or latency, either programmed intentionally into the actions or as a result of the limitations of machine response and/or data transmission. “Real-time” actions, as used herein, are intended to only approximate an instantaneous response, or a response performed as quickly as possible given the limits of the system, and do not imply any specific numeric or functional limitation to response times or the machine actions resulting therefrom. Also, as used herein, unless otherwise specified, the term “material” is the material that is on the bed of the CNC machine. For example, if the CNC machine is a laser cutter, lathe, or milling machine, the material is what is placed in the CNC machine to be cut, for example, the raw materials, stock, or the like. In another example, if the CNC machine is a 3-D printer, then the material is either the current layer, or previously existent layers or substrate, of an object being crafted by the 3-D printing process. In yet another example, if the CNC machine is a printer, then the material can be the paper onto which the CNC machine deposits ink. INTRODUCTION A computer-numerically-controlled (CNC) machine is a machine that is used to add or remove material under the control of a computer. 
There can be one or more motors or other actuators that move one or more heads that perform the adding or removing of material. For CNC machines that add material, heads can incorporate nozzles that spray or release polymers as in a typical 3D printer. In some implementations, the heads can include an ink source such as a cartridge or pen. In the case of 3-D printing, material can be built up layer by layer until a fully realized 3D object has been created. In some implementations, the CNC machine can scan the surface of a material such as a solid, a liquid, or a powder, with a laser to harden or otherwise change the material properties of said material. New material may be deposited. The process can be repeated to build successive layers. For CNC machines that remove material, the heads can incorporate tools such as blades on a lathe, drag knives, plasma cutters, water jets, bits for a milling machine, a laser for a laser cutter/engraver, etc. FIG.1is an elevational view of a CNC machine100with a camera positioned to capture an image of an entire material bed150and another camera positioned to capture an image of a portion of the material bed150, consistent with some implementations of the current subject matter.FIG.2is a top view of the implementation of the CNC machine100shown inFIG.1. The CNC machine100shown inFIG.1corresponds to one implementation of a laser cutter. While some features are described in the context of a laser cutter, this is by no means intended to be limiting. Many of the features described below can be implemented with other types of CNC machines. The CNC machine100can be, for example, a lathe, engraver, 3D-printer, milling machine, drill press, saw, etc. While laser cutter/engravers share some common features with CNC machines, they have many differences and present particularly challenging design constraints. 
A laser cutter/engraver is subject to regulatory guidelines that restrict the egress of electromagnetic radiation from the unit when operating, making it challenging for light to enter or escape the unit safely, for example to view or record an image of the contents. The beam of a laser cutter/engraver must be routed from the emitter to the area to be machined, potentially requiring a series of optical elements such as lenses and mirrors. The beam of a laser cutter/engraver is easily misdirected, with a small angular deflection of any component relating to the beam path potentially resulting in the beam escaping the intended path, potentially with undesirable consequences. A laser beam may be capable of causing material destruction if uncontrolled. A laser cutter/engraver may require high voltage and/or radio frequency power supplies to drive the laser itself. Liquid cooling is common in laser cutter/engravers to cool the laser, requiring fluid flow considerations. Airflow is important in laser cutter/engraver designs, as air may become contaminated with byproducts of the laser's interaction with the material such as smoke, which may in turn damage portions of the machine for example fouling optical systems. The air exhausted from the machine may contain undesirable byproducts such as smoke that must be routed or filtered, and the machine may need to be designed to prevent such byproducts from escaping through an unintended opening, for example by sealing components that may be opened. Unlike most machining tools, the kerf—the amount of material removed during the operation—is both small and variable depending on the material being processed, the power of the laser, the speed of the laser, and other factors, making it difficult to predict the final size of the object. 
Also unlike most machining tools, the output of the laser cutter/engraver is very highly dependent on the speed of operation; a momentary slowing can destroy the workpiece by depositing too much laser energy. In many machining tools, operating parameters such as tool rotational speed and volume of material removed are easy to continuously predict, measure, and calculate, while laser cutter/engravers are more sensitive to material and other conditions. In many machining tools, fluids are used as coolant and lubricant; in laser cutter/engravers, the cutting mechanism does not require physical contact with the material being affected, and air or other gasses may be used to aid the cutting process in a different manner, by facilitating combustion or clearing debris, for example. The CNC machine100can have a housing surrounding an enclosure or interior area defined by the housing. The housing can include walls, a bottom, and one or more openings to allow access to the CNC machine100, etc. There can be a material bed150that can include a top surface on which the material140generally rests. In the implementation ofFIG.1, the CNC machine can also include an openable barrier as part of the housing to allow access between an exterior of the CNC machine and an interior space of the CNC machine. The openable barrier can include, for example, one or more doors, hatches, flaps, and the like that can actuate between an open position and a closed position. The openable barrier can attenuate the transmission of light between the interior space and the exterior when in a closed position. Optionally, the openable barrier can be transparent to one or more wavelengths of light or be comprised of portions of varying light attenuation ability. One type of openable barrier can be a lid130that can be opened or closed to put material140on the material bed150on the bottom of the enclosure. Various example implementations discussed herein include reference to a lid. 
It will be understood that absent explicit disclaimers of other possible configurations of the openable barrier or some other reason why a lid cannot be interpreted generically to mean any kind of openable barrier, the use of the term lid is not intended to be limiting. One example of an openable barrier can be a front door that is normally vertical when in the closed position and can open horizontally or vertically to allow additional access. There can also be vents, ducts, or other access points to the interior space or to components of the CNC machine100. These access points can be for access to power, air, water, data, etc. Any of these access points can be monitored by cameras, position sensors, switches, etc. If they are accessed unexpectedly, the CNC machine100can execute actions to maintain the safety of the user and the system, for example, a controlled shutdown. In other implementations, the CNC machine100can be completely open (i.e. not having a lid130, or walls). Any of the features described herein can also be present in an open configuration, where applicable. As described above, the CNC machine100can have one or more movable heads that can be operated to alter the material140. In some implementations, for example the implementation ofFIG.1, the movable head can be the head160. There may be multiple movable heads, for example two or more mirrors that separately translate or rotate in order to locate a laser beam, or multiple movable heads that operate independently, for example two mill bits in a CNC machine capable of separate operation, or any combination thereof. In the case of a laser-cutter CNC machine, the head160can include optical components, mirrors, cameras, and other electronic components used to perform the desired machining operations. Again, as used herein, the head160typically is a laser-cutting head, but can be a movable head of any type.
The head160, in some implementations, can be configured to include a combination of optics, electronics, and mechanical systems that can, in response to commands, cause a laser beam or electromagnetic radiation to be delivered to cut or engrave the material140. The CNC machine100can also execute operation of a motion plan for causing movement of the movable head. As the movable head moves, the movable head can deliver electromagnetic energy to effect a change in the material140that is at least partially contained within the interior space. In one implementation, the position and orientation of the optical elements inside the head160can be varied to adjust the position, angle, or focal point of a laser beam. For example, mirrors can be shifted or rotated, lenses translated, etc. The head160can be mounted on a translation rail170that is used to move the head160throughout the enclosure. In some implementations, the motion of the head can be linear, for example on an X axis, a Y axis, or a Z axis. In other implementations, the head can combine motions along any combination of directions in a rectilinear, cylindrical, or spherical coordinate system. A working area for the CNC machine100can be defined by the limits within which the movable head can cause delivery of a machining action, or delivery of a machining medium, for example electromagnetic energy. The working area can be inside the interior space defined by the housing. It should be understood that the working area can be a generally three-dimensional volume and not a fixed surface. For example, if the range of travel of a vertically oriented laser cutter is a 10″×10″ square entirely over the material bed150, and the laser beam comes out of the laser cutter at a height of 4″ above the material bed of the CNC machine, that 400 in3 volume can be considered to be the working area.
Restated, the working area can be defined by the extents of positions in which material140can be worked by the CNC machine100, and not necessarily tied or limited by the travel of any one component. For example, if the head160could turn at an angle, then the working area could extend in some direction beyond the travel of the head160. By this definition, the working area can also include any surface, or portion thereof, of any material140placed in the CNC machine100that is at least partially within the working area, if that surface can be worked by the CNC machine100. Similarly, for oversized material, which may extend even outside the CNC machine100, only part of the material140might be in the working area at any one time. The translation rail170can be any sort of translating mechanism that enables movement of the head160in the X-Y direction, for example a single rail with a motor that slides the head160along the translation rail170, a combination of two rails that move the head160, a combination of circular plates and rails, a robotic arm with joints, etc. Components of the CNC machine100can be substantially enclosed in a case or other enclosure. The case can include, for example, windows, apertures, flanges, footings, vents, etc. The case can also contain, for example, a laser, the head160, optical turning systems, cameras, the material bed150, etc. To manufacture the case, or any of its constituent parts, an injection-molding process can be performed. The injection-molding process can be performed to create a rigid case in a number of designs. The injection molding process may utilize materials with useful properties, such as strengthening additives that enable the injection molded case to retain its shape when heated, or absorptive or reflective elements, coated on the surface or dispersed throughout the material for example, that dissipate or shield the case from laser energy. 
As an example, one design for the case can include a horizontal slot in the front of the case and a corresponding horizontal slot in the rear of the case. These slots can allow oversized material to be passed through the CNC machine100. Optionally, there can be an interlock system that interfaces with, for example, the openable barrier, the lid130, door, and the like. Such an interlock is required by many regulatory regimes under many circumstances. The interlock can then detect a state of opening of the openable barrier, for example, whether a lid130is open or closed. In some implementations, an interlock can prevent some or all functions of the CNC machine100while an openable barrier, for example the lid130, is in the open state (e.g. not in a closed state). The reverse can be true as well, meaning that some functions of the CNC machine100can be prevented while in a closed state. There can also be interlocks in series where, for example, the CNC machine100will not operate unless the lid130and the front door are both closed. Furthermore, some components of the CNC machine100can be tied to states of other components of the CNC machine, such as not allowing the lid130to open while the laser is on, a movable component moving, a motor running, sensors detecting a certain gas, etc. In some implementations, the interlock can prevent emission of electromagnetic energy from the movable head when detecting that the openable barrier is not in the closed position. Converting Source Files to Motion Plans A traditional CNC machine accepts a user drawing, acting as a source file that describes the object the user wants to create or the cuts that a user wishes to make.
Examples of source files are: 1) .STL files that define a three-dimensional object that can be fabricated with a 3D printer or carved with a milling machine, 2) .SVG files that define a set of vector shapes that can be used to cut or draw on material, 3) .JPG files that define a bitmap that can be engraved on a surface, and 4) CAD files or other drawing files that can be interpreted to describe the object or operations similarly to any of the examples above. FIG.3Ais a diagram illustrating one example of an SVG source file310, consistent with some implementations of the current subject matter.FIG.3Bis an example of a graphical representation320of the cut path330in the CNC machine, consistent with some implementations of the current subject matter.FIG.3Cis a diagram illustrating the machine file340that would result in a machine creating the cut path330, created from the source file310, consistent with some implementations of the current subject matter. The example source file310represents a work surface that is 640×480 units with a 300×150 unit rectangle whose top left corner is located 100 units to the right and 100 units down from the top-left corner of the work surface. A computer program can then convert the source file310into a machine file340that can be interpreted by the CNC machine100to take the actions illustrated inFIG.3B. The conversion can take place on a local computer where the source files reside, on the CNC machine100, etc. The machine file340describes the idealized motion of the CNC machine100to achieve the desired outcome. Take, for example, a 3D printer that deposits a tube-shaped string of plastic material. If the source file specifies a rectangle, then the machine file can instruct the CNC machine to move along a snakelike path that forms a filled-in rectangle, while extruding plastic. The machine file can omit some information, as well.
For example, the height of the rectangle may no longer be directly present in the machine file; the height will be as tall as the plastic tube is high. The machine file can also add some information. For example, it can add an instruction to move the print head from its home position to a corner of the rectangle to begin printing. The instructions can even depart from the directly expressed intent of the user. A common setting in 3D printers, for example, causes solid shapes to be rendered as hollow in the machine file to save on material cost. As shown by the example ofFIGS.3A-C, the conversion of the source file310to the machine file340can cause the CNC machine to move the cutting tool from (0,0) (inFIG.3B) to the point at which cutting is to begin, activate the cutting tool (for example lower a drag knife or energize a laser), trace the rectangle, deactivate the cutting tool, and return to (0,0). Once the machine file has been created, a motion plan for the CNC machine100can be generated. The motion plan contains the data that determines the actions of components of the CNC machine100at different points in time. The motion plan can be generated on the CNC machine100itself or by another computing system. A motion plan can be a stream of data that describes, for example, electrical pulses that indicate exactly how motors should turn, a voltage that indicates the desired output power of a laser, a pulse train that specifies the rotational speed of a mill bit, etc. Unlike the source files and the machine files such as G-code, motion plans are defined by the presence of a temporal element, either explicit or inferred, indicating the time or time offset at which each action should occur. This allows for one of the key functions of a motion plan, coordinated motion, wherein multiple actuators coordinate to have a single, pre-planned effect. The motion plan renders the abstract, idealized machine file as a practical series of electrical and mechanical tasks.
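The conversion fromFIG.3AtoFIG.3Ccan be sketched in a few lines. The command mnemonics and data layout below are hypothetical stand-ins for a real machine-file format, chosen only to illustrate the move/activate/trace/deactivate/return sequence described above:

```python
# Minimal sketch: convert the FIG.3A rectangle (300x150 units, top-left
# corner at (100, 100)) into an idealized machine file. The command
# names MOVE/CUT/TOOL_ON/TOOL_OFF are illustrative, not a real format.

def rect_to_machine_file(x, y, w, h):
    # Corners traced clockwise, returning to the starting corner.
    corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h), (x, y)]
    commands = [("MOVE", corners[0]), ("TOOL_ON", None)]
    commands += [("CUT", pt) for pt in corners[1:]]   # trace the outline
    commands += [("TOOL_OFF", None), ("MOVE", (0, 0))]
    return commands

program = rect_to_machine_file(100, 100, 300, 150)
```

Feeding in (100, 100, 300, 150) reproduces the behavior described forFIG.3B: the tool travels from (0,0) to the first corner, activates, traces the rectangle, deactivates, and returns to (0,0).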
For example, a machine file might include the instruction to “move one inch to the right at a speed of one inch per second, while maintaining a constant number of revolutions per second of a cutting tool.” The motion plan must take into consideration that the motors cannot accelerate instantly, and instead must “spin up” at the start of motion and “spin down” at the end of motion. The motion plan would then specify pulses (e.g. sent to stepper motors or other apparatus for moving the head or other parts of a CNC machine) occurring slowly at first, then faster, then more slowly again near the end of the motion. The machine file is converted to the motion plan by the motion controller/planner. Physically, the motion controller can be a general or special purpose computing device, such as a high performance microcontroller or single board computer coupled to a Digital Signal Processor (DSP). The job of the motion controller is to take the vector machine code and convert it into electrical signals that will be used to drive the motors on the CNC machine100, taking into account the exact state of the CNC machine100at that moment (e.g. “since the machine is not yet moving, maximum torque must be applied, and the resulting change in speed will be small”) and physical limitations of the machine (e.g. accelerate to such-and-such speed, without generating forces in excess of those allowed by the machine's design). The signals can be step and direction pulses fed to stepper motors or location signals fed to servomotors among other possibilities, which create the motion and actions of the CNC machine100, including the operation of elements like actuation of the head160, moderation of heating and cooling, and other operations. In some implementations, a compressed file of electrical signals can be decompressed and then directly output to the motors.
These electrical signals can include binary instructions similar to 1's and 0's to indicate the electrical power that is applied to each input of each motor over time to effect the desired motion. In the most common implementation, the motion plan is the only stage that understands the detailed physics of the CNC machine100itself, and translates the idealized machine file into implementable steps. For example, a particular CNC machine100might have a heavier head, and require more gradual acceleration. This limitation is modeled in the motion planner and affects the motion plan. Each model of CNC machine can require precise tuning of the motion plan based on its measured attributes (e.g. motor torque) and observed behavior (e.g. belt skips when accelerating too quickly). The CNC machine100can also tune the motion plan on a per-machine basis to account for variations from CNC machine to CNC machine. The motion plan can be generated and fed to the output devices in real-time, or nearly so. The motion plan can also be pre-computed and written to a file instead of streamed to a CNC machine, and then read back from the file and transmitted to the CNC machine100at a later time. Transmission of instructions to the CNC machine100, for example, portions of the machine file or motion plan, can be streamed as a whole or in batches from the computing system storing the motion plan. Batches can be stored and managed separately, allowing pre-computation or additional optimization to be performed on only part of the motion plan. In some implementations, a file of electrical signals, which may be compressed to preserve space and decompressed to facilitate use, can be directly output to the motors. The electrical signals can include binary instructions similar to 1's and 0's to indicate actuation of the motor. The motion plan can be augmented, either by precomputing in advance or updating in real-time, with the aid of machine vision.
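The “spin up”/“spin down” pulse spacing described above can be sketched as a trapezoidal velocity profile. The function name and the numeric values below are illustrative assumptions, not parameters of any actual motion controller:

```python
# Hedged sketch of acceleration-limited pulse timing: intervals between
# step pulses start long (slow motion), shorten to a cruise value, then
# lengthen again so the motor can decelerate before the move ends.

def pulse_intervals(total_steps, v_max, accel):
    """Seconds between successive step pulses.
    v_max in steps/s, accel in steps/s^2; one step of travel per pulse."""
    intervals = []
    v = 0.0
    for step in range(total_steps):
        steps_left = total_steps - step
        # Fastest speed we could still brake from before the move ends.
        v_brake = (2.0 * accel * steps_left) ** 0.5
        # Accelerate over one step: v_new^2 = v^2 + 2*a*dx, with dx = 1.
        v = min(v_max, v_brake, (v * v + 2.0 * accel) ** 0.5)
        intervals.append(1.0 / v)
    return intervals

ramp = pulse_intervals(100, 200.0, 400.0)  # slow, then fast, then slow
```

With these illustrative values the interval sequence begins long, reaches the cruise value of 1/v_max seconds mid-move, and lengthens again toward the end of travel, mirroring the slow-fast-slow pulse pattern described in the text.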
Machine vision is a general term that describes the use of sensor data, not limited to optical data, to provide additional input to machine operation. Other forms of input can include, for example, audio data from an on-board sound sensor such as a microphone, or position/acceleration/vibration data from an on-board sensor such as a gyroscope or accelerometer. Machine vision can be implemented by using cameras to provide images of, for example, the CNC machine100, the material being operated on by the CNC machine, the environment of the CNC machine100(for example, whether debris is accumulating or smoke is present), or any combination of these. These cameras can then route their output to a computer for processing. By viewing the CNC machine100in operation and analyzing the image data, it can, for example, be determined whether the CNC machine100is working correctly, whether the CNC machine100is performing optimally, the current status of the CNC machine100or subcomponents of the CNC machine100, etc. Similarly, the material can be imaged and, for example, the operation of the CNC machine100can be adjusted according to instructions, users can be notified when the project is complete, or information about the material can be determined from the image data. Error conditions can be identified, such as if a foreign body has been inadvertently left in the CNC machine100, the material has been inadequately secured, or the material is reacting in an unexpected way during machining. Camera Systems Cameras can be mounted inside the CNC machine100to acquire image data during operation of the CNC machine100.
Image data refers to all data gathered from a camera or image sensor, including still images, streams of images, video, audio, metadata such as shutter speed and aperture settings, settings or data from or pertaining to a flash or other auxiliary information, graphic overlays of data superimposed upon the image such as GPS coordinates, in any format, including but not limited to raw sensor data such as a .DNG file, processed image data such as a JPG file, and data resulting from the analysis of image data processed on the camera unit such as direction and velocity from an optical mouse sensor. For example, there can be cameras mounted such that they gather image data from (also referred to as ‘view’ or ‘image’) an interior portion of the CNC machine100. The viewing can occur when the lid130is in a closed position or in an open position or independently of the position of the lid130. In one implementation, one or more cameras, for example a camera mounted to the interior surface of the lid130or elsewhere within the case or enclosure, can view the interior portion when the lid130to the CNC machine100is in a closed position. In particular, in some preferred embodiments, the cameras can image the material140while the CNC machine100is closed and, for example, while machining the material140. In some implementations, cameras can be mounted within the interior space and opposite the working area. In other implementations, there can be a single camera or multiple cameras attached to the lid130. Cameras can also be capable of motion such as translation to a plurality of positions, rotation, and/or tilting along one or more axes. One or more cameras can be mounted to a translatable support, such as a gantry210, which can be any mechanical system that can be commanded to move (movement being understood to include rotation) the camera or a mechanism such as a mirror that can redirect the view of the camera, to different locations and view different regions of the CNC machine.
The head160is a special case of the translatable support, where the head160is limited by the track220and the translation rail170that constrain its motion. Lenses can be chosen for wide angle coverage, for extreme depth of field so that both near and far objects may be in focus, or many other considerations. The cameras may be placed to additionally capture the user so as to document the building process, or placed in a location where the user can move the camera, for example on the underside of the lid130where opening the CNC machine100causes the camera to point at the user. Here, for example, the single camera described above can take an image when the lid is not in the closed position. Such an image can include an object, such as a user, that is outside the CNC machine100. Cameras can be mounted on movable locations like the head160or lid130with the intention of using video or multiple still images taken while the camera is moving to assemble a larger image, for example scanning the camera across the material140to get an image of the material140in its totality so that the analysis of image data may span more than one image. As shown inFIG.1, a lid camera110, or multiple lid cameras, can be mounted to the lid130. In particular, as shown inFIG.1, the lid camera110can be mounted to the underside of the lid130. The lid camera110can be a camera with a wide field of view112that can image a first portion of the material140. This can include a large fraction of the material140and the material bed or even all of the material140and material bed150. The lid camera110can also image the position of the head160, if the head160is within the field of view of the lid camera110. Mounting the lid camera110on the underside of the lid130allows for the user to be in view when the lid130is open. This can, for example, provide images of the user loading or unloading the material140, or retrieving a finished project. 
Here, a number of sub-images, possibly acquired at a number of different locations, can be assembled, potentially along with other data like a source file such as an SVG or digitally rendered text, to provide a final image. When the lid130is closed, the lid camera110rotates down with the lid130and brings the material140into view. Also as shown inFIG.1, a head camera120can be mounted to the head160. The head camera120can have a narrower field of view122and take higher resolution images of a smaller area, of the material140and the material bed, than the lid camera110. One use of the head camera120can be to image the cut made in the material140. The head camera120can identify the location of the material140more precisely than possible with the lid camera110. Other locations for cameras can include, for example, on an optical system guiding a laser for laser cutting, on the laser itself, inside a housing surrounding the head160, underneath or inside of the material bed150, in an air filter or associated ducting, etc. Cameras can also be mounted outside the CNC machine100to view users or view external features of the CNC machine100. Multiple cameras can also work in concert to provide a view of an object or material140from multiple locations, angles, resolutions, etc. For example, the lid camera110can identify the approximate location of a feature in the CNC machine100. The CNC machine100can then instruct the head160to move to that location so that the head camera120can image the feature in more detail. While the examples herein are primarily drawn to a laser cutter, the use of the cameras for machine vision in this application is not limited to only that specific type of CNC machine100. For example, if the CNC machine100were a lathe, the lid camera110can be mounted nearby to view the rotating material140and the head160, and the head camera120located near the cutting tool. 
Similarly, if the CNC machine100were a 3D printer, the head camera120can be mounted on the head160that deposits material140for forming the desired piece. An image recognition program can identify conditions in the interior portion of the CNC machine100from the acquired image data. The conditions that can be identified are described at length below, but can include positions and properties of the material140, the positions of components of the CNC machine100, errors in operation, etc. Based in part on the acquired image data, instructions for the CNC machine100can be created or updated. The instructions can, for example, act to counteract or mitigate an undesirable condition identified from the image data. The instructions can include changing the output of the head160. For example, for a CNC machine100that is a laser cutter, the laser can be instructed to reduce or increase power or turn off. Also, the updated instructions can include different parameters for motion plan calculation, or making changes to an existing motion plan, which could change the motion of the head160or the gantry210. For example, if the image indicates that a recent cut was offset from its desired location by a certain amount, for example due to a part moving out of alignment, the motion plan can be calculated with an equal and opposite offset to counteract the problem, for example for a second subsequent operation or for all future operations. The CNC machine100can execute the instructions to create the motion plan or otherwise effect the changes described above. In some implementations, the movable component can be the gantry210, the head160, or an identifiable mark on the head160. The movable component, for example the gantry210, can have a fixed spatial relationship to the movable head. 
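The equal-and-opposite offset correction described above can be sketched as follows. The simple (x, y) waypoint representation of a motion plan is an assumption made here purely for illustration:

```python
# Sketch of image-driven feedback: when machine vision measures that a
# cut landed offset from its intended location, subsequent motion-plan
# coordinates are shifted by the negative of the measured error.

def correct_motion_plan(waypoints, measured_offset):
    """Shift every (x, y) waypoint by the opposite of the measured error."""
    dx, dy = measured_offset
    return [(x - dx, y - dy) for (x, y) in waypoints]

plan = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)]
# Camera reports the last cut landed 0.3 units right and 0.1 units up:
corrected = correct_motion_plan(plan, (0.3, 0.1))
```

Applied to all future operations, the shifted waypoints counteract the measured misalignment, as in the example of a part moving out of alignment described above.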
The image data can update software controlling operation of the CNC machine100with the position of the movable head and/or the movable component, and/or any higher order derivative thereof. Because the type of image data required can vary, and/or because of possible limitations as to the field of view of any individual camera, multiple cameras can be placed throughout the CNC machine100to provide the needed image data. Camera choice and placement can be optimized for many use cases. Cameras closer to the material140can be used for detail at the expense of a wide field of view. Multiple cameras may be placed adjacently so that images produced by the multiple cameras can be analyzed by the computer to achieve higher resolution or wider coverage jointly than was possible for any image individually. The manipulation and improvement of images can include, for example, stitching of images to create a larger image, adding images to increase brightness, differencing images to isolate changes (such as moving objects or changing lighting), multiplying or dividing images, averaging images, rotating images, scaling images, sharpening images, and so on, in any combination. Further, the system may record additional data to assist in the manipulation and improvement of images, such as recordings from ambient light sensors and location of movable components. Specifically, stitching can include taking one or more sub-images from one or more cameras and combining them to form a larger image. Some portions of the images can overlap as a result of the stitching process. Other images may need to be rotated, trimmed, or otherwise manipulated to provide a consistent and seamless larger image as a result of the stitching. Lighting artifacts such as glare, reflection, and the like, can be reduced or eliminated by any of the above methods. Also, the image analysis program can perform edge detection and noise reduction or elimination on the acquired images.
Edge detection can include performing contrast comparisons of different parts of the image to detect edges and identify objects or features in the image. Noise reduction can involve averaging or smoothing of one or more images to reduce the contribution of periodic, random, or pseudo-random image noise, for example that due to CNC machine100operation such as vibrating fans, motors, etc. FIG.4Ais a diagram illustrating the addition of images, consistent with some implementations of the current subject matter. Images taken by the cameras can be added, for example, to increase the brightness of an image. In the example ofFIG.4A, there is a first image410, a second image412, and a third image414. First image410has horizontal bands (shown in white against a black background in the figure). The horizontal bands can correspond to a more brightly lit object, though the main point is that there is a difference between the bands and the background. Second image412has similar horizontal bands, but offset in the vertical direction relative to those in the first image410. When the first image410and second image412are added, their sum is shown by the third image414. Here, the two sets of bands interleave to fill in the bright square as shown. This technique can be applied to, for example, acquiring many image frames from the cameras, possibly in low light conditions, and adding them together to form a brighter image. FIG.4Bis a diagram illustrating the subtraction of images, consistent with some implementations of the current subject matter. Image subtraction can be useful to, for example, isolate a dim laser spot from a comparatively bright image. Here, a first image420shows two spots, one representative of a laser spot and the other of an object. To isolate the laser spot, a second image422can be taken with the laser off, leaving only the object. Then, the second image422can be subtracted from the first image420to arrive at the third image424.
The remaining spot in the third image424is the laser spot. FIG.4Cis a diagram illustrating the differencing of images to isolate a simulated internal lighting effect, consistent with some implementations of the current subject matter. There can be an object in the CNC machine100, represented as a circle in first image430. This could represent, for example, an object on the material bed150of the CNC machine100. If, for example, half of the material bed150of the CNC machine100was illuminated by outside lighting, such as a sunbeam, the second image432might appear as shown, with the illuminated side brighter than the side without the illumination. It can sometimes be advantageous to use internal lighting during operation, for example to illuminate a watermark, aid in image diagnostics, or simply to better show a user what is happening in the CNC machine. Even if none of these reasons apply, however, internal lighting allows reduction or elimination of the external lighting (in this case the sunbeam) via this method. This internal lighting is represented in the third image434by adding a brightness layer to the entire second image432. To isolate the effect of the internal lighting, the second image432can be subtracted from the third image434to result in fourth image436. Here, fourth image436shows the area, and the object, as it would appear under only internal lighting. This differencing can allow image analysis to be performed as if only the controlled internal lighting were present, even in the presence of external lighting contaminants. Machine vision processing of images can occur at, for example, the CNC machine100, on a locally connected computer, or on a remote server connected via the internet. In some implementations, image processing can be performed by the CNC machine100, but with limited speed. One example of this can be where the onboard processor is slow and can run only simple algorithms in real-time, but can run more complex analysis given more time.
In such a case, the CNC machine100could pause for the analysis to be complete, or alternatively, offload the analysis to a faster connected computing system. A specific example can be where sophisticated recognition is performed remotely, for example, by a server on the internet. In these cases, limited image processing can be done locally, with more detailed image processing and analysis being done remotely. For example, the camera can use a simple algorithm, run on a processor in the CNC machine100, to determine when the lid130is closed. Once the CNC machine100detects that the lid130is closed, the processor on the CNC machine100can send images to a remote server for more detailed processing, for example, to identify the location of the material140that was inserted. The system can also devote dedicated resources to analyzing the images locally, pausing other actions, or diverting computing resources away from other activities. In another implementation, the head160can be tracked by onboard, real-time analysis. For example, tracking the position of the head160, a task normally performed by optical encoders or other specialized hardware, can be done with high resolution, low resolution, or a combination of both high and low resolution images taken by the cameras. As high-resolution images are captured, they can be transformed into lower resolution images that are smaller in memory size by resizing or cropping. If the images include video or a sequence of still images, some may be eliminated or cropped. A data processor can analyze the smaller images repeatedly, several times a second for example, to detect any gross misalignment. If a misalignment is detected, the data processor can halt all operation of the CNC machine100while more detailed processing precisely locates the head160using higher resolution images. Upon location of the head160, the head160can be adjusted to recover the correct location.
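The coarse tracking loop described above can be sketched as follows: high-resolution frames are reduced to small thumbnails that are cheap to compare many times per second, and a large thumbnail difference flags gross misalignment for slower, higher-resolution follow-up. The downsampling factor and threshold below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of cheap low-resolution misalignment detection.  A real
# implementation would compare against the expected head position from the
# motion plan; here a stored reference thumbnail stands in for that.

def downsample(img, factor):
    """Average non-overlapping factor x factor blocks (dims must divide evenly)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def gross_misalignment(frame, reference, factor=4, threshold=10.0):
    """Mean absolute thumbnail difference above threshold => misaligned."""
    diff = np.abs(downsample(frame, factor) - downsample(reference, factor))
    return diff.mean() > threshold

reference = np.zeros((8, 8), dtype=np.uint8)   # expected view
aligned = reference.copy()                     # nothing has moved
shifted = np.full((8, 8), 80, dtype=np.uint8)  # stand-in for a moved head
```

Only when `gross_misalignment` fires would the slower high-resolution localization described above be run.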
Alternatively, images can be uploaded to a server where further processing can be performed. The location can be determined by, for example, looking at the head160with the lid camera, by looking at what the head camera120is currently imaging, etc. For example, the head160could be instructed to move to a registration mark. The head camera120can then image the registration mark to detect any minute misalignment.
Basic Camera Functionality
The cameras can be, for example, a single wide-angle camera, multiple cameras, a moving camera where the images are digitally combined, etc. The cameras used to image a large region of the interior of the CNC machine100can be distinct from other cameras that image a more localized area. The head camera120can be one example of a camera that, in some implementations, images a smaller area than the wide-angle cameras. There are other camera configurations that can be used for different purposes. A camera (or cameras) with broad field of view can cover the whole of the machine interior, or a predefined significant portion thereof. For example, the image data acquired from one or more of the cameras can include most (meaning over 50%) of the working area. In other embodiments, at least 60%, 70%, 80%, 90%, or 100% of the working area can be included in the image data. The above amounts do not take into account obstruction by the material140or any other intervening objects. For example, if a camera is capable of viewing 90% of the working area without material140, and a piece of material140is placed in the working area, partially obscuring it, the camera is still considered to be providing image data that includes 90% of the working area. In some implementations, the image data can be acquired when the interlock is not preventing the emission of electromagnetic energy.
In other implementations, a camera mounted outside the machine can see users and/or material140entering or exiting the CNC machine100, record the use of the CNC machine100for sharing or analysis, or detect safety problems such as an uncontrolled fire. Other cameras can provide a more precise look with limited field of view. Optical sensors like those used on optical mice can provide very low resolution and few colors, or greyscale, over a very small area with very high pixel density, then quickly process the information to detect material140moving relative to the optical sensor. The lower resolution and color depth, plus specialized computing power, allow very quick and precise operation. Conversely, if the head is static and the material is moved, for example if the user bumps it, this approach can see the movement of the material and characterize it very precisely so that additional operations on the material continue where the previous operations left off, for example resuming a cut that was interrupted before the material was moved. Video cameras can detect changes over time, for example comparing frames to determine the rate at which the camera is moving. Still cameras can be used to capture higher resolution images that can provide greater detail. Yet another type of optical scanning can be to implement a linear optical sensor, such as a flatbed scanner, on an existing rail, like the sliding gantry210in a laser system, and then scan it over the material140, assembling an image as it scans. To isolate the light from the laser, the laser may be turned off and on again, and the difference between the two measurements indicates the light scattered from the laser while removing the effect of environmental light. The cameras can have fixed or adjustable sensitivity, allowing them to operate in dim or bright conditions. There can be any combination of cameras that are sensitive to different wavelengths. 
Some cameras, for example, can be sensitive to wavelengths corresponding to a cutting laser, a range-finding laser, a scanning laser, etc. Other cameras can be sensitive to wavelengths that specifically fall outside the wavelength of one or more lasers used in the CNC machine100. The cameras can be sensitive to visible light only, or can have extended sensitivity into infrared or ultraviolet, for example to view invisible barcodes marked on the surface, discriminate between otherwise identical materials based on IR reflectivity, or view invisible (e.g. infrared) laser beams directly. The cameras can even be a single photodiode that measures e.g. the flash of the laser striking the material140, or which reacts to light emissions that appear to correlate with an uncontrolled fire. The cameras can be used to image, for example, a beam spot on a mirror, light escaping an intended beam path, etc. The cameras can also detect scattered light, for example if a user is attempting to cut a reflective material. Other types of cameras can be implemented, for example, instead of detecting light of the same wavelength of the laser, instead detecting a secondary effect, such as infrared radiation (with a thermographic camera) or x-rays given off by contact between the laser and another material. The cameras may be coordinated with lighting sources in the CNC machine100. The lighting sources can be positioned anywhere in the CNC machine100, for example, on the interior surface of the lid130, the walls, the floor, the gantry210, etc. One example of coordination between the lighting sources and the cameras can be to adjust internal LED illumination while acquiring images of the interior portion with the cameras. For example, if the camera is only capable of capturing images in black and white, the internal LEDs can illuminate sequentially in red, green, and blue, capturing three separate images. The resulting images can then be combined to create a full color RGB image. 
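The sequential-illumination technique just described, in which a monochrome camera and red, green, and blue LED flashes yield a full-color image, can be sketched as follows. The frame contents are synthetic and the function name is an illustrative assumption.

```python
import numpy as np

# Hedged sketch of building a full-color image from a black-and-white camera
# by capturing one grayscale frame per LED color (red, green, blue), as
# described above, then stacking the three captures into an RGB image.

def combine_rgb(frame_r, frame_g, frame_b):
    """Stack three grayscale captures into an H x W x 3 RGB image."""
    return np.stack([frame_r, frame_g, frame_b], axis=-1)

r = np.full((2, 2), 255, dtype=np.uint8)  # scene under red illumination
g = np.full((2, 2), 128, dtype=np.uint8)  # scene under green illumination
b = np.zeros((2, 2), dtype=np.uint8)      # scene under blue illumination
rgb = combine_rgb(r, g, b)
```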
If external illumination is causing problems with shadows or external lighting effects, the internal lighting can be turned off while a picture is taken, then turned on while a second picture is taken. By subtracting the two on a pixel-by-pixel basis, ambient light can be cancelled out so that it can be determined what the image looks like when illuminated only by internal lights. If lighting is movable, for example on the translation arm of the CNC machine100, it can be moved around while multiple pictures are taken, then combined, to achieve an image with more even lighting. The brightness of the internal lights can also be varied like the flash in a traditional camera to assist with illumination. The lighting can be moved to a location where it better illuminates an area of interest, for example so it shines straight down a slot formed by a cut, so a camera can see the bottom of the cut. If the internal lighting is interfering, it can be turned off while the camera takes an image. Optionally, the lighting can be turned off for such a brief period that the viewer does not notice (e.g. for less than a second, less than 1/60th of a second, or less than 1/120th of a second). Conversely, the internal lighting may be momentarily brightened like a camera flash to capture a picture. Specialized lights may be used and/or engaged only when needed; for example, an invisible but UV-fluorescent ink might be present on the material. When scanning for a barcode, UV illumination might be briefly flashed while a picture is captured so that any ink present would be illuminated. The same technique of altering the lighting conditions can be performed by toggling the range-finding and/or cutting lasers as well, to isolate their signature and/or effects when imaging. If the object (or camera) moves between acquisitions, then the images can be cropped, translated, expanded, rotated, and so on, to obtain images that share common features in order to allow subtraction. 
This differencing technique is preferably done while automatic adjustments in the cameras are overridden or disabled, for example by disabling autofocus, flashes, etc. Features that can ideally be held constant between images can include, for example, aperture, shutter speed, white balance, etc. In this way, the changes in the two images are due only to differences from the lighting and not due to adjustment in the optical system. Multiple cameras, or a single camera moved to different locations in the CNC machine100, can provide images from different angles to generate 3D representations of the surface of the material140or an object. The 3D representations can be used for generating 3D models, for measuring the depth that an engraving or laser operation produced, or for providing feedback to the CNC machine100or a user during the manufacturing process. They can also be used for scanning, to build a model of the material140for replication. The camera can be used to record photos and video that the user can use to share their progress. Automatic "making of" sequences can be created that stitch together various still and video images along with additional sound and imagery, for example the digital rendering of the source file or the user's picture from a social network. Knowledge of the motion plan, or even the control of the cameras via the motion plan directly, can enable a variety of optimizations. In one example, given a machine with two cameras, one of which is mounted in the head and one of which is mounted in the lid, the final video can be created with footage from the head camera at any time that the gantry is directed to a location that is known to obscure the lid camera. In another example, the cameras can be instructed to reduce their aperture size, reducing the amount of light let in, when the machine's internal lights are activated.
In another example, if the machine is a laser cutter/engraver and activating the laser causes a camera located in the head to become overloaded and useless, footage from that camera may be discarded when it is unavailable. In another example, elements of the motion plan may be coordinated with the camera recording for optimal visual or audio effect, for example fading up the interior lights before the cut or driving the motors in a coordinated fashion to sweep the head camera across the material for a final view of the work result. In another example, sensor data collected by the system might be used to select camera images; for example, a still photo of the user might be captured from a camera mounted in the lid when an accelerometer, gyroscope, or other sensor in the lid detects that the lid has been opened and it has reached the optimal angle. In another example, recording of video might cease if an error condition is detected, such as the lid being opened unexpectedly during a machining operation. The video can be automatically edited using information like the total duration of the cut file to eliminate or speed up monotonous events; for example, if the laser must make400holes, then that section of the cut plan could be shown at high speed. Traditionally, these decisions must all be made by reviewing the final footage, with little or no a priori knowledge of what they contain. Pre-selecting the footage (and even coordinating its capture) can allow higher quality video and much less time spent editing it. Video and images from the production process can be automatically stitched together in a variety of fashions, including stop motion with images, interleaving video with stills, and combining video and photography with computer-generated imagery, e.g. a 3D or 2D model of the item being rendered. Video can also be enhanced with media from other sources, such as pictures taken with the user's camera of the final product. 
Additional features that can be included individually, or in any combination, are described in the sections below.
Network Configuration
FIG.5is a diagram illustrating a cloud-based system supporting operation of a CNC machine100, consistent with some implementations of the current subject matter. By taking the most computationally expensive steps in the process described herein and distributing them over a network510to distributed computers, several advantages can be realized. The cloud-based system can include any number or combinations of first computing systems520(servers, mainframes, etc.) to provide powerful computation services. There can also be smaller second computing systems530, for example, mobile phones, tablet computers, personal computers, etc. where users can access and interact with the system. Both the first computing system520and the second computing system530can be connected by the network510to each other and to any number of CNC machines. The network510can be a wired or wireless network.
Calibration Between Camera Images And Machine Coordinates
FIG.6is a diagram illustrating a mapping between a pixel in an image and the equivalent point in a CNC machine, consistent with some implementations of the current subject matter. Optical systems such as cameras provide greater utility when their output is correlated with other values such as known physical references. This can occur by a process of one-time, repeated, or continual calibration, in which, for example, measurements of known quantities can be made by optical systems and any deviation of the measured value from the known value can be recorded and used to compensate for future deviations. For this reason, correct optical calibration is crucial to CNC machine function and performance. Optical calibration by processes such as head homing can be part of factory set-up, system resets, and responses to observed alignment errors that may otherwise occur.
The premise of one example of this approach is to use optical measurements to leverage relationships between features with known system geometry in order to calculate unknowns of interest. As described herein, there can be one or more cameras located throughout the interior of the CNC machine. These cameras can generate images according to their individual configuration. For example, the cameras can be oriented at varying angles relative to the material bed, contain optical components that transform or distort the images acquired with the cameras, and so on, with any or all of these affecting the final image. A mapping may be created by which a pixel in the image is determined to correspond to one or more locations in the CNC machine. As shown inFIG.6, a pixel can correspond to a particular spot on the material bed. In some implementations of the current subject matter, an image of a fixture and/or calibration tool that is not necessarily part of the CNC machine may be captured with at least one camera located within an enclosure that also contains the fixture and/or the calibration tool. A mapping relationship can be applied to map between a pixel in the image to CNC machine coordinates. As used herein, CNC machine coordinates can refer to a location at which the tool can process material within the working area of the CNC machine. Such approaches lend themselves to a variety of mapping and/or locating purposes during factory calibration, troubleshooting, or at other points in the lifecycle of the CNC machine. To further illustrate,FIG.14is a diagram illustrating an example of how such fixtures or calibration tools may be used to establish the precise location of pixels by making use of a set of unique identifiers. As shown inFIG.14, the set of unique identifiers can be a grid that includes one or more unique patterns (e.g., unique 6×6 patterns and/or the like).
That is, the unique pattern in each grid can encode a corresponding unique identifier indicative of the location of that grid. In this example, CNC machine optics may be used to obtain at least one field of view of one or more grids. As each pattern is unique, the CNC machine may determine, based on the observed pattern and the corresponding unique identifier, which precise piece of the grid the camera at a given location is observing at a given time. It should be appreciated that a random number generator may be used to generate the grids. For instance, the random number generator can be used to generate the pattern included in each grid. The pattern in each grid may further be tested for uniqueness relative to the pattern included in the other grids. Furthermore, movement and/or rotation (e.g. in the X, Y or Z dimensions) of an optical system component (such as a camera) followed by re-imaging of such a grid can yield positional data used in downstream location-based calculations including, for example, calculations to establish, with a threshold level of confidence, the location of an optical component in relation to other internal CNC machine components. In some implementations of the current subject matter, use of a fixture and/or calibration tool having two or more tiered steps may be a useful way of establishing a library of optical image data that can be used as a reference for interpolating positional information, or for application of appropriate de-warping algorithm parameters that should be used when materials of varying thicknesses or properties are being analyzed or machined. The two or more tiered steps of the fixture and/or calibration tool may be achieved by having a stepped design to the fixture or calibration tool itself (as shown inFIG.10C), or by virtue of a moveable component to the fixture or calibration tool itself. 
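The grid-generation approach described above, in which a random number generator produces candidate patterns that are then tested for uniqueness against all previously accepted patterns, can be sketched as follows. The binary encoding, pattern size, and seed are illustrative assumptions.

```python
import random

# Hedged sketch of generating a calibration grid of unique 6x6 binary
# patterns with a random number generator, rejecting any candidate that
# duplicates an already-accepted pattern, as described above.

def generate_unique_patterns(count, size=6, seed=42):
    rng = random.Random(seed)          # seeded for reproducibility
    seen, patterns = set(), []
    while len(patterns) < count:
        cells = tuple(rng.randint(0, 1) for _ in range(size * size))
        if cells not in seen:          # uniqueness test against prior grids
            seen.add(cells)
            patterns.append(cells)
    return patterns

grid = generate_unique_patterns(16)    # e.g., a 4 x 4 arrangement of unique tiles
```

Because each accepted pattern is unique, a camera observing any one tile can infer which part of the grid is in its field of view, as described above.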
For instance, the fixture and/or the calibration tool may reside on a platform that can be precisely moved by a certain measurement (e.g. lifted with nanometer precision in the Z direction). Alternatively and/or additionally, precision calibration may be achieved by the use of a fixture and/or calibration tool that incorporates tilt within its range of movement. These approaches may benefit from complementary calibration tools, such as use of depth-finding measurements to measure, verify or calibrate the associated height of a given tiered step; a laser range-finder may be useful in such applications (seeFIG.10). These approaches may have application in the replacement of machine parts (for example, as a result of wear and tear from regular use), and when the installation of a replacement part (e.g. a head) needs to be calibrated to align with the positioning of the previous part. In such cases, the previous data from factory calibrations may be used to pre-load the most appropriate calibration settings into the new part ahead of installation into the target machine. Subsequent calibrations may also be performed based on earlier calibrations. For example, a high-precision calibration may be performed before processing a detailed print where it may be necessary to correct for, and operate with, extreme precision (e.g. nanometer-scale). In such cases, a re-calibration that takes into account the minor variations and manufactured differences between the profiles of fixtures or tools used in the original factory calibrations may be necessary.
Unique identifiers within the fixtures themselves may thus be useful for this purpose of enabling retrospective adjustments or calibrations relative to that of the original factory calibration; for example, if precise machining is required, the ultra-precise measurements of the individual fixture that was used in the initial factory calibration may be taken into consideration either at the time of original calibration or whenever the user indicates a higher level of calibration precision is needed (e.g., via activation of a deep scan mode). FIG.7is a diagram illustrating how different features at different distances from a camera can appear the same in images acquired by the camera, consistent with some implementations of the current subject matter. In another example, a pixel can refer to a range of possible locations in 3-dimensional space. The fact that there can be a range of possible locations is a limitation of using only a single 2D image to locate a spot in 3-dimensional space. InFIG.7, the imaged spot (the solid black portion in the material) can appear the same for a small feature close by, offset very little from the camera (FIG.7A i), a medium-sized feature offset a medium distance from the camera (FIG.7A ii), or a large feature far away, offset significantly from the camera (FIG.7A iii). A distance measurement, for example by a stereoscopic view, laser rangefinder, or a known material constant such as the height of the material being imaged, can disambiguate these situations and provide a solution to determine the correct singular location corresponding to the pixel(s) in the image. For this reason, some implementations can have multiple inputs to the image mapping, for example using multiple images or an image plus a distance measurement to locate a position in three-dimensional space.
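One minimal way to realize a pixel-to-machine-coordinate mapping is an affine fit over calibration pairs of known pixel and machine positions. This sketch assumes lens distortion has already been corrected and a single fixed material height; the scale and offset values are hypothetical.

```python
import numpy as np

# Hedged sketch of a pixel -> machine-coordinate mapping.  Given calibration
# pairs (pixel position, known machine position), fit an affine transform by
# least squares and use it to map new pixels.  A real camera would also need
# lens-distortion correction and a distance measurement, as discussed above.

def fit_affine(pixels, machine_coords):
    """Solve [x y 1] @ A = [X Y] for the 3x2 affine matrix A."""
    px = np.hstack([pixels, np.ones((len(pixels), 1))])
    A, *_ = np.linalg.lstsq(px, machine_coords, rcond=None)
    return A

def pixel_to_machine(A, pixel):
    x, y = pixel
    return np.array([x, y, 1.0]) @ A

# Hypothetical calibration: 10 pixels per millimetre, origin offset by (5, 5) mm.
pixels = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
coords = np.array([[5, 5], [15, 5], [5, 15], [15, 15]], dtype=float)
A = fit_affine(pixels, coords)
pt = pixel_to_machine(A, (50, 50))
```

As noted above, such a fit is only valid at the camera-to-material distance at which it was calibrated.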
In some implementations, an image of at least a portion of a material bed of a CNC machine can be captured with at least one camera located inside an enclosure that contains the material bed. A mapping relationship can map between a pixel in the image to a location at which the tool can affect a material within the processing area of the machine. In some implementations, the determination can be stored as a mapping file or calibration file that can be used during operation of the CNC machine to translate instructions for individual pixels, for example to commence machining at a location in an image indicated by the user, into commands for the CNC machine that reflect that goal. The determination can include compensating for a difference in the image relative to a shape of the material, shape of the material bed, or a shape of any object or feature shown in the image. More generally, the determination can include creating a mapping relationship that maps a pixel in the image to a location by compensating for one or more differences in the image relative to one or more physical parameters of the laser CNC machine and/or the material. These differences can include one or more of a shape of the material bed, lighting conditions within a housing of the laser CNC machine, a thickness of the material, a height of an upper surface of the material relative to the material bed, or the like. FIG.8is a diagram illustrating continuous recalibration during execution of a motion plan, consistent with some implementations of the current subject matter. In some implementations, such calibration can occur in real-time, including continuous re-calibration during the execution of a motion plan. For example, an instruction to draw a circle is executed (to generate the circle on the left side ofFIG.8).
The optical sensor can indicate that the expected path is broken, for example because the machine was bumped during processing, which, if uncorrected, could result in a line as shown by the right side ofFIG.8. In this example, the imaging system can observe the location of the tool head, the tool marks, and the material, and recalculate the relationship between all parts and the machine frame, bringing the system back to a known state. The machine can optionally then complete the operation correctly. Furthermore, the state of the pixel, for example, a color, can be translated into an instruction for one or more components of the CNC machine, for example, the laser. One example method of calibration can be to deliver laser energy to one or more locations that are shown in the image. Based on a known location of the laser head, the location corresponding to a particular pixel in the image can be determined by detecting the effect of the laser on the material through the change to the particular pixel. One-to-one mapping is an example implementation of this, whereby a spot-to-pixel map is generated over the repeated process of drawing a spot at a location (for example chosen sequentially or at random), using a camera to image the spot and relate it to a pixel, and so on until a spot-to-pixel map of known locations is built. The pixel coordinates in the image and the corresponding location coordinates of the laser head in the CNC machine can be stored as the mapping file. During operation, the mapping file can be used directly, or can be used as a basis for interpolation or extrapolation to convert pixel coordinates to location coordinates and vice versa. Such a mapping is only correct at a fixed distance from the camera equal to the distance of the material on which the spots were drawn; for that reason, the distance between the camera and the material may be stored so that the correct distance may be confirmed before utilizing the mapping.
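One iteration of the spot-to-pixel mapping loop described above can be sketched as follows: the head marks the material at a known machine location, the material is imaged before and after, and the brightest pixel of the difference image is recorded against that location. The frame contents and coordinate values are synthetic.

```python
import numpy as np

# Hedged sketch of building one entry of a spot-to-pixel map: the pixel
# corresponding to a known machine location is found as the largest change
# between a before-mark and an after-mark image.

def locate_spot(after, before):
    """Return (row, col) of the largest pixel change between two frames."""
    diff = np.abs(after.astype(np.int16) - before)
    return np.unravel_index(np.argmax(diff), diff.shape)

spot_map = {}
before = np.zeros((10, 10), dtype=np.uint8)
after = before.copy()
after[3, 7] = 255   # simulated mark made at machine location (x=20, y=30)
spot_map[(20.0, 30.0)] = locate_spot(after, before)
```

Repeating this for many locations yields the mapping file described above, usable directly or as a basis for interpolation.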
In another implementation, there may be multiple such operations performed to determine the parameters of that mapping at various distances. The distance to the actual material may be measured, for example via a laser rangefinder or entered manually by the user typing in a known quantity, and the mapping with the closest distance used. In another implementation, two or more such mappings at various distances may be used to create a three dimensional model of how images at various x, y, and z coordinates (that is, 2D locations and distances from the camera) will appear on the camera, so the process may be reversed and accurate locations determined for images of arbitrary flat or variable-depth surfaces. This application may be important when the precision of the CNC machine as a whole may be compromised and/or the machine has been subjected to even small misalignments that may impact overall print performance. For example, during routine operation, the opening and closing of a lid of a CNC machine in between individual operating runs may result in marginally different positioning of optical components attached to the lid itself, perhaps due to hinge variance, the limit of certain manufacturing tolerances, operator inconsistencies, and/or the like. The level of stringency required in modeling interior components of a CNC machine in such situations may vary from taking and combining positional data from multiple optical images, through to the use of stereovision to take multiple or 3D images to triangulate position, and/or the use of uncertainty modeling of the variables that may be most likely to cause misalignment (i.e., modeling of lid hinge joint parameters and the associated confidence intervals that exist for multiple open/close events). Furthermore, machine learning models (e.g., neural networks and/or the like) may run in the background, routinely analyzing every image taken, across a variety of calibration-related applications.
Such applications may include, for example, location-based calibrations (e.g., identification of a reference location for mobile machine components, as in the case of head homing, or camera alignments to ensure precise processing directly over a desired material location), material calibrations (e.g., determining material size, whether objects have been placed in the CNC machine for scanning or by accident, or to provide calibration context to an object, such as detecting the brand and model of an electronic device placed in the machine for tracing or engraving), anomaly adjustments (e.g., detected scorch marks leading to adjustments to air fan directionality or speed), and/or the like. The calibration process can include one or more mathematical operations that are performed to correct or compensate for non-linearity between the image and the actual geometry of the material bed, such that the image is converted to appear as an accurate representation of the material bed area. One way of considering this process is as removal of distortions arising from proximity of the camera, perspective issues, color variations, etc. that would otherwise cause the end image to be an inaccurate representation of the material bed (and any material positioned on the material bed) rather than a precise and accurate map of how the material bed and any material on it appear on a pixel by pixel basis without the above-noted effects. Head Homing as an Example of Optical Calibration FIG.9 is a diagram illustrating the location of a CNC machine component by optical calibration, consistent with some implementations of the current subject matter. The following example offers one implementation of a multi-step, automated process to precisely locate the head (and attached gantry) in either the x or y dimension, and determine the number of steps in either direction to send to the head in order to calibrate a target position (e.g., a centered position, a “home position” at 0,0, etc.).
In this example, the X and Y actions are assumed to be driven by stepper motors, capable of motion in the X and Y directions in increments of one “step”, or small unit of distance. In this example, FIG.9 shows the process of identifying the Y (vertical) location of a horizontal gantry in a CNC machine. First, the region of the image that contains the gantry is identified by taking an image (FIG.9A), instructing the gantry to move a small distance, taking a second picture (FIG.9B), and then subtracting the second picture from the first (FIG.9C). The result will have dark pixels in regions that are identical and light pixels in regions that changed between images. Other metrics to quantify the pixels that have changed can be implemented, for example, converting each pixel in the image to a numerical value based on brightness, color, or the like. This resulting differential image helps to identify regions of interest in locating the gantry. Next, a high-pass filter such as a Sobel filter can be used to identify the regions of the image (FIG.9D) that are “edges.” This filtered image can, for analytical purposes, be sub-divided into “buckets” that allow for simplified analysis of the high frequency signal data that came out of the Sobel filter (FIG.9E). This enables a variety of heuristics to be used to identify the position of the gantry. Examples of useful parameters used in these heuristics include, but are not limited to, the global intensity score (i.e., the sum of high frequency signal across all buckets), which may, for example, be used to identify when the gantry is in the middle region of the image, where the gantry reflects maximum light. In this case, the algorithm may switch strategies depending on the global intensity score, using for example the maximum bucket score (i.e., the bucket and corresponding score with the most intense high frequency data) to further narrow down the precise location of the gantry when it is known to be in the central region.
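The differencing, edge-filtering, and bucket-scoring steps above can be sketched as follows. This is a simplified, pure-NumPy illustration under assumed conventions (horizontal buckets, a 3x3 Sobel kernel); a production system might use a vision library instead, and the function names are invented for the example.

```python
# Illustrative sketch of gantry localization: subtract two images, apply a
# vertical-gradient Sobel filter, sum edge energy per horizontal "bucket",
# and report the strongest bucket plus the global intensity score.
import numpy as np

SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_y(image):
    """Absolute horizontal-edge response over the valid interior region."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += SOBEL_Y[dy, dx] * image[dy:dy + h - 2, dx:dx + w - 2]
    return np.abs(out)

def bucket_scores(edge_image, n_buckets):
    """Sum edge energy over n_buckets horizontal bands of the image."""
    return [float(band.sum()) for band in np.array_split(edge_image, n_buckets, axis=0)]

def locate_gantry(img_before, img_after, n_buckets=8):
    diff = np.abs(img_after.astype(float) - img_before.astype(float))
    scores = bucket_scores(sobel_y(diff), n_buckets)
    # (index of the maximum bucket score, global intensity score)
    return scores.index(max(scores)), sum(scores)
```

The returned pair corresponds to the two heuristics named in the text: the maximum bucket score narrows down the gantry row, while the global intensity score can drive the strategy switch.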
Calculated parameters, such as the global intensity and maximum bucket scores, may then be examined within the context of pre-determined, factory-calculated power functions, which can express the expected levels of high frequency signal relative to known head positions along a given axis (FIG.9D). This “homing context step” can be conducted using factory calibrations at the level of the individual machine, a manufactured lot of machines, or in the broadest sense (i.e., used across an entire model or product line). The decision as to which level of calibration is most appropriate can depend on manufacturing variances (e.g., variance attributable to small differences in lid camera positioning or offsets, potential aberrations in the gantry, etc.), or on the degree of imaging sophistication used in calibration (e.g., 3D scanners). The position of the head can also be calibrated by moving the unit to a known position, for example driving the motors until they are not capable of further travel, and then moving to different locations and capturing images that can be used for calibration. Selection and interpretation of the parameters that are deemed important in the homing context step may vary, or require differential thresholds to optimize their application. As an example, the internal features within the CNC machine itself may possess position-dependent high frequency signal properties that may confound signal interpretation, thus rendering one calculated parameter superior to another for the purpose of locating the head along a particular axis. FIG.9D shows that for positions where the head is towards the front of the CNC machine, a reflective laser tube installed on the gantry is a prominent feature imaged by the lid camera, resulting in larger global scores that can be reliably used to locate the y-axis. Conversely, the maximum bucket score may be more appropriate for robust interpretation of y-axis positioning in cases where the gantry is closer to the rear of the unit.
Further manipulations to the lighting environment within the CNC machine (e.g., turning internal lights off or additional lights on) may be utilized in order to generate more favorable conditions during calibration. In an alternative implementation of head homing, pattern recognition techniques may be useful. For example, as shown in FIG.9F and FIG.9G, image capture of a fiducial mark (e.g., a logo located on the head of a CNC machine—shown by the “*”) may be used for pattern matching, scaling in and out relative to a known image until fine-grained calibration accuracy is achieved. Height-Sensing as an Example of Optical Calibration FIG.10 is a diagram illustrating a method for optical calibration based on height sensing, consistent with some implementations of the current subject matter. A variety of methods may be used to achieve accurate determination of the distance from the tool to the surface of the material, or other key spatial relationships that are important for proper machine tool usage. In one implementation, shown in FIG.10A, a distance-measuring laser1010 that is visible to an optical sensor such as a camera shines a dot on the surface of the material140; the dot can be imaged by the camera, and its location within the frame of the camera may be used to interpolate heights of the surface of materials given some calibration data (FIG.10B). The distance-measuring laser can be replaced by a collimated light source, a traditional LED with low divergence, or other means of projecting a dot that can be imaged by the camera. As shown in FIG.10C, some implementations can benefit from the use of a specially calibrated jig with a number of steps of known heights.
During calibration, the position of the head can be moved in such a way that it shines light (e.g., from the aforementioned laser or collimated light source) on steps of the jig; images of the visible light may be taken at each position, and the height of the jig—known from manufacturing parameters of the jig—can be recorded. The visible beam strikes the material at an angle, such that the position of the dot in the frame of the camera image will move an amount related to the step height; by calibrating where the beam lands in the camera image for a sufficient number of step heights, interpolation for heights within this calibrated range is possible. In an extension to this implementation, de-warping techniques (e.g., known reference regions, such as checker-board patterns of known shape, against which distortion may be measured and compensated for) may be used to calibrate intrinsic camera parameters (e.g., tilt, pose, fisheye, chromatic aberration, etc.) relative to its view of the bed. Material Thickness Determination—General A variety of methods can be used to determine the thickness of the material140 to be cut or engraved. One method can be to determine the height of a top surface of the material140 and compare this height to a known position of a bottom surface of the material140. Typically, though not necessarily, the bottom surface of the material140 coincides with the top surface of the material bed150, which can be of a known height. The difference between the height of the top surface of the material140 and the height of the bottom surface of the material140 can then be determined to be the thickness of the material140. In another implementation, the process used to determine the thickness of the material140 can be calibrated by measuring a material140 with a known thickness. For example, an object with a 1 cm thickness can be placed on the material bed150.
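The step-jig calibration and interpolation described above can be sketched as a piecewise-linear lookup from dot position to height. This is an illustrative sketch only; the function names and the one-dimensional (pixel row) parameterization of the dot position are assumptions made for the example.

```python
# Illustrative sketch: record (dot pixel position, known step height) pairs
# from the calibration jig, then interpolate the height of an unknown surface
# from where its dot lands in the camera frame.
import numpy as np

def calibrate_height(dot_pixel_y, step_heights_mm):
    """Sort the (pixel, height) calibration pairs by pixel position."""
    order = np.argsort(dot_pixel_y)
    return np.asarray(dot_pixel_y, float)[order], np.asarray(step_heights_mm, float)[order]

def height_from_dot(cal_pixels, cal_heights, observed_pixel_y):
    """Piecewise-linear interpolation within the calibrated range."""
    return float(np.interp(observed_pixel_y, cal_pixels, cal_heights))
```

As the text notes, the interpolation is only valid within the range of step heights covered by the jig.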
Data can be acquired by the cameras and the data can be associated with the known thickness of the object. In another implementation, the cameras can determine the height of the surface the material140 is resting on. For example, if there are other pieces of material140 between the topmost material140 and the material bed150, the cameras can measure the height of the topmost surface before the material140 is inserted, or measure the height of the topmost surface in a location not obscured by the material140. In one implementation, the height at different points can be measured, for example in a grid pattern, on the surface of the material140 in order to characterize the curvature of the material140. Once the height at many points on the material140 is known (and consequently the surface curvature), instructions can be generated so that one or more actuators can follow the curve of the material140. For example, a cutting laser can be kept in focus, a camera can be kept in focus, a 3D printer head can maintain a constant separation from the material base, or a CNC milling bit can be kept a constant distance from the material140. Furthermore, the same process can be repeated with the bed or surface upon which the material rests, for example before the material is placed on the bed. Then, the knowledge of the possibly-curved surface of the bed and the possibly-curved surface of the material can be combined by subtracting the bed height from the material height at each x-y location. There may be voids between the bed and the material, for example if the material is warped, but if the material is of uniform thickness and touches the bed at a minimum of one location, then the smallest value of (material height − bed height) will be the material thickness. Material thickness, distinct from height, is useful in, for example, ensuring that tab-and-slot joinery will fit properly.
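The thickness rule stated above (the minimum of material height minus bed height over the sampled grid) can be written directly; this minimal sketch assumes both surfaces were sampled at the same x-y grid points.

```python
# Minimal sketch of the thickness calculation: subtract the bed height from
# the material height at each grid point; for uniform-thickness material that
# touches the bed somewhere, the minimum difference is the thickness (larger
# differences indicate voids under warped material).
import numpy as np

def material_thickness(material_heights, bed_heights):
    """Both arguments are grids of heights sampled at the same x-y points."""
    diff = np.asarray(material_heights, float) - np.asarray(bed_heights, float)
    return float(diff.min())
```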
Therefore, in some implementations, the height of the material relative to a known height of a point in the CNC machine can be determined. The point can include a point on the head, a fixed point on the housing of the CNC machine, a structural feature connecting the laser to the material bed, or the like. Heights can be measured at multiple locations in a preset pattern, such as a grid, across the material, or at a sufficient number of random locations on the material. Based on any number of height measurements, the minimum height can be determined and assigned as the material thickness. Once the distance between the surface and a lens (or any other reference point in the CNC machine100) is known, this can be incorporated to precisely control the height of the head160 (and optics internal to the head160) when machining. Contrast detection, phase detection, or any other distance finding techniques described herein can also be implemented on other machines, for example, a CNC mill where the distance determines where the head160 is to position a bit. In this way, the motion plan can incorporate, for example, contrast detection, autofocus, etc. to perform real-time analysis of the position of the material140 and/or the position of the head160 relative to the material140. Direct Inspection of Material Thickness In another implementation, the material140 can be imaged by a camera at a low angle relative to the surface of the material. The angle can be, for example, 0 degrees (parallel to the surface), less than 5 degrees, less than 10 degrees, etc. This “edge-on” view allows a direct determination of the height of the material. Here, an image of the material can be acquired. The height or thickness of the material140 is related to the number of pixels of the material in the image. In some implementations, a distance measurement between the camera and the edge of the material can first be performed.
Based on the distance from the camera to the edge which is being imaged, a conversion can be performed between the height in pixels and the material height. FIG.11 is a diagram illustrating a method for correlating the height of the material and the focal length of the lens, consistent with some implementations of the current subject matter. In some implementations, for example those described in FIG.9, knowledge of the position of the lens within the height system can be important for optical calibration. Therefore, the height of the material and the focal height (Z) of the lens must be correlated. The height of the material may be determined as described above. One approach to determining the Z height of the lens can involve construction of a series of calibration cuts, made by using the processing laser to cut material at different lens positions, as instructed by the stepper motor driving the lens up and down (FIG.11A). The radial spokes diagram in FIG.11B provides an example of a pattern that can be used to calibrate for Z height. Each spoke can correspond to a different Z height of the lens. This particular pattern offers the advantage of being imaged in a single frame by a camera that is mounted near the lens, close to the material, with a narrow field of view. The pattern may be positioned (e.g., on a calibration jig with a disposable insert upon which the pattern is engraved) such that no additional material changes between height-sensing and Z-height calibrations are necessary. In some implementations, the area of each of the radial spokes can be calculated using a Bayesian approach that analyzes the distribution of line thicknesses and determines the position of the lens that would produce the thinnest line, even if that position is between two of the example lines drawn. This thinnest line necessarily occurs when the distance between the lens and the material is equal to the focal length of the lens.
Since this approach does not rely on any assumed knowledge of the focal length of the lens, but rather calculates the optimal lens distance from the material at a given material height from observed behavior, this particular approach accounts for any manufacturing variance in focal length of the lens that may exist. Note that it does not identify the focal length of the lens numerically, only the stepper motor settings that position the lens at the proper distance from the material. In other implementations, lines of varying width can be cut or engraved in the material, where the varying can be performed by scanning a focal point of the laser by movement of one or more optical components in the laser or laser head. The scanning can be fixed during one line (e.g., one of the radial lines) but changed between each radial line. Any number of lines can be made at different CNC machine settings. The width of the lines can be imaged and calculated, and an optimal configuration of the laser head can be determined based on finding the thinnest line. When the optimal configuration lies between settings that were tested, the function of the change in line width versus the changes in CNC machine settings can be extrapolated, interpolated, or otherwise curve-fit to determine the optimal settings (e.g., the position of a focusing lens that results in the most focused laser beam at the material surface) based on the available sample cuts. Additionally, such an approach as described in FIG.11 may also be used during periodic calibrations during the lifecycle of a CNC machine, such as for the purpose of analyzing and detecting drift in kerf over time.
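The curve-fitting step described above can be sketched with a simple quadratic fit: measured line widths as a function of lens stepper position are fitted to a parabola, whose vertex gives a best-focus setting that may lie between the tested positions. This is one plausible curve-fit among those named in the text (extrapolation, interpolation, or other fits); the function name is invented for the example.

```python
# Illustrative sketch: fit line width vs. lens stepper position with a
# parabola and return the vertex as the best-focus stepper setting.
import numpy as np

def best_focus_steps(lens_steps, line_widths):
    """Return the stepper position minimizing the fitted width curve."""
    a, b, _c = np.polyfit(lens_steps, line_widths, 2)
    if a <= 0:
        # An upward-opening parabola is required for a minimum to exist.
        raise ValueError("width curve has no minimum; calibration data suspect")
    return -b / (2 * a)  # vertex of the parabola
```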
This may be performed via machine learning algorithms that instruct the CNC machine to use known calibration patterns, make use of organic images of CNC machine-made cuts in material (i.e., those produced during normal operations), and/or the like in order to detect anomalous kerf readings and provide feedback adjustments of machine parameters. Alternative implementations of material height-sensing calibration methods include use of a reference beam to strike the surface of a material, while a probe beam measures the height of the lens moved by the stepper motor to various Z heights; the distances between the two height readings may be subtracted, and a calibration curve based on the distance from the lens to the material can be created. FIG.12 is a diagram illustrating an absolute determination of the distance of the head from the material, consistent with some implementations of the current subject matter. In another example, the distance from the head containing the laser to the material can be determined absolutely, and not solely in reference to the processing laser's focusing lens. As shown in FIG.12, this can be performed by imaging a spot from the distance-measuring laser incident on a material of known distance relative to the head. Using information on the relationship between laser dot position in the camera image and material distance determined during focus calibration, and the measurement of the laser dot position relative to the camera image combined with the known distance between the material and the head, the absolute distance to any surface can be determined. Location Identification Via Correlation of Lid and Head Camera Images In another implementation, the lid and head camera images may be coordinated such that alignment of their respective images via capture of common images is possible.
For example, it may be possible to de-warp both images to be flat and square, such that the head camera captures a small, common feature also imaged by the lid camera (e.g., a dot on a piece of material that can be used to adjust the offset between the two images until the dot appears in the same location in each). By determining the precise position of the head and dot from the head image (e.g., determined via such approaches as “head homing” outlined above), the head and lid images may be correlated for mapping of broader features that are captured by the lid camera, but not within the field of view of the head camera. For example, if a small piece of material is to be imaged precisely, the system may first capture a lid image. The head can then be moved so that a head image may be taken of the region indicated by the lid image to contain the material. The distance of the head to the surface of the material can be measured by means of the distance laser and head camera. At this point, high resolution imagery and depth information of the small piece of material are both available. In a further implementation, the calibration process and the creation of the mapping relationship can also account for deviations between a laser head position and where the laser beam is actually emitted. Such deviations can arise due to small misalignments in the laser beam path and optical components, such that the location of the beam emitted from the head may differ relative to the laser head position according to where in the x-y plane the laser head is currently positioned. For example, if a beam is not in proper alignment with the optical systems, it may not impact the material at the focal point of the lens. Instead, it may drift in its location relative to the focal point as the laser head is moved around the bed. An optical system can measure this drift by printing calibration patterns or by observing errors in requested production processes, and compensate for it.
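The pixel-to-command conversion with drift compensation described here can be sketched as follows. The mapping callable, the linear drift model, and the command dictionary format are all assumptions invented for this illustration, not the disclosed implementation.

```python
# Illustrative sketch: turn a pixel-level instruction into a machine command
# using a stored pixel-to-location mapping plus a linear correction for
# position-dependent beam drift (e.g., slightly misaligned rails).
def pixel_to_command(pixel, mapping, drift_per_mm=(0.0, 0.0), depth_mm=1.0):
    """mapping: callable pixel -> (x_mm, y_mm) built from the mapping file."""
    x, y = mapping(pixel)
    # Apply a linear, position-proportional drift correction per axis.
    x_corr = x + drift_per_mm[0] * x
    y_corr = y + drift_per_mm[1] * y
    return {"op": "engrave", "x": x_corr, "y": y_corr, "depth": depth_mm}
```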
Similar compensation can be used for other predictable errors, such as those which have a linear relationship to the laser head position, for example compensating for misaligned rails in a linear system that introduce an angular offset to straight lines. With the appropriate mapping stored at the CNC machine or the computing system commanding it, the mapping file can be used to convert instructions for the image into commands for the CNC machine. For example, a user, through a graphical user interface, or a motion plan, can instruct that material represented by a given pixel be engraved to a particular depth. The location coordinate in the CNC machine corresponding to the pixel coordinate can be determined from the mapping file. The CNC machine can move the laser head to the location coordinate. The CNC machine can then execute the command for the laser to engrave the material to the particular depth by applying measured amounts of laser power. The compensation applied to the image can include, for example, at least one of: de-warping the image, de-distorting the image, color adjusting the image, measuring a thickness of the material, correcting for a variable alignment between a laser head position and where the laser energy is emitted, correcting for surface topography, correcting for capturing a side view of the material having a thickness that is viewed from the camera, or the like. The image can be modified prior to generating the mapping file and can incorporate additional mappings corresponding to the one or more modifications. Image Aberration Correction FIG.13 is a diagram illustrating correcting aberrations in images acquired by a camera with a wide field of view, consistent with some implementations of the current subject matter. A principal challenge of wide-angle imaging inside a small working space with the unit closed is the distortion introduced by the wide-angle lens required.
Images from cameras, particularly those with a wide field of view, can suffer from multiple types of distortions. In one implementation, an image correction program can be executed to convert distorted image data1310 (which can be considered to be the sum of a perfect image and a distortion) to corrected image data1360 (which can be either a perfect image or at least an image with reduced distortion). The distortion correction can include processing an image to achieve one or more (or optionally all) of removing the distortion, enhancing the image by increasing contrast, and mapping pixels in the image to corresponding physical locations within the working area, or other areas in the CNC machine. The distortions can be due to optical components in a camera, such as a wide angle lens, the de-centration of an imaging sensor within said lens, chromatic aberrations, reflections or reflectivity, damage or undesirable coatings on the lens, etc. These distortions can be compounded given external factors related to the orientation of the camera110 with respect to the material bed150 it is observing as a result of its mount on the lid130, including the camera's position, rotation, and tilt. After making the corrections, the image data can be replaced with, or used instead of, the corrected image data prior to identifying conditions in the CNC machine100 or performing further image analysis. In another implementation, the conversion can be performed by imaging one or more visible features1320 shown in the distorted image data. In the example shown in FIG.13, the visible features1320 can be crosses distributed with a known distance separation across a surface of an object. The distorted image1310, which includes the visible features1320, can be acquired. A partially de-distorted image1330 can be generated by applying a barrel de-distortion function to the distorted image1310.
The partially de-distorted image1330 can be separated into smaller images1340, with each of the smaller images1340 including only one of the visible features1320. The plurality of smaller images1340 can be sorted (as shown by the numbering in the smaller images1340), based on coordinates of the visible features1320, into at least one set of visible features, the set of visible features being approximately co-linear. For example, smaller images 1, 2, 3, and 4 can be determined to be co-linear (in the X direction) and smaller images 1 and 5 can be determined to be co-linear (in the Y direction). Mathematical expressions for a line1350 that passes through each of the coordinates can be calculated for each set of visible features, based on the coordinates of the visible features1320 in the corresponding set. The line1350 can be, for example, a polynomial fit to the set of visible features1320, a spline, etc. The distorted image data1310, at any point in the image data, can be converted to the corrected image data1360 by applying a correction to the distorted image data1310 based on an interpolation of the mathematical expressions to other points in the distorted image data1310. For example, the interpolation can be between lines1350 that extend in two orthogonal directions (i.e., the grid pattern shown in FIG.13). The linear distance between the interpolated lines can correspond to less than 5 pixels, less than 3 pixels, or a single pixel. Optionally, coarser interpolation can be used that extends over more pixels than those mentioned previously. In other implementations, a preview can be generated that can include a view of the material and one or more features of a design to be cut into the material. The one or more features can reflect the determined relationship between the individual pixels in the image and a number of locations at which the laser can deliver energy to make cuts in the material. The view can be a graphical image generated on a computing device.
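The line-fitting and interpolation correction described above can be sketched for one axis: fit a polynomial through each approximately co-linear set of detected crosses, then map an observed y-coordinate to its corrected value by interpolating between the two fitted lines that bracket it. This is a simplified one-axis sketch with invented function names; the full correction described in the text interpolates between lines in two orthogonal directions.

```python
# Illustrative sketch: polynomial lines fitted through co-linear feature sets,
# used to interpolate a corrected y-coordinate for an arbitrary observed point.
import numpy as np

def fit_grid_lines(rows_of_points, degree=2):
    """rows_of_points: list of (x, y) point lists, one list per grid line."""
    return [np.polyfit(*zip(*pts), degree) for pts in rows_of_points]

def corrected_y(lines, true_ys, x, y):
    """Map observed (x, y) to a corrected y via the two bracketing lines.

    true_ys gives the known physical y-coordinate of each fitted line.
    """
    fitted = [np.polyval(c, x) for c in lines]
    for i in range(len(fitted) - 1):
        lo, hi = fitted[i], fitted[i + 1]
        if lo <= y <= hi:
            t = (y - lo) / (hi - lo) if hi != lo else 0.0
            return true_ys[i] + t * (true_ys[i + 1] - true_ys[i])
    raise ValueError("point outside calibrated grid")
```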
For example, a camera at some angle relative to the material bed can image a square drawn on a piece of wood placed on the material bed. Because the camera is not looking directly down at the material, the image initially captured will appear elongated in some direction. By applying the mapping, the view of the material and square can be generated not in the elongated shape but instead modified to appear with the proper proportional dimensions. A user can enter instructions to modify the image, for example, drawing a diagonal line through the square. Based on the mapping between the pixel coordinates in the modified image and the location coordinates in the CNC machine, commands can be issued to the CNC machine to cut or engrave the indicated line at the proper location coordinates. During the calibration process, or at any time during operation of the CNC machine, the image can be captured with a first camera, and a second camera can be directed to acquire a second image corresponding to at least part of the first image. Also, a location of the second camera can be determined using the first camera. In some implementations, the first camera can be the lid camera and the second camera can be the head camera, though in other implementations any camera operatively connected to or viewing any part of the CNC machine or the material can be the first camera or the second camera. In some implementations, operation of a component can be imaged by a camera as part of calibrating the component. For example, the laser head can be commanded to take a certain number of steps via its actuator. A camera can image how the laser head moves during execution of the command and convert those steps into a sequence of location coordinates. The coordinates of the laser head can be compared with the command to, for example, provide a calibration factor for the system when commanding the laser head to move, detect skips or other anomalies in movement of the laser head, etc.
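The perspective correction described above (restoring proper proportions to the elongated view of a known square) is commonly expressed as a homography. The following is a hedged sketch, not the disclosed implementation: it estimates a homography from four pixel/location correspondences, such as the corners of the imaged square, and applies it to map any pixel to material coordinates.

```python
# Illustrative sketch: direct linear transform (DLT) homography fit from
# exactly four point correspondences, with the bottom-right entry fixed to 1.
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Solve for H such that dst ~ H @ src (homogeneous coordinates)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Map a 2-D point through H with perspective division."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

With more than four correspondences, a least-squares variant of the same system would reduce sensitivity to detection noise.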
Other components in the CNC machine can also be imaged in a similar manner and the images translated into real coordinates in the CNC machine for analysis or calibration. FIG.15 is a flowchart illustrating a process1500 for calibrating a CNC machine, consistent with implementations of the current subject matter. Referring to FIG.15, the process may be performed by the CNC machine100 and/or one or more data processors coupled with the CNC machine100. The CNC machine100 can capture an image of at least a portion of the CNC machine100 using at least one camera located within an enclosure that contains a material bed of the CNC machine100 (1502). In some implementations of the current subject matter, the CNC machine100 can include one or more cameras (e.g., head camera160) that are capable of capturing an image of at least a portion of the interior of the CNC machine100. These cameras can include, for example, a single wide-angle camera, multiple cameras, a moving camera where the images are digitally combined, and/or the like. The CNC machine100 can create a mapping relationship that maps a pixel in the image to a location within the CNC machine100 while compensating for a difference in the image relative to one or more physical parameters of the CNC machine100 and/or a material positioned on the material bed of the CNC machine100 (1504). As noted, optical systems such as, for example, cameras and/or the like, can provide greater utility when their output is correlated with other values, such as known physical references within the CNC machine100. Thus, a one-time, repeated, and/or continual calibration can be performed such that measurements of known quantities can be made by optical systems, and any deviation of the measured value from the known value can be recorded and used to compensate for future deviations.
In some implementations of the current subject matter, these calibrations can include mapping a pixel in the image captured by the cameras at operation1502 to a physical location within the CNC machine100. For instance, this mapping may include translating the pixels within the image to CNC machine coordinates that correspond to locations within the CNC machine100 at which a tool coupled with the CNC machine100 (e.g., a laser, a drill, and/or the like) can process a material disposed within the CNC machine100 (e.g., on the material bed150). According to various implementations of the current subject matter, a method for calibrating a CNC machine can include capturing one or more images of at least a portion of the CNC machine. The one or more images can be captured with at least one camera located inside an enclosure containing a material bed. A mapping relationship can be created which maps a pixel in the one or more images to a location within the CNC machine. The creation of the mapping relationship can include compensating for a difference in the one or more images relative to one or more physical parameters of the computer-numerically-controlled machine and/or a material positioned on the material bed. The CNC machine can include at least one tool configured to deliver electromagnetic energy to at least one location on the material positioned on the material bed. The one or more physical parameters of the CNC machine and/or the material can comprise one or more of a shape of the material bed, lighting conditions within a housing of the CNC machine, a thickness of the material, and a height of an upper surface of the material relative to the material bed. A pixel coordinate in the one or more images can be mapped, based at least on the mapping relationship, to a location coordinate in the CNC machine. An instruction corresponding to the pixel coordinate can be converted to a command for execution at the location.
The conversion can be based at least on the mapping of the pixel coordinate in the one or more images to the location coordinate in the CNC machine. The command can be executed as part of a motion plan for the CNC machine. The compensation can include at least one of: de-warping the one or more images, de-distorting the one or more images, color adjusting the one or more images, measuring a thickness of the material, correcting for a variable alignment between a position of a source of electromagnetic energy and a location where electromagnetic energy is emitted, correcting for surface topography, and correcting for capturing a side view of the material having a thickness that is viewed from the camera. A preview including a view of the material and one or more features of a design to be cut into the material can be generated. The one or more features can reflect at least the mapping relationship between the pixel in the one or more images and the location within the computer-numerically-controlled machine. The location can correspond to a location to which electromagnetic energy is delivered in order to effect at least one change in the material. The capturing of the one or more images can include capturing a first image with a first camera and directing, based at least in part on the first image, a second camera to capture a second image. The location of the second camera can be determined based at least on the first image and/or the second image. The creation of the mapping relationship further includes compensating for a deviation between a source of electromagnetic energy within the computer-numerically-controlled machine and a location to which the electromagnetic energy is delivered. The image can include one or more unique patterns from a calibration tool that includes a plurality of unique patterns. The mapping relationship can be created based at least on a location of the one or more unique patterns within the calibration tool. 
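One of the listed compensations — correcting the apparent position of a point on the upper surface of a material of known thickness, as seen by a camera mounted above the bed — can be sketched with similar-triangle geometry. The pinhole-camera assumption and all variable names here are illustrative, not taken from the patent.

```python
def correct_for_thickness(x_apparent, camera_x, camera_height, thickness):
    """Shift an apparent bed-plane coordinate back to the true position
    of a point on the material's upper surface.

    A pinhole camera at (camera_x, camera_height) sees a point at height
    `thickness`; projecting that ray down to the bed plane exaggerates
    its offset from the camera axis by H / (H - t). Invert that scaling.
    """
    scale = (camera_height - thickness) / camera_height
    return camera_x + (x_apparent - camera_x) * scale
```

For a camera 300 mm above the bed centered at x = 0 and a 30 mm thick material, a surface point that appears at x = 100 mm on the bed plane is actually at x = 90 mm; with zero thickness the correction vanishes, as expected.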
One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. 
The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores. To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including, but not limited to, acoustic, speech, or tactile input. Other possible input devices include, but are not limited to, touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive trackpads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like. In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. 
Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” Use of the term “based on,” above and in the claims is intended to mean, “based at least in part on,” such that an unrecited feature or element is also permissible. The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. 
Other implementations may be within the scope of the following claims. | 108,331 |
11860602 | DETAILED DESCRIPTION FIG.1is a diagram showing various typical components of a metrology system1including a generic CMM, which provides one context for application of the principles disclosed herein. Certain aspects of the metrology system1are further described in the previously incorporated '746 patent. The metrology system1may include: a CMM body2; a motion controller3that controls a drive of the coordinate measuring machine body2; an operating unit4for manually operating the coordinate measuring machine body2; a host computer5that issues commands to the motion controller3and executes processing such as for the inspection of features on a workpiece10(i.e., an object to be measured) that is disposed on the CMM body2. A representative input unit61and output unit62are connected to the host computer5, as well as a display unit5D. The display unit5D may display a user interface, for example, as described in more detail below. The CMM body2may include: a probe21having a stylus tip21T which may contact a surface of the workpiece10; a movement mechanism22that includes a three axis slide mechanism24that holds the base end of the probe21; a measurement stage23that holds the workpiece10and on which the drive mechanism25moves the slide mechanism24. In various implementations, the drive mechanism25may be controlled by a CMM control portion (e.g., including the motion controller3). As will be described in more detail below, in various implementations one or more sensors of the CMM (e.g., including the probe21and/or stylus tip21T) may be moved relative to the measurement stage23(e.g., as controlled by the motion controller3) and utilized for determining workpiece feature measurement data (e.g., with regard to physical dimensions of features of the workpiece10). 
FIGS.2A and2Bare diagrams of a computing system105including various elements of one implementation of a programming system including a programming portion202on which workpiece feature inspection operations may be programmed for a CMM (e.g., the CMM body2ofFIG.1). As shown inFIG.2A, in various implementations the computing system105(e.g., the computer5ofFIG.1or a separate computer) may include a memory portion170, a display portion175, a processing portion180, an input-output devices portion185and the programming portion202. The memory portion170includes resident programs and other data utilized by the computing system105. The display portion175provides the display for the computing system105(e.g., similar to the display unit5D ofFIG.1), including the features provided by the programming portion202. The processing portion180provides for the signal processing and control of the computing system105, while the input-output devices portion185receives and provides control signals and outputs to and from various devices (e.g. the CMM controller3ofFIG.1). As shown inFIGS.2A and2B, in one implementation, the programming portion202includes a CAD file processing portion205, an inspection path and/or sequence manager206, a plan view editing user interface portion210, a 3D view portion220, a program view editing user interface portion230, a first set of feature-directed operations portion235, a transparency operations portion237, an editing operations portion240, which may include an inspection plan modification notices portion249, another operations portion250, a programming environment synchronization and/or notices manager260, an execution time portion270, and a simulation status and control portion280. 
In various implementations, the computer aided design (CAD) file processing portion205inputs a workpiece CAD file corresponding to a workpiece (e.g., the workpiece10ofFIG.1) and analyzes the file to automatically determine inspectable workpiece features on the workpiece corresponding to a plurality of geometric feature types (e.g., cylinder, plane, sphere, cone, etc.). The inspection path/sequence manager206may automatically determine a motion control path that allows the CMM to obtain measurements that characterize the workpiece features. Methods usable for implementing the CAD file processing portion205and/or the inspection path/sequence manager206are known in the art, as exemplified in various commercial CAD products, and/or in CAD “extension programs” for creating inspection programs and/or other known CMM inspection programming systems and/or systems which automatically generate machine tool programs from CAD data. For example, U.S. Pat. Nos. 5,465,221; 4,901,253; 7,146,291; 7,783,445; 8,302,031; 5,471,406 and 7,058,472, each of which is hereby incorporated herein by reference in its entirety, disclose various methods which may be used to analyze CAD data and determine geometric features of a workpiece and then automatically generate a motion control path for placing a probe or sensor at sampling points that measure or characterize the geometric features. European Patent Number EP1330686 also provides relevant teachings. In some implementations, determining the geometric features may simply comprise extracting or recognizing the categorized geometric features inherently defined in some modern CAD systems. In some implementations, product and manufacturing information (PMI, for short) is present in the CAD data, and may be used in the aforementioned processes. In various implementations, PMI conveys non-geometric attributes in CAD data, and may include geometric dimensions and tolerances, surface finish, and the like. 
In some implementations, in the absence of PMI, default tolerances and other default inspection rules may be used in automatic operations of the CAD file processing portion205and the inspection path/sequence manager206. The motion control path may generally define a feature inspection sequence as well as individual inspection sampling points (e.g., touch probe measurement points, or non-contact measurement points, or point cloud determination regions, etc.), as well as the motion/measurement path between such points. Various systems with relevant teachings regarding sampling points and measurement paths are described in U.S. Pat. Nos. 9,013,574; 9,639,083 and 9,646,425, and in U.S. Patent Publication Nos. 2016/0298958 and 2016/0299493, each of which is hereby incorporated herein by reference in its entirety. In various implementations, sequence and motion path planning may follow simple rules that avoid collisions, or more complicated rules or processes that both avoid collisions and optimize motion path length or inspection time, etc. In some implementations, the CAD file processing portion205may include the inspection path/sequence manager206, or they may be merged and/or indistinguishable. Examples of automatic path planning methods may be found in the previously cited references. In various implementations, one or both of the aforementioned automatic processes may be automatically triggered when a target CAD file is identified in the programming portion202. In other implementations, one or both of the aforementioned automatic processes may be triggered in relation to a target CAD file based on operator input that initiates the processes. In other implementations, similar processes may be semi-automatic and may require user input in the programming portion202for certain operations or decisions. 
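A minimal version of the "optimize motion path length" rule mentioned above is a greedy nearest-neighbor ordering of the sampling points. This sketch is illustrative only: real planners described here must also avoid collisions and may optimize globally, which this heuristic ignores.

```python
import math

def nearest_neighbor_order(points, start=0):
    """Greedy visiting order over sampling points: from the current
    point, always move to the closest unvisited one. Not optimal, but
    a common fast heuristic for shortening a measurement path."""
    remaining = set(range(len(points))) - {start}
    order = [start]
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: math.dist(points[i], last))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

For sampling points at (0, 0), (10, 0), and (1, 0), the naive file order would travel 10 + 9 = 19 units, while the nearest-neighbor order [0, 2, 1] travels 1 + 9 = 10 units.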
In any case, in various implementations the aforementioned processes may, in effect, be used to provide a comprehensive inspection plan and/or inspection program for a workpiece. In some contexts, the connotations of the term “inspection plan” may encompass primarily what features are to be inspected and what measurements are to be made on each, and in what sequence, and the connotations of the term “inspection program” may primarily encompass how the inspection plan is to be accomplished on a particular CMM configuration (e.g., following the “instructions” inherent in the inspection plan, but also including the motion speeds and path, the probe or sensor to be used, and so on, for a defined CMM configuration.) Other portions of the programming portion202may use the results of the CAD file processing portion205and the inspection path/sequence manager206to perform their operations and populate and/or control their associated user interface portions, and the like. As shown inFIG.2B, the plan view editing user interface portion210includes an editable plan representation214of a workpiece feature inspection plan for the workpiece corresponding to the CAD file. In various implementations, the program view editing user interface portion230may also (or alternatively) include an editable plan representation234. As will be described in more detail below, elements of a workpiece feature inspection plan in the editable plan representation214or234may generally be reviewed to see which workpiece features are being inspected and in what order, and may also be edited by adding, removing or otherwise altering particular program element operations that are associated with particular workpiece features. In previous CMM programming systems, such reviewing and editing operations have not always been easy for a user to perform, view and/or understand, particularly for relatively unskilled users. 
For example, as disclosed in certain of the incorporated references, certain prior systems have provided different windows with different types of information about the programmed operations, and for which it has been difficult for users to view and/or understand the various effects and features that certain types of selected elements and/or edits may correspond to and/or produce in the different windows. In particular, one issue that may arise is with respect to a workpiece feature (e.g., a “target” feature) that is occluded by another workpiece feature in a 3D view (i.e., as displayed by the 3D view portion220). For example, if a feature-directed operation is performed in the editable plan representation214and/or234(e.g., a user makes a selection for selecting a workpiece feature in the editable plan representation214and/or234), it may be difficult for the user to view or otherwise visualize the corresponding workpiece feature in the 3D view if the corresponding workpiece feature is occluded by another workpiece feature. As will be described in more detail below, in accordance with features disclosed herein, the transparency operations portion237may be configured to automatically identify as a current target feature a workpiece feature in the 3D view that corresponds to a workpiece feature or inspection operation representation that is indicated by a current feature-directed operation (e.g., a selection operation) of the first set of feature-directed operations portion235. After identifying the target feature, the transparency operations portion237may automatically render as at least partially transparent in the 3D view an occluding workpiece feature that would otherwise be occluding at least a portion of the current target feature in the 3D view. The transparency operations portion237may further automatically terminate the transparency operations associated with the current target feature in the 3D view when the current feature-directed operation is terminated. 
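The transparency behavior described above — finding features that sit in front of the selected target along the viewing axis and rendering them semi-transparent — can be sketched with axis-aligned bounding boxes. This is an assumption-laden simplification: a real 3D view would test actual geometry against the current camera pose, and the feature dictionaries here are invented for illustration.

```python
def find_occluders(target, features):
    """Return ids of features whose XY bounding boxes overlap the
    target's and that lie entirely in front of it along the viewing
    axis (viewer assumed to look straight down +Z from above).
    Each feature: {'id': str, 'min': (x, y, z), 'max': (x, y, z)}."""
    tx0, ty0, _ = target['min']
    tx1, ty1, tz1 = target['max']
    occluders = []
    for f in features:
        if f['id'] == target['id']:
            continue
        fx0, fy0, fz0 = f['min']
        fx1, fy1, _ = f['max']
        xy_overlap = fx0 < tx1 and tx0 < fx1 and fy0 < ty1 and ty0 < fy1
        in_front = fz0 >= tz1
        if xy_overlap and in_front:
            occluders.append(f['id'])
    return occluders
```

A renderer would draw the returned features with a reduced alpha (e.g., 0.3) while the feature-directed operation is active, and restore them to opaque when the operation terminates.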
In various implementations, the first set of feature-directed operations portion235may include a selection operation236A and a hover or pass-over operation236B. For example, the selection operation236A may include a positioning of a selection indicator relative to a workpiece feature or inspection operation representation in the editable plan representation214or234and a performance of a selection action (e.g., a clicking of a mouse button, etc.) for selecting the workpiece feature or inspection operation representation in the editable plan representation214or234. In various implementations, the hover or pass-over operation236B may correspond to a single type of hover or pass-over operation, or may alternatively be implemented as a separate hover operation and a separate pass-over operation, etc. In one implementation, a hover operation may include a positioning of a selection indicator relative to a workpiece feature or inspection operation representation in the editable plan representation214or234and a hovering of the selection indicator for at least a specified period of time relative to the workpiece feature or inspection operation representation. In one implementation, a pass-over operation may include a moving of a selection indicator to pass over a workpiece feature or inspection operation representation in the editable plan representation214or234. In various implementations, a hover operation and/or a pass-over operation may also be included as a type of selection operation (e.g., wherein a first type of selection operation may require a performance of a selection action, and a second type of selection operation may not require a performance of a selection action in addition to the positioning of the selection indicator). In such a configuration, the hover or pass-over operation236B may be a sub-operation of the selection operation236A, or the blocks236A and236B may otherwise be merged, etc. 
In addition to potentially activating certain transparency operations of the transparency operations portion237, at least some feature-directed operations of the first set of feature-directed operations portion235may be part of or may be utilized to initiate certain editing operations of the editing operations portion240. For example, when a user is intending to edit a workpiece feature or inspection operation representation in the editable plan representation214or234, the user may first perform a feature-directed operation (e.g., a selection operation236A) for selecting the workpiece feature or inspection operation representation that is to be edited. In various implementations, portions or all of the first set of feature-directed operations portion235and the editing operations portion240may be merged and/or indistinguishable. In various implementations, it is desirable for results and/or related effects of any operations of the first set of feature-directed operations portion235, transparency operations portion237and/or editing operations portion240, etc. to be immediately reflected in the various portions of the programming portion202and its user interface(s). For example, when a user utilizes a selection operation236A to select a workpiece feature in the editable plan representation214or234, it may be desirable for the transparency operations portion237to immediately operate to automatically render as at least partially transparent in the 3D view an occluding workpiece feature that would otherwise be occluding at least a portion of the corresponding target feature in the 3D view. 
As another example, as described in more detail in the previously incorporated '958 publication, when a user performs various editing operations of the editing operations portion240, it may be desirable for the corresponding results and/or related effects to be immediately incorporated (e.g., automatically or with very minimal effort by the user) into the current version of the inspection plan and/or inspection program, which is then reflected in the various portions of the programming portion202and its user interface(s). Such features are noted to be in contrast to certain prior systems as described in certain of the incorporated references, in which visualization of the effect of selections, editing changes, etc. to the plan and/or program have not been immediately or continuously available in the user interface (e.g., through a displayed “3D” simulation or moving animation). In such prior systems, it has been typical to require the user to activate a special mode or display window that is not normally active in real time during editing operations in order to see a “recording” or specially generated simulation of the CMM running the edited inspection program. In various implementations, an “immediate” ability to view a selected workpiece feature and/or the editing results in a 3D simulation or animation view may be critical to the evaluation, determination and/or acceptance of an editing operation. In various implementations, the immediate ability to view a selected workpiece feature and/or editing results in a 3D simulation or animation view may be accomplished at least in part through the operations of the programming environment synchronization/notices manager260. For example, the programming environment synchronization/notices manager260may be utilized in combination with the transparency operations portion237and the first set of feature-directed operations portion235to perform certain functions. 
As described above, in various implementations, such functions may include automatically rendering as at least partially transparent in the 3D view an occluding workpiece feature that would otherwise be occluding at least a portion of a current target feature in the 3D view. More specifically, the target feature in the 3D view may correspond to a workpiece feature or inspection operation representation in the editable plan representation214or234that is indicated by a current feature-directed operation (e.g., a user selection), wherein the correspondence between the workpiece features and inspection operation representations in the editable plan representation214or234and the workpiece features in the 3D view may be determined at least in part by the programming environment synchronization/notices manager260. In various implementations, the programming environment synchronization/notices manager260may be implemented at least in part using known “publisher-subscriber” methods, which are sometimes implemented using XML like languages (e.g., as used for notifications between web pages). In various implementations, a publisher-subscriber method may be implemented by adapting methods such as a list-based method, or a broadcast-based method, or a content-based method to support the features disclosed herein. In a CMM programming environment, the publishers and subscribers are generally located in the same processing space, and it is possible for the identity of the “subscriber” windows to be known by the “publisher” (e.g., as may be recorded or implemented using the programming environment synchronization/notices manager260, for example.) Applicable to such cases, U.S. Pat. No. 8,028,085, which is hereby incorporated herein by reference in its entirety, describes low latency methods which may be adapted to support such features. 
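The in-process publisher-subscriber synchronization described above can be sketched as a minimal notifier hub. The class name, topic string, and payload shape are illustrative assumptions; the patent text only specifies that notices carrying feature identifiers are exchanged between portions in the same processing space.

```python
class SyncManager:
    """Minimal in-process publisher-subscriber hub: user-interface
    portions subscribe to event topics, and an edit publishes a notice
    carrying the unique identifier of the affected feature so each
    subscriber updates only that element."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Topics with no subscribers are silently ignored.
        for callback in self._subscribers.get(topic, []):
            callback(payload)
```

For example, a 3D view portion might subscribe to a hypothetical 'plan_modified' topic; when an editing operation excludes feature 'F12', publishing {'feature_id': 'F12', 'op': 'exclude'} lets every subscribed window refresh just that feature.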
In one implementation, determining and/or generating various workpiece features and measurement operations in the CAD file processing portion205and the inspection path/sequence manager206may include generating and/or sharing a unique identifier for each workpiece feature and inspection operation. When the results from those portions are used in other portions of the programming portion202(e.g., as outlined above), the various identifiers may also be used or cross-referenced in the other portions to establish relevant associations between corresponding workpiece features and/or inspection operations across the various processing and/or user interface portions. In various implementations, such techniques may be utilized for determining correspondences such as between the workpiece features and inspection operation representations in the editable plan representation214or234and the workpiece features in the 3D view, as part of the performance of various transparency operations as disclosed herein and/or various editing operations, etc. The user interface of the programming portion202includes editing operations (which also include the underlying programming instructions and/or routines) usable to edit the workpiece feature inspection plan and/or inspection program. For example, following an activation of a feature-directed operation (e.g., a selection operation) for selecting text or graphical elements that represent workpiece features or inspection operations in the editable plan representations214or234, the editing operations may include activation of relevant commands or other user interface operations that affect the selected elements. In various implementations, the editing operations portion240may provide or identify such operations. 
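The unique-identifier cross-referencing described above can be sketched as a registry keyed by feature id, with one entry per user-interface portion. The class and view names below are hypothetical; the text notes only that identifiers (or equivalent database/lookup-table associations) let each portion resolve the same feature.

```python
class FeatureRegistry:
    """Cross-reference one feature id to its representation in each
    UI portion (e.g., plan view, program view, 3D view), so a notice
    carrying the id can be resolved in every window."""

    def __init__(self):
        self._by_id = {}

    def register(self, feature_id, view, element):
        self._by_id.setdefault(feature_id, {})[view] = element

    def lookup(self, feature_id, view):
        # Returns None for unknown ids or views rather than raising.
        return self._by_id.get(feature_id, {}).get(view)
```

A selection notice for 'F12' can then be turned into the row to highlight in the plan view and the mesh to reveal in the 3D view with two lookups, without scanning either window's contents.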
In one implementation, the inspection plan modification notices portion249may be responsive to operations included in the editing operations portion240to provide a notice to the programming environment synchronization/notices manager260that an inspection plan modification is taking place. In response, the programming environment synchronization/notices manager260may then (e.g., automatically) manage the exchange of various event or programming operation notifications and related unique identifiers, such that the CAD file processing portion205and/or the inspection path/sequence manager206appropriately edit or modify the current inspection plan and inspection program in a synchronized manner when one of the editing operations is performed. Such plan and program modifications may be performed very quickly in various implementations, because the unique identifiers described above may be used to efficiently focus the modifications on only those features and/or measurement operations affected by the currently active one of the editing operations. After that, the programming environment synchronization/notices manager260may notify other portions of the programming portion202(e.g., as outlined above), so that they are immediately updated using information from the edited plan and/or program. The unique identifier(s) of the most recently edited elements may again be used to speed up such operations, in that the updating need only focus on those elements associated with the identifiers. In various implementations, the programming environment synchronization/notices manager260may also manage inter-portion communications and exchanges besides those associated with the editing operations (e.g., using various techniques and identifiers similar to those outlined above.) In various implementations, it may facilitate the synchronization between the various user interface windows or portions of the programming portion202. 
For example, selection of a particular feature or instruction in one window may automatically trigger a notification or instruction to other windows to display a corresponding feature or instruction in that other window, or depict a program operating state associated with the selected feature or instruction, or the like. In various implementations, such functions may be utilized with respect to the transparency operations as described herein, as well as with respect to other functions (e.g., editing functions), etc. It will be appreciated that the implementation(s) outlined above for achieving real time synchronization between various portions of the programming portion202is exemplary only, and not limiting. For example, the function of the identifiers outlined above may be provided by suitable database or lookup table associations or the like, without the presence of an explicit “identifier.” These and other alternatives will be apparent to one of ordinary skill in the art based on the teachings disclosed herein. The execution time portion270may include an execution time indicator portion272and an execution time calculating portion274. In order to provide feedback to a user performing editing operations, the execution time indicator portion272may provide a “real time” indication of an estimated inspection program execution time for operating the CMM to execute a workpiece inspection program corresponding to the current workpiece feature inspection plan as executed by a current CMM configuration. In various implementations, the programming portion202may be configured such that the execution time indicator portion272is automatically updated in response to a utilization of one of the operations included in the editing operations portion240to modify the current workpiece feature inspection plan, so as to automatically indicate the estimated effect of the modification on the inspection program execution time. 
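A crude version of the execution-time estimate that the indicator displays can be sketched from the motion path plus a per-point measurement overhead. The straight-line travel model and the fixed per-point cost are illustrative assumptions about a particular CMM configuration, not the patent's estimator.

```python
import math

def estimate_execution_time(sampling_points, travel_speed, seconds_per_point):
    """Estimated inspection time: straight-line travel between
    consecutive sampling points at travel_speed (units/s), plus a
    fixed approach/touch/retract cost per sampling point."""
    travel = sum(math.dist(a, b)
                 for a, b in zip(sampling_points, sampling_points[1:]))
    return travel / travel_speed + seconds_per_point * len(sampling_points)
```

Re-running this after each edit is what lets the indicator show the estimated effect of a modification immediately: removing a sampling point shortens both the travel term and the per-point term.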
In various implementations, the editing operations portion240may include or identify operations corresponding to inclusion of a workpiece feature241A, exclusion of a workpiece feature241B, a delete command242, an undo command243, sequence editing244and altering a CMM configuration245, as described in more detail in the previously incorporated '958 publication. In various implementations, the editing operations portion240may further include or identify operations corresponding to adding or deleting individual sampling points (e.g., touch points for a stylus) on a workpiece feature, or changing the motion plan for traversing between individual sampling points, or the like. Another operations portion250may include other operations relevant to the use and functioning of the programming portion202and/or general computing system105. The 3D view portion220may display a 3D view including workpiece features on the workpiece and an indication of inspection operations to be performed on the workpiece features according to the current workpiece feature inspection plan. The simulation status and control portion280may include a simulation status portion281that is configured to characterize a state of progress through the current workpiece feature inspection plan corresponding to a currently displayed 3D view, and the execution time indicator portion272may be displayed in conjunction with the simulation status portion281. In various implementations, the simulation status portion281may include a current time indicator282that moves along a graphical total time range element283to characterize a state of progress through the current workpiece feature inspection plan corresponding to the currently displayed 3D view, and the execution time indicator portion272may be displayed in association with the graphical total time range element283. 
In one implementation, the simulation status portion281further includes a current time display284which includes a numerical time representation that is automatically updated corresponding to the current time indicator282or the currently displayed 3D view, and that further characterizes the state of progress through the current workpiece feature inspection plan corresponding to the currently displayed 3D view. In one implementation, the simulation status and control portion280further includes a simulation animation control portion290which includes elements that are usable to control at least one of a start291, pause292, stop293, reset294, reverse295, loop296, increase in speed297or decrease in speed298of an animated display of simulated progress through the current workpiece feature inspection plan as displayed in the 3D view. In various implementations, the transparency operations portion237may also be utilized to implement certain additional transparency operations with respect to the display in the 3D view (e.g., with respect to an animated display of simulated progress through the current workpiece feature inspection plan as displayed in the 3D view). For example, in various implementations the first set of feature-directed operations may further include inspection operations performed on workpiece features as part of the current workpiece feature inspection plan. In such a configuration, when an inspection operation is performed or selected, a workpiece feature that the inspection operation is directed to may be automatically identified as a current target feature by the transparency operations. In various implementations, the inspection operation that is performed or selected may be included in an inspection sequence, and may include measuring and/or touching a sampling point on the workpiece feature that the inspection operation is directed to, using a CMM measuring probe. 
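The relationship between the current time indicator, the graphical total time range element, and the numerical time display can be sketched as two small mappings (hypothetical helper names; pixel width and time format are assumptions):

```python
def indicator_position(elapsed_s, total_s, bar_width_px):
    """Sketch: map simulation progress onto a graphical total-time-range bar,
    clamped to the bar's extent."""
    frac = min(max(elapsed_s / total_s, 0.0), 1.0)
    return round(frac * bar_width_px)


def time_display(elapsed_s):
    """Sketch of the numerical time representation, e.g. 'mm:ss'."""
    m, s = divmod(int(elapsed_s), 60)
    return f"{m:02d}:{s:02d}"


# 45 s into a 180 s plan on a 400 px bar: indicator a quarter of the way along.
print(indicator_position(45.0, 180.0, 400))  # 100
print(time_display(45.0))                    # 00:45
```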
In various implementations, when an inspection operation is automatically performed as part of an active program simulation, a workpiece feature that the inspection operation is performed on may be automatically identified as a current target feature by the transparency operations. In various implementations, when an inspection operation is performed as part of manually or semi-automatically stepping through a program simulation, a workpiece feature that the inspection operation is performed on may be automatically identified as a current target feature by the transparency operations. In any of these examples, once a current target feature is identified, as described above, the transparency operations may further include automatically rendering as at least partially transparent in the 3D view an occluding workpiece feature that would otherwise be occluding at least a portion of the current target feature in the 3D view. In various implementations, the computing system105and/or other associated computer system(s) may include suitable unitary or distributed computing systems or devices, which may include one or more processors that execute software to perform the functions described herein. Processors include programmable general-purpose or special-purpose microprocessors, programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices. Software may be stored in memory, such as random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such components. Software may also be stored in one or more storage devices, such as disk drives, solid-state memories, or any other medium for storing data. Software may include one or more program modules which include routines, programs, objects, components, data structures, and so on, that perform particular tasks or implement particular abstract data types. 
In distributed computing environments, the functionality of the program modules may be combined or distributed across multiple computing systems or devices and in various implementations may be accessed via service calls. FIG.3is a diagram of a user interface305(e.g., as may be shown on the display unit5D ofFIG.1, the display portion175ofFIG.2A, etc.). It will be appreciated that certain numbered elements3XX of the user interface305may correspond to and/or be provided by similarly numbered elements2XX ofFIGS.2A and2B, except as otherwise described below. In the implementation shown inFIG.3, the user interface305includes a plan view window310, a 3D view window320and a program view window330. The plan view window310includes an editing user interface portion312, the 3D view window320includes a workpiece inspection program simulation portion322, and the program view window330includes an editing user interface portion332and a simulation status and control portion380. In various implementations, the simulation status and control portion380may include a simulation status portion381and a simulation animation control portion390, as will be described in more detail below with respect toFIG.5. As shown inFIG.3, the editing user interface portions312and332each include editable plan representations314and334, respectively, of a workpiece feature inspection plan for a workpiece10corresponding to a CAD file. The editable plan representation314is organized in terms of geometric features to be inspected on the workpiece. The editable plan representation334is organized as inspection program pseudo-code or actual code or graphical program operation representations or the like, in various implementations.

When editing operations are performed for one of the editable plan representations314or334, the other editable plan representation may be automatically updated in a manner consistent with those editing operations by operation of the various system elements illustrated and described with respect toFIGS.2A and2B. However, in an alternative implementation, only one of the editable plan representations314or334may be editable. In such a case, the other plan representation may be absent, or hidden, or may be displayed and automatically updated in a manner similar to that outlined above. As described above with respect toFIGS.2A and2B, in various implementations, a computer aided design (CAD) file processing portion may input a workpiece CAD file corresponding to a workpiece10and may analyze the file to automatically determine inspectable workpiece features on the workpiece10corresponding to a plurality of geometric feature types (e.g., cylinder, plane, sphere, cone, etc.). InFIG.3the editable plan representations314and334include the editable set of workpiece features316and336to be inspected. As will be described in more detail below, an execution time indicator372is provided that is indicative of an estimated inspection program execution time for operating the CMM to execute a workpiece inspection program corresponding to the current workpiece feature inspection plan as executed by a current CMM configuration. Editing operations are usable to edit the workpiece feature inspection plan, and the system is configured such that the execution time indicator372is automatically updated in response to a utilization of one of the editing operations to modify the current workpiece feature inspection plan, so as to automatically indicate the estimated effect of the modification on the inspection program execution time.
The editable plan representation314that is illustrated inFIGS.3-6includes a number of workpiece features316F1-316Fn on the workpiece10′ that may be inspected (e.g., where n represents a total number of workpiece features that may be inspected). The workpiece features316F1-316Fn correspond to workpiece features326F1-326Fn in the workpiece inspection program simulation portion322, and to workpiece features336F1-336Fn in the editable plan representation334, respectively. In order to simplify the figures, only some of the workpiece features are labeled. In the example ofFIG.3, the workpiece features316F1-316F21are currently visible in the plan view window310, wherein a user may utilize controls to increment or scroll down (e.g., utilizing a vertical scroll bar317, etc.) to view additional workpiece features. Similarly, a vertical scroll bar337may be used to scroll up and down the program view window330. The 3D view window320displays a 3D view of the workpiece inspection program simulation portion322including workpiece features326on the workpiece10′. In various implementations, the 3D view may also include an indication of inspection operations to be performed on the workpiece features326according to the current workpiece feature inspection plan (e.g., as will be described in more detail below with respect toFIG.5). As shown inFIG.3, some illustrative examples of workpiece features are labeled, such as planes326F1and326F2, a sphere326F3, cylinders326F7and326F8, and a cone326F11. In various implementations, these correspond to workpiece features316F1,316F2,316F3,316F7,316F8and316F11in the editable plan representation314, and to workpiece features336F1,336F2,336F3,336F7,336F8and336F11in the editable plan representation334, respectively. 
With respect to the workpiece feature326F8in the 3D view, in the editable plan representation334the corresponding workpiece feature336F8includes a description of “cylinder—1214” along with a displayed cylinder icon, and in the editable plan representation314the corresponding workpiece feature316F8includes a description of “1214” along with a displayed cylinder icon. In various implementations, such descriptions and icons may be automatically generated and displayed as corresponding to a numbered designation and geometric type (e.g., cylinder, plane, sphere, cone, etc.) for each of the workpiece features. With respect to the editing operations that are usable to edit the workpiece feature inspection plan, in one implementation the editing user interface portion312may include workpiece feature exclusion/inclusion elements318(e.g., checkboxes next to each of the workpiece features316) that operate to toggle between an exclusion state (e.g., with the associated box unchecked) and an inclusion state (e.g., with the associated box checked) for each associated workpiece feature316. An exclusion state may correspond to an exclusion of the associated workpiece feature316from the set of workpiece features to be inspected, and an inclusion state may correspond to an inclusion of the associated workpiece feature316in the set of workpiece features to be inspected. In the example ofFIG.3, all of the workpiece features316have been selected for inclusion. In various implementations, the editing operations may include a utilization of the workpiece feature exclusion/inclusion elements318to either exclude or include workpiece features316with respect to the set of workpiece features to be inspected, and the execution time indicator372may automatically be updated in response to a utilization of a workpiece feature exclusion/inclusion element318, as described in more detail in the previously incorporated '958 publication. 
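The exclusion/inclusion elements can be sketched as a toggle whose state change both updates the inspection set and notifies an observer (for example, whatever refreshes the execution time indicator). The class and callback names are hypothetical:

```python
class InclusionElement:
    """Sketch: a checkbox-like exclusion/inclusion element whose toggle
    updates the feature's inclusion state and notifies an observer
    (illustrative; names are hypothetical)."""

    def __init__(self, feature, on_change):
        self.feature = feature
        self.on_change = on_change  # e.g., refreshes an execution time indicator

    def toggle(self):
        self.feature["included"] = not self.feature["included"]
        self.on_change()


updates = []
feature = {"name": "sphere-3", "included": True}
elem = InclusionElement(feature, lambda: updates.append(feature["included"]))
elem.toggle()  # unchecked: feature excluded from the set to be inspected
elem.toggle()  # checked again: feature re-included
print(updates)  # the observer was refreshed on every toggle
```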
In various implementations, as part of editing or other processes, enabling a user to clearly view a selected workpiece feature and/or related editing processes and/or results in a 3D simulation or animation view may provide various advantages. As one example, with respect to a determination and/or acceptance of an editing operation, the total execution time of an inspection plan may depend in part on the number of workpiece features to be inspected and the inspection operations to be performed thereon. The resulting total execution time relates directly to the inspection throughput of a CMM, which determines its cost of ownership and/or ability to support a desired rate of production. In order to reduce the total execution time (e.g., to increase efficiency, etc.), a user may review workpiece features and inspection operations to determine which ones may not need to be included in a current inspection plan. As part of such review (and for other reasons), it may be desirable for a user to be able to view each workpiece feature and/or any corresponding inspection operations in the 3D view. However, such viewing may be inhibited if a target workpiece feature that is currently under consideration is occluded in the 3D view by another workpiece feature (e.g., workpiece feature326F3is occluding workpiece feature326F8in the 3D view ofFIG.3). In accordance with various principles disclosed herein, in such a circumstance, transparency operations may be performed to render the occluding workpiece feature(s) at least partially transparent, as will be described in more detail below with respect toFIG.4. FIG.4is a diagram of the user interface305illustrating a target feature326F8and an occluding workpiece feature326F3that has been rendered as at least partially transparent in the 3D view window320by performance of transparency operations. 
In various implementations, such transparency operations may be automatically performed by the system in response to a feature-directed operation included in a first set of feature-directed operations. In various implementations, the first set of feature-directed operations may include a selection operation for a workpiece feature or an inspection operation representation in the editable plan representation. In various implementations, the first set of feature-directed operations may also or alternatively include at least one of a hover or pass-over operation for a workpiece feature or an inspection operation representation in the editable plan representation. In various implementations, the first set of feature-directed operations that the transparency operations are performed in response to may include only a single feature-directed operation (e.g., a selection operation), or may include multiple feature-directed operations (e.g., a selection operation and a hover or pass-over operation, etc.). In various implementations, a selection operation may include a positioning of a selector element (e.g., a mouse cursor) proximate to a workpiece feature or inspection operation representation in an editable plan representation, and a performance of a selection action for selecting a workpiece feature or inspection operation representation. For example, with respect to the editable plan representation314or334ofFIG.3, a user may utilize a mouse or other input device to position a selector element (e.g., a movable pointer, cursor, highlighted area, finger on a touch screen, etc.) over a workpiece feature or inspection operation, and may select the workpiece feature or inspection operation representation through a performance of a selection action (e.g., pressing a key, button, mouse, pushing a finger on a touch screen, etc.). 
As another example, in a holographic three-dimensional view, the selector element may include an element such as a pointer or the user's finger, and the selector element may be used to perform a selection action in the editable plan representation314or334(e.g., the user making a particular type of motion with the selector element for making a selection). In one specific illustrative example, in the state illustrated inFIG.3, a user may have positioned a selector element over the workpiece feature336F8in the editable plan representation334(e.g., as indicated by a highlighting of the workpiece feature336F8or other indicator in various implementations). In the state illustrated inFIG.4, the user may have performed a selection action for selecting the workpiece feature336F8(e.g., as may be indicated by a dotted-line box around the workpiece feature336F8or other indicator in various implementations), on the basis of which certain transparency operations may have been performed in the 3D view (e.g., rendering the workpiece feature326F3as at least partially transparent), as will be described in more detail below. As another example, a hover operation may similarly include a positioning of a selection indicator relative to a workpiece feature or inspection operation representation in the editable plan representation314or334and a hovering of the selection indicator for at least a specified period of time relative to the workpiece feature or inspection operation representation. As part of the hover operation, in various implementations, once the specified period of time has been reached with the selection indicator still positioned relative to (e.g., positioned on top of, etc.) the workpiece feature or inspection operation representation, such a sequence may operate as a type of selection action which thus selects the workpiece feature or inspection operation representation. 
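The hover operation described above (selection once the pointer has dwelt on an entry for a specified period) can be sketched as a small state machine. Times are passed in explicitly so the sketch stays deterministic; names and the dwell threshold are illustrative:

```python
class HoverSelector:
    """Sketch: a hover over a plan entry acts as a selection once the
    pointer has remained on it for a specified dwell period."""

    def __init__(self, dwell_s=0.5):
        self.dwell_s = dwell_s
        self.hovered = None   # entry currently under the pointer
        self.since = None     # time at which the pointer arrived on it
        self.selected = None  # entry selected via the hover operation

    def pointer_over(self, item, now_s):
        if item != self.hovered:
            # Pointer moved to a new entry: restart the dwell timer.
            self.hovered, self.since = item, now_s
        elif now_s - self.since >= self.dwell_s:
            # Dwell period reached: this operates as a selection action.
            self.selected = item


sel = HoverSelector(dwell_s=0.5)
sel.pointer_over("336F8", 0.0)   # pointer arrives over the entry
sel.pointer_over("336F8", 0.6)   # still there past the dwell period
print(sel.selected)  # 336F8
```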
In one specific illustrative example, in the state illustrated inFIG.3, a user may have positioned a selector element over the workpiece feature336F8in the editable plan representation334(e.g., as indicated by a highlighting of the workpiece feature336F8or other indicator in various implementations). In the state illustrated inFIG.4, the user may have maintained the position of the selector element for the specified period of time in accordance with the hover operation for selecting the workpiece feature336F8(e.g., as may be indicated by a dotted-line box around the workpiece feature336F8or other indicator in various implementations), on the basis of which certain transparency operations have been performed in the 3D view (i.e., rendering the workpiece feature326F3as at least partially transparent) as will be described in more detail below. As yet another example, a pass-over operation may include a moving of a selection indicator to pass over a workpiece feature or inspection operation representation in the editable plan representation314or334. In one specific illustrative example, in the state illustrated inFIG.3, a user may not yet have positioned a selector element over the workpiece feature336F8in the editable plan representation334(i.e., in which case the workpiece feature336F8may not yet be highlighted or otherwise indicated). In the state illustrated inFIG.4, the user may be in the process of having the selection indicator pass over the workpiece feature336F8(e.g., as may be indicated by highlighting and/or a dotted-line box around the workpiece feature336F8or other indicator in various implementations), on the basis of which certain transparency operations may have been performed in the 3D view (e.g., rendering the workpiece feature326F3as at least partially transparent) as will be described in more detail below.
In various implementations, a hover operation and/or a pass-over operation may also be included as a type of selection operation (e.g., wherein a first type of selection operation may require a performance of a selection action, and a second type of selection operation may not require a performance of a selection action in addition to the positioning of the selection indicator, such as a hover operation). As noted above, in response to a performance of a current feature-directed operation (e.g., a selection operation, a hover operation, a pass-over operation, etc.) that is included in a first set of feature-directed operations, a set of transparency operations may be performed. As an initial step, the transparency operations may include automatically identifying as a current target feature a workpiece feature in the 3D view that corresponds to a workpiece feature or inspection operation representation that is indicated by the current feature-directed operation included in the first set of feature-directed operations. In the example ofFIG.4, the workpiece feature that is indicated by the current feature-directed operation is the workpiece feature336F8in the editable plan representation334, and the current target feature that is correspondingly automatically identified is the workpiece feature326F8in the 3D view window320. Various processes by which the workpiece feature336F8may be corresponded with the workpiece feature326F8have been previously described herein with respect toFIGS.2A and2B. In various implementations, the transparency operations may further include determining if there are one or more workpiece features that are occluding the target feature in the 3D view. In various implementations, such a determination may require consideration of various factors. As one possible factor, such a determination may depend at least in part on a current orientation of the 3D view. 
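The initial step of the transparency operations (identifying the 3D-view feature that corresponds to the indicated plan-representation entry) amounts to a correspondence lookup, which is one role the previously described unique identifiers can play. A minimal sketch, with a hypothetical correspondence table:

```python
def target_in_3d_view(selected_id, correspondence):
    """Sketch: map a plan-representation entry (e.g., '336F8') to its
    corresponding 3D-view feature (e.g., '326F8') via a correspondence
    table; returns None if no correspondence is recorded."""
    return correspondence.get(selected_id)


# Illustrative correspondence between plan entries and 3D-view features.
corr = {"336F8": "326F8", "336F3": "326F3", "336F2": "326F2"}
print(target_in_3d_view("336F8", corr))  # 326F8 becomes the current target
```

As noted above, the same association could equally be provided by a database or lookup table without an explicit "identifier."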
For example, depending on the relative positions of the workpiece features on the workpiece10′, a given workpiece feature may or may not be occluding another workpiece feature depending on the orientation of the 3D view. In various implementations, the system may be configured to utilize the known positions, sizes, etc. of the workpiece features on the workpiece10′, in combination with the known viewing angle for a current orientation of the 3D view, etc., in order to determine which workpiece features may be occluding other workpiece features and/or inspection operations, etc. In the example ofFIG.4, it has been determined that the workpiece feature326F3is occluding the workpiece feature326F8in the current orientation of the 3D view. In the example ofFIG.4, it has further been determined that certain other workpiece features are not occluding the target feature326F8. For example, the workpiece features326F2,326F7and326F11have each been determined to not be occluding the workpiece feature326F8in the current orientation of the 3D view. In accordance with this determination, the workpiece features326F2,326F7and326F11have not automatically been rendered as transparent by the automatically performed transparency operations. In accordance with the determination that the workpiece feature326F3is occluding at least a portion of the workpiece feature326F8in the current orientation of the 3D view, as part of the transparency operations the workpiece feature326F3has been automatically rendered as at least partially transparent. In various implementations, the amount of transparency of the workpiece feature326F3may be set for a user to be able to clearly view the otherwise occluded workpiece feature326F8, while still being able to view some context and position of the workpiece feature326F3. 
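The occlusion determination and the rendering step can be sketched together using a deliberately crude bounding-sphere test in an orthographic view: a feature occludes the target if its projected footprint overlaps the target's and it lies in front of it along the viewing direction. All geometry, field names, and the alpha value are illustrative assumptions:

```python
import math

def occluders(features, target, view_dir):
    """Sketch: find features occluding the target in an orthographic view
    along view_dir, and mark them partially transparent (bounding-sphere
    overlap test; illustrative only)."""
    def depth(f):  # distance along the viewing direction
        return sum(c * d for c, d in zip(f["center"], view_dir))

    def lateral(f):  # footprint position in the view plane (view_dir = +z here)
        return f["center"][0], f["center"][1]

    tx, ty = lateral(target)
    result = []
    for f in features:
        if f is target:
            continue
        fx, fy = lateral(f)
        overlap = math.hypot(fx - tx, fy - ty) < f["radius"] + target["radius"]
        in_front = depth(f) < depth(target)
        if overlap and in_front:
            f["alpha"] = 0.3   # render the occluder partially transparent
            result.append(f["name"])
    return result


sphere = {"name": "326F3", "center": (0, 0, 1), "radius": 2, "alpha": 1.0}
cyl    = {"name": "326F8", "center": (0, 0, 5), "radius": 1, "alpha": 1.0}
plane  = {"name": "326F2", "center": (9, 0, 5), "radius": 1, "alpha": 1.0}
occ = occluders([sphere, cyl, plane], cyl, (0, 0, 1))
print(occ)  # only the sphere occludes the cylinder; the plane keeps alpha 1.0
```

A production renderer would test against actual meshes rather than bounding spheres, but the two-part decision (in front of the target, and overlapping it in the current view orientation) is the essential logic described above.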
For example, during review of a workpiece feature inspection plan, it may be desirable for a user to be able to view and understand the relative positioning and context between the workpiece features326F3and326F8and/or associated inspection operations, even after the workpiece feature326F3has been rendered as at least partially transparent. In various implementations, the level and type of transparency (e.g., including different levels of transparency and/or different types of patterns or representations for outlines and fills of workpiece features that are being rendered as at least partially transparent, etc.) may be set by the system and/or may otherwise be adjustable (e.g., including a user interface feature which allows a user to select and/or adjust the transparency levels and features, etc.). In various implementations, as part of the transparency operations or other operations, the target feature326F8may also be highlighted or otherwise marked in the 3D view. For example, as noted above, the workpiece feature that is indicated by the current feature-directed operation is the workpiece feature336F8in the editable plan representation334, and the current target feature that is correspondingly automatically identified is the workpiece feature326F8in the 3D view window320. In various implementations, after the workpiece feature326F8is automatically identified as the target feature, the 3D view window320may be updated to indicate that the workpiece feature326F8is the target feature (e.g., the workpiece feature326F8may be highlighted or otherwise marked or indicated and/or may become the active target of subsequent commands or operations, including inspection operations and/or the automatic rendering of any occluding workpiece features as at least partially transparent, etc.).
It will be appreciated that in various implementations the aspect of the workpiece feature336F8being selected or otherwise indicated in the editable plan representation334as opposed to the workpiece feature326F8being selected in the 3D view window320may provide various advantages. For example, as illustrated in certain of the incorporated references, in certain prior systems a user may be required to make selections or perform actions in a 3D view (e.g., moving a user interface element or clicking on a feature in the 3D view, etc.) with respect to determining what transparency operations may be performed in the 3D view. In implementations such as those disclosed herein where a target feature that the user wishes to view is being determined, a selection of such a target feature in a 3D view may be difficult to perform in accordance with the fact that the target feature is at least partially occluded in the 3D view. More specifically, because the target feature is at least partially occluded in the 3D view, it may be difficult for a user to see, identify, and/or otherwise find the portion of the target feature that is visible in the 3D view (e.g., the portion of the target feature326F8that is visible in the 3D view window320ofFIG.3). In some instances, the occluding workpiece feature may be occluding the entire current target feature in the 3D view, such that no portion of the current target feature is visible or selectable in the 3D view. In accordance with various principles disclosed herein, by enabling a performance of a feature-directed operation (e.g., a selection operation) for selecting or otherwise indicating a workpiece feature or inspection operation representation in the editable plan representation314or334, a target feature may be identified to be viewed in the 3D view window320even when the target feature is partially or entirely occluded in the 3D view window320before the transparency operations are automatically performed. 
In various implementations, the transparency operations may not include rotating or otherwise adjusting the orientation of the 3D view after the current target feature is identified. More specifically, in an alternative implementation, a 3D view may be rotated or otherwise adjusted to improve viewing of a target feature that is otherwise occluded by another workpiece feature in a current orientation of a 3D view. For example, in the orientation of the 3D view illustrated inFIG.3, the workpiece feature326F3is occluding the workpiece feature326F8, for which an alternative implementation may include rotating or otherwise adjusting the 3D view to where the workpiece feature326F3is no longer occluding the workpiece feature326F8(e.g., rotating the 3D view 180 degrees to where the workpiece feature326F3is behind rather than in front of the workpiece feature326F8). In contrast to an implementation in which such orientation adjustments are performed, in accordance with principles disclosed herein, the orientation of the 3D view may not be adjusted and an occluding workpiece feature in the current orientation of the 3D view may be automatically rendered as at least partially transparent. In certain alternative implementations, hybrid techniques may be utilized (e.g., as part of the transparency operations and/or as part of other operations) in which the orientation of the 3D view may be adjusted and transparency operations may be performed. More specifically, in various implementations, when an orientation of a 3D view is adjusted or considered for adjustment, at least some of the above described transparency operations may be performed to determine and/or render as at least partially transparent workpiece features that may otherwise be occluding a target feature in the adjusted orientation of the 3D view. 
In various implementations, when the current feature-directed operation is terminated, the transparency operations associated with the current target feature in the 3D view may be automatically terminated. For example, in the implementation ofFIG.4, if the current feature-directed operation is a selection operation, the user may terminate the selection operation by performing an action for unselecting the workpiece feature336F8in the editable plan representation334. In various implementations, such an action for unselecting a workpiece feature may include clicking on or otherwise selecting the workpiece feature an additional time, or clicking on or otherwise selecting a different workpiece feature in the editable plan representation334, etc. If the feature-directed operation is a hover operation or a pass-over operation, the action for unselecting the workpiece feature336F8may include a positioning of the selection indicator in the editable plan representation334to no longer be over the workpiece feature336F8. Once the workpiece feature336F8has been unselected in the editable plan representation334so as to terminate the current feature-directed operation, the transparency operations associated with the current target feature326F8in the 3D view may also automatically be terminated. In the example ofFIG.4, this may correspond to the workpiece feature326F8in the 3D view no longer being designated as a current target feature, and the workpiece feature326F3in the 3D view no longer being rendered as at least partially transparent (e.g., reverting to the state illustrated in the 3D view ofFIG.3). FIG.5is a diagram of the user interface305illustrating performance of certain inspection operations on a target feature326F8. In the example ofFIG.5, the 3D view window320shows a touch probe21′ having a stylus tip21T′, which is positioned relative to a workpiece10′. In the state illustrated, the touch probe stylus tip21T′ is contacting the workpiece feature326F8. 
A measurement path MP indicates a motion path representation for the stylus tip21T′ and corresponding sampling points SP, corresponding to a portion of the current inspection plan represented in the various windows. Each of the sampling points SP indicates a point where the stylus tip21T′ contacts the workpiece feature326F8, for obtaining measurement coordinates in accordance with the corresponding portion of the current inspection plan, as will be understood by one skilled in the art. In various implementations, the measurement path MP and/or sampling points SP illustrated inFIG.5may correspond to one or more inspection operations. For example, the inspection operation representation336IN8B in the editable plan representation334may correspond at least in part to the measurement path MP and sampling points SP illustrated in the 3D view window320. In one implementation, such an inspection operation may be utilized for determining the cylindricity of the workpiece feature326F8(e.g., in part by determining how closely the upper circumference of the workpiece feature326F8approximates a circle, etc.). In various implementations, certain inspection operation representations may additionally or alternatively include information about specific sampling points, movements, angles, etc. for the performance of the inspections of the designated workpiece features. In the example ofFIG.5, it will be appreciated that in various implementations one or more inspection operations may include additional measurement paths and sampling points for measuring the workpiece feature326F8and other workpiece features (e.g., including additional tracks of measurement paths in middle, bottom, top, side, exterior, interior, etc. portions of workpiece features, such as are described in more detail in the previously incorporated '083 patent).
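A measurement path of the kind described (evenly spaced touch points around a cylinder's upper circumference, used to check how closely it approximates a circle) can be sketched as follows; the function name and parameterization are illustrative:

```python
import math

def circle_sampling_points(center_xy, radius, z, n_points):
    """Sketch: generate n_points evenly spaced touch points around a
    cylinder's circumference at height z, as a measurement path for a
    cylindricity/circularity check (illustrative only)."""
    pts = []
    for i in range(n_points):
        a = 2 * math.pi * i / n_points
        pts.append((center_xy[0] + radius * math.cos(a),
                    center_xy[1] + radius * math.sin(a),
                    z))
    return pts


# Four sampling points around a 10 mm radius cylinder at z = 25 mm.
sp = circle_sampling_points((0.0, 0.0), 10.0, 25.0, 4)
print(len(sp))  # one touch point per quarter turn
```

An actual plan would also interleave approach/retract moves between the touch points and typically sample multiple tracks (middle, bottom, top) along the cylinder's axis.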
As noted above, in response to a performance of a current feature-directed operation that is included in a first set of feature-directed operations, a set of transparency operations may be performed including initially identifying as a current target feature a workpiece feature in the 3D view that corresponds to a workpiece feature or inspection operation representation that is indicated by the current feature-directed operation. InFIG.5, in one example implementation an inspection operation representation that is indicated by the current feature-directed operation may be the inspection operation representation336IN8B. For example, a user may have performed a selection operation for selecting the inspection operation representation336IN8B. In various implementations, since the inspection operation representation336IN8B corresponds to an inspection operation performed on the workpiece feature336F8, the corresponding workpiece feature326F8in the 3D view may be identified as the current target feature, for which the occluding workpiece feature326F3may be automatically rendered as at least partially transparent. It will be appreciated that such transparency operations not only improve the visibility of the workpiece feature326F8in the 3D view, but also improve the visibility of any inspection operations that are illustrated as being performed on the workpiece feature326F8(e.g., such as the measurement path MP and corresponding sampling points SP, etc.). In various implementations, in addition to a selection of an inspection operation representation, the first set of feature-directed operations may further include a performance of an inspection operation on a workpiece feature. In such a configuration, when an inspection operation is performed, a workpiece feature that the inspection operation is directed to may be automatically identified as a current target feature by the transparency operations. 
In various implementations, the inspection operation that is performed may be included in an inspection sequence, and may include measuring and/or touching a sampling point on the workpiece feature that the inspection operation is directed to (e.g., using a CMM measuring probe). In various implementations, when an inspection operation is automatically performed as part of an active program simulation, a workpiece feature that the inspection operation is performed on may be automatically identified as a current target feature by the transparency operations. In various implementations, when an inspection operation is performed as part of manually or semi-automatically stepping through a program simulation, a workpiece feature that the inspection operation is performed on may be automatically identified as a current target feature by the transparency operations. In any of these examples, once a current target feature is identified (e.g., the workpiece feature326F8), as described above the transparency operations may further include automatically rendering as at least partially transparent in the 3D view an occluding workpiece feature (e.g., the workpiece feature326F3) that would otherwise be occluding at least a portion of the current target feature in the 3D view. With respect to illustrating and otherwise facilitating a user's understanding of where a workpiece feature and/or inspection operation (e.g., as selected by the user or otherwise activated, etc.) fits into an overall current workpiece feature inspection plan, as noted above, the simulation status and control portion380may include a simulation status portion381and a simulation animation control portion390. 
Using synchronization techniques, the simulation status portion381may be configured to characterize a state of progress through the current workpiece feature inspection plan corresponding to a currently displayed 3D view of the workpiece inspection program simulation portion322and to the corresponding states of progress through the editable plan representations314and334. In various implementations, the simulation status portion381may include a current time indicator382that moves along a graphical total time range element383to characterize a state of progress through the current workpiece feature inspection plan corresponding to the currently displayed 3D view and to the corresponding states of progress through the editable plan representations314and334, and the execution time indicator372may be displayed in association with the graphical total time range element383. In one implementation, as illustrated in the example ofFIG.5, the execution time indicator372may be displayed in the vicinity of the right-hand end of the graphical total time range element383. In various implementations, the simulation status portion381may further include a current time display384displayed in the vicinity of at least one of the current time indicator382or the graphical total time range element383, and the current time display384may include a numerical time representation that is automatically updated corresponding to the current time indicator382or the currently displayed 3D view, and that further characterizes the state of progress through the current workpiece feature inspection plan corresponding to the currently displayed 3D view and to the corresponding states of progress through the editable plan representations314and334. 
In the example ofFIG.5, the current time display384indicates a time of “0:02:02” out of a total time indicated by the execution time indicator372of “0:18:06”, and the current time indicator382is shown at a proportional position along the graphical total time range element383. This position of the current time indicator382and the time of the current time display384correspond to the current state of progress through the current workpiece feature inspection plan, which relative to the state of progress through the editable plan representation314indicates that the workpiece feature316F8is being inspected (e.g., after the planned inspections of workpiece features316F1-316F7in the inspection plan). Correspondingly, relative to the state of progress through the editable plan representation334, this indicates that the workpiece feature336F8is being inspected (e.g., after the planned corresponding inspections of the workpiece features336F1-336F7). In one implementation, the simulation animation control portion390may include elements that are usable to control an animated display of simulated progress through the current workpiece feature inspection plan as displayed in the 3D view. For example, a start element391, stop element393, reverse element395and loop element396are illustrated in the simulation animation control portion390, although it will be appreciated that in other implementations other elements (e.g., corresponding to pause, reset, increase speed, decrease speed, etc.) may also be included. In various implementations, the simulation status portion381may be adjustable by a user. 
For example, the position of the current time indicator382along the graphical total time range element383may be directly adjustable by a user and/or may be indirectly adjusted (e.g., through operation of control elements such as elements391,393,395,396, etc.), and when the position of the current time indicator382is adjusted, the currently displayed 3D view may be altered to correspond to the state of progress through the current workpiece feature inspection plan that is indicated by the position of the current time indicator382. In an instance where the current time indicator382is actively being slid by a user along the graphical total time range element383, a progression through the current workpiece feature inspection plan may be displayed in the 3D view window320at a speed that corresponds to the speed at which the current time indicator382is being slid. In one implementation, the first set of feature-directed operations may include a feature-directed operation comprising a selection operation that comprises a selection or adjustment of the position of the current time indicator (e.g., by the user). In such an implementation, the workpiece feature or inspection operation representation that is indicated by the current feature-directed operation is the workpiece feature or inspection operation representation that corresponds to the state of progress through the editable plan representation314or334. In the example ofFIG.5, with respect to the editable plan representation314, this would correspond to the workpiece feature316F8or a corresponding inspection operation representation that is being performed or inspected at the indicated time as part of the inspection plan. With respect to the editable plan representation334, this would correspond to the workpiece feature336F8or a corresponding inspection operation representation that is being performed or inspected at the indicated time as part of the inspection plan.
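The mapping from an indicator position to the feature being inspected can be sketched as a lookup into a time-indexed schedule. The function name, the schedule layout, and the specific step times are assumptions made for illustration; only the total time (0:18:06) and current time (0:02:02) come from the example above.

```python
def feature_at_indicator(position_fraction, total_time_s, schedule):
    """Map the position of a time indicator (a fraction 0.0-1.0 along the
    total time range) to the workpiece feature whose inspection is in
    progress at that moment of the plan.

    schedule: sorted, non-overlapping (start_s, end_s, feature_id) tuples.
    Returns None when no inspection step is active at that time.
    """
    current_time_s = position_fraction * total_time_s
    for start_s, end_s, feature_id in schedule:
        if start_s <= current_time_s < end_s:
            return feature_id
    return None

# Hypothetical schedule for a total plan time of 0:18:06 (1086 s); at
# 0:02:02 (122 s) the feature "316F8" is the one being inspected.
schedule = [(0, 61, "316F1"), (61, 100, "316F2"), (100, 180, "316F8")]
print(feature_at_indicator(122 / 1086, 1086, schedule))  # → 316F8
```

The same lookup works in both directions: adjusting the indicator selects a feature, and stepping through the plan moves the indicator.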
As noted above, inFIG.5the measurement path MP indicates a motion path representation for the stylus tip21T′ and corresponding sampling points SP, corresponding to a portion of the current inspection plan represented in the various windows. As described in more detail in the previously incorporated '083 patent, in various implementations at least a portion of a measurement path MP for a coordinate measuring machine (e.g. for a touch probe, or other surface location sensor), including a plurality of sampling points, may be generated using a generic sampling pattern which corresponds to the type of surface feature that is being measured. For example, the measurement path MP illustrated inFIG.5may initially be generated utilizing a generic sampling pattern for a circle or cylinder, similar to the cylindrical workpiece feature326F8. FIG.6is a diagram of a user interface illustrating a target feature326F2or326F8and occluding workpiece features that have been rendered as at least partially transparent in the 3D view window320by performance of transparency operations. In one example implementation, the workpiece feature326F2may be identified as the current target feature (e.g., in accordance with a user's selection or other indication of the corresponding workpiece feature336F2in the editable plan representation334). In accordance with the identification of the workpiece feature326F2as the target feature, the workpiece features326F3,326F7,326F8and326F11may each be rendered as at least partially transparent by the transparency operations. More specifically, since each of the workpiece features326F3,326F7,326F8and326F11would otherwise be occluding at least a portion of the current target feature326F2, each of those workpiece features may be rendered as at least partially transparent. 
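The determination of which workpiece features occlude a target can be sketched with a simplified depth test. A real renderer would test occlusion per pixel; the axis-aligned bounding boxes, the fixed view direction, and all names here are assumptions made purely for illustration.

```python
def find_occluders(target, features):
    """Simplified stand-in for the occlusion determination: a feature is
    treated as occluding the target if its screen-plane (x/y) bounding box
    overlaps the target's and it lies nearer to the viewer (larger z, with
    the viewing direction assumed along -z).

    Each feature: {"id": str, "min": (x, y, z), "max": (x, y, z)}.
    """
    def overlaps_xy(a, b):
        # Overlap of the x/y extents, i.e. the screen-plane projections.
        return (a["min"][0] < b["max"][0] and b["min"][0] < a["max"][0]
                and a["min"][1] < b["max"][1] and b["min"][1] < a["max"][1])

    return [f["id"] for f in features
            if f["id"] != target["id"]
            and overlaps_xy(f, target)
            and f["max"][2] > target["max"][2]]
```

The returned ids are the features that the transparency operations would render as at least partially transparent; features off to the side of the target are excluded, matching the behavior described for features 326F7 and 326F11 when 326F8 is the target.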
Corresponding to a selection or other indication of the workpiece feature336F2, the current time display384is shown to indicate a time of “0:01:01” out of a total time indicated by the execution time indicator372of “0:18:06”, and the current time indicator382is shown to be at a proportional position across the graphical total time range element383. This indicates that the inspection of the workpiece feature326F2occurs approximately at the time “0:01:01” (e.g., after the planned inspection of the workpiece feature336F1in the inspection plan). In the illustration ofFIG.6, the touch probe stylus tip21T′ may be positioned to be contacting a sampling point SP on the workpiece feature326F2(i.e., wherein the workpiece feature326F8is open at the bottom and is resting on the workpiece feature326F2). As part of a measurement path (not shown) other sampling points on the workpiece feature326F2may also be determined (e.g., including some potentially behind or otherwise occluded by the workpiece features326F7and326F11). In general, it will be appreciated that the rendering of the workpiece features326F3,326F7,326F8and326F11as at least partially transparent by the transparency operations improves the visibility of the workpiece feature326F2and any corresponding inspection operations related to the workpiece feature326F2that may otherwise be occluded by the workpiece features326F3,326F7,326F8and326F11. In an alternative example implementation, the workpiece feature326F8may be identified as the current target feature (e.g., in accordance with a user's selection or other indication of the workpiece feature336F8in the editable plan representation334), for which only the workpiece features326F3and326F8may each be rendered as at least partially transparent by the transparency operations. 
It will be appreciated that in this particular example implementation, the workpiece features326F7and326F11may not be rendered as at least partially transparent by the transparency operations, as they may be determined to not be occluding the workpiece feature326F8in the 3D view. In such an example implementation, the touch probe stylus tip21T′ may be determined to be contacting a sampling point SP on the workpiece feature326F8(e.g., wherein the workpiece feature326F8has a capped or otherwise solid bottom end that the sampling point SP is contacting). In such an example implementation, the transparency operations may further include determining that at least the sides of the current target feature326F8may be a foreground portion of the current target feature326F8which would otherwise be occluding at least a background portion (e.g., the bottom) of the current target feature326F8. In regard to such a determination, the transparency operations may further include automatically rendering as at least partially transparent in the 3D view the sides of the target feature326F8(i.e., as a foreground portion) that would otherwise be occluding at least the bottom of the target feature326F8(i.e., as a background portion) in the 3D view. In other implementations, the transparency operations may include automatically rendering the entire target feature326F8as at least partially transparent (e.g., to improve the visibility of inspection operations relative to various portions of the target feature326F8). InFIG.6, as part of a measurement path (not shown), other sampling points on the workpiece feature326F8may also be determined (e.g., such as those illustrated inFIG.5, etc.). 
It will be appreciated that in such an example implementation, in addition to the rendering of the workpiece feature326F3as at least partially transparent, the rendering of the workpiece feature326F8(or portions thereof) as at least partially transparent may improve the visibility of certain otherwise occluded portions of the workpiece feature326F8and certain otherwise occluded corresponding inspection operations related to the workpiece feature326F8(e.g., with respect to various sampling points on the workpiece feature326F8, etc.). FIG.7is a flow diagram illustrating one exemplary implementation of a routine700for operating a system for programming workpiece feature inspection operations for a coordinate measuring machine. At a block710, a determination is made that a selection has been made for selecting at least one of a workpiece feature or inspection operation representation in an editable plan representation. At a block720, a workpiece feature in a 3D view is automatically designated as a target feature that corresponds to the selected workpiece feature or inspection operation representation. At a decision block730, a determination is made as to whether there are one or more workpiece features that are occluding at least a portion of the target feature in the 3D view. If there are no occluding workpiece features, the routine ends. If it is determined that there are one or more occluding workpiece features, the routine proceeds to a block740, where the one or more occluding workpiece features that would otherwise be occluding at least a portion of the target feature in the 3D view are automatically rendered as at least partially transparent in the 3D view. While preferred implementations of the present disclosure have been illustrated and described, numerous variations in the illustrated and described arrangements of features and sequences of operations will be apparent to one skilled in the art based on this disclosure. 
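The flow of routine 700 (blocks 710-740) can be sketched as follows. The callables `plan_to_3d`, `occluders_of`, and `set_transparency`, and the 0.5 alpha value, are illustrative assumptions standing in for the system's actual mapping, occlusion test, and renderer.

```python
def routine_700(selected_repr, plan_to_3d, occluders_of, set_transparency):
    """Sketch of routine 700: on a selection of a workpiece feature or
    inspection operation representation in the editable plan, designate the
    corresponding 3D-view feature as the target and render any occluding
    features as at least partially transparent."""
    # Block 720: designate the corresponding 3D feature as the target.
    target = plan_to_3d[selected_repr]
    # Decision block 730: are any features occluding the target in the 3D view?
    occluders = occluders_of(target)
    if not occluders:
        return target, []                  # no occluders: the routine ends
    # Block 740: render each occluding feature as partially transparent.
    for feature in occluders:
        set_transparency(feature, 0.5)     # 0.5 alpha is illustrative
    return target, occluders
```

Block 710 (detecting the selection itself) is the caller's responsibility in this sketch; the routine is invoked once a selection has been made.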
Various alternative forms may be used to implement the principles disclosed herein. In addition, the various implementations described above can be combined to provide further implementations. All of the U.S. patents and U.S. patent applications referred to in this specification are incorporated herein by reference, in their entirety. Aspects of the implementations can be modified, if necessary, to employ concepts of the various patents and applications to provide yet further implementations. The disclosure of U.S. provisional patent application Ser. No. 62/611,833, filed Dec. 29, 2017, is incorporated herein in its entirety. These and other changes can be made to the implementations in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific implementations disclosed in the specification and the claims, but should be construed to include all possible implementations along with the full scope of equivalents to which such claims are entitled.
11860603 | DETAILED DESCRIPTION The present disclosure is herein described in detail with reference to embodiments illustrated in the drawings, which form a part hereof. Other embodiments may be used and/or other changes may be made without departing from the spirit or scope of the present disclosure. The illustrative embodiments described in the detailed description are not meant to be limiting of the subject matter presented herein. Reference will now be made to the exemplary embodiments illustrated in the drawings, and specific language will be used herein to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Alterations and further modifications of the inventive features illustrated herein, and additional applications of the principles of the inventions as illustrated herein, which would occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the present disclosure. Referring toFIG.1, a system100for manufacturing facility information management is shown. As shown inFIG.1, the system100includes a work area102, such as a plant floor area, having a plurality of work cells104, which are designated for a particular type of manufacturing activity. For instance, in the illustrated embodiment, the system100is a metal fabrication facility where the work cells104are welding work cells and each include at least one metal fabrication apparatus, such as a welding apparatus106. The system100is configured to determine the degree of efficiency of the use of the welding apparatus106by the operator staff, as discussed in further detail below. 
In alternate embodiments, however, the work cells104include any other type of manufacturing machinery or tools that are configured to send a signal indicating their activation to the data management server108via the electronic network110, such as a local area network (LAN), a wide area network (WAN), or the Internet. In the illustrated embodiment, the welding apparatus106is connected to a communication interface112, such as an AXCESS interface of the INSIGHT CORE welding information management system available from Miller Electric Manufacturing Co., 1635 W Spencer St., Appleton, Wis. 54914, an embodiment of such system being described in further detail in U.S. Application Publication Nos. 2014/0278243 and 2012/0085741, which are assigned to the present assignee and each incorporated by reference herein in its entirety. In one embodiment, the communication interface112includes one or both of a wired (e.g., Ethernet based) and a wireless (e.g., Wi-Fi, Bluetooth, Zigbee and the like) connection to the welding apparatus106. In another embodiment, the communication interface112is integrated into the welding apparatus106. When the operator strikes an arc (i.e., the welding apparatus is being used during the process of welding), the welding apparatus106sends to the data management server108via the network110an arc on indicator message, including an associated time stamp indicating the beginning and end of a time period when an arc has been struck by the operator. Additionally, in accordance with the present disclosure, the work cell104includes one or more presence sensors114connected to the communication interface112via a wired and/or wireless network connection.
In various embodiments, the presence sensors114include one or more of proximity or presence sensors including a motion sensor, such as a passive infrared (PIR) sensor, an acoustic sensor, a video camera, a photo camera, an air flow sensor, a pressure sensor (e.g., a pressure switch under a floor mat), a current sensor (e.g., detecting use of ancillary equipment), a sensor detecting use of a material clamp, a wireless device such as an RFID sensor, a mobile phone, a wearable sensor, a light current sensor, a safety floor mat, or any other electronic or mechanical sensors or combinations thereof configured to detect presence of a person by way of detecting movement, sound, pressure, air-flow and/or heat, and the like, including the sensors described in the incorporated U.S. Application Publication No. 2012/0085741. In an embodiment, upon detecting physical presence of a welding apparatus operator (e.g., plant worker) inside the work cell104, the presence sensor(s)114communicate an operator present indicator signal, along with associated time stamps indicating beginning and end of operator presence, to the data management server108via the communication interface112and network110. Based on the received arc on and operator present indicator messages, the data management server108, in turn, is configured to accurately determine an arc on percentage that takes into account the time periods (e.g., throughout a shift) when the operator was actually present in the work cell104(an operator factor) and the welding apparatus106was available for use, as discussed in further detail below with reference toFIG.2. In the illustrated embodiment, the arc on and operator present indicators and associated time stamps are received and stored in a database at the data management server108, which calculates an operator factor-based arc on percentage for the time period specified by the user and presents it via a user interface of the user computing device116. 
In various embodiments, the data management server108includes one or more computer processors executing computer readable instructions for determining an operator factor-based arc on percentage metric, as discussed inFIG.2below. Those skilled in the art will realize that one embodiment of the data management server108may be a single server machine, while other embodiments may include multiple network connected computer servers as a distributed computing implementation, with such servers located in the same or different physical locations. Based on the operator present data, the data management server108additionally estimates the number of employees/operators working in the work area102during a user-defined time period by determining an average number of simultaneously active work cells104in which operator presence was detected. This removes the need to access time card records to determine this information for purposes of manufacturing efficiency reporting and analysis. While the illustrated embodiment depicts a data management server108that stores and processes the arc on and operator present messages and performs the associated calculations via one or more processors, those skilled in the art will realize that in alternate embodiments, this information may be stored and processed directly via the user computing device116, such as a computer or a mobile computing device, including a dedicated portable computing terminal or a smart phone. Referring toFIG.2, an embodiment of a method for determining usage efficiency of a welding apparatus via an arc on percentage determination based on an operator presence factor is shown. In the illustrated embodiment, in step200, the data management server108receives user input of a work period, such as a work shift time duration or a partial increment thereof, during which an arc on percentage and other operator presence based calculations need to be determined.
A work period is generally an amount of time that is allocated to account for the activities in the work area. In step202, the data management server108receives from each welding apparatus106, an arc on indicator message along with an associated time stamp indicating the time and the duration of the time period during which an arc has been struck during the work period specified by the user in step200. In step204, the data management server108receives from the presence sensors114in each of the work cells104an operator present indicator message along with the time and the duration of the time period corresponding to the operator's presence in the work cell during the work period specified by the user. In step206, the data management server108continues to accumulate the arc on and operator present messages and associated time information until the work period time duration specified by the user is over. Next, in step208, the data management server108calculates an arc on percentage indicative of the usage of each welding apparatus106based on the operator present and arc on messages received in steps202,204during the work period specified by the user in step200. In an embodiment, for a given work period specified by the user, the operator factor-based arc on percentage is calculated by dividing the total amount of time during which arc on indicator messages were received (i.e., based on a sum of associated arc on durations) by the operator factor—i.e., by the total amount of time during which the operator was detected to be present in the work cell104. This provides an accurate estimation of welding apparatus utilization and operator efficiency. Finally, in step210, the data management server108determines the number of employees or operators working in the work area102throughout the work period defined by the user. 
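The step 208 calculation can be sketched directly from the description: total arc-on time divided by the operator factor. The function name and the interval representation are assumptions; only the formula comes from the text above.

```python
def arc_on_percentage(arc_on_intervals, operator_present_intervals):
    """Operator factor-based arc on percentage for one work cell over a
    work period: total arc-on time divided by the operator factor (the
    total time the operator was detected present in the work cell), per
    the timestamped indicator messages. Intervals are (start_s, end_s)."""
    arc_on_s = sum(end - start for start, end in arc_on_intervals)
    present_s = sum(end - start for start, end in operator_present_intervals)
    if present_s == 0:
        return 0.0   # no operator presence detected during the work period
    return 100.0 * arc_on_s / present_s

# Operator present for 4 hours of the shift; arc struck for 1 hour total.
present = [(0, 4 * 3600)]
arc_on = [(600, 1800), (5000, 7400)]   # 1200 s + 2400 s = 3600 s
print(arc_on_percentage(arc_on, present))  # → 25.0
```

Dividing by operator-present time rather than the full shift length is what distinguishes this metric from a naive arc-on-over-shift percentage.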
In particular, if activity is detected on average in “n” work cells104throughout the shift or another user defined time period (e.g., as a trend over multiple work periods), then the data management server108determines that “n” is the number of employees that worked during that shift. As discussed above in connection withFIG.1, the data management server causes the user device116to display a report having a graphical and/or numerical representation of the operator factor-based arc on percentage and the number of employees working during the user specified time interval for tracking equipment utilization rate and manufacturing efficiency. While the embodiments of the operator presence based arc on percentage and number of employees determinations described above are associated with a user-selected work period, additional embodiments include calculations of the above-described metrics without user input of a work period. For instance, in such embodiments, the above-described metrics are determined whenever activity is detected in a work cell. In general, operator factor based calculations utilize detection of operator presence and/or activity type for determining a number of efficiency metrics, such as an arc on percentage, a number of people working during a time period, among others. For instance, in various embodiments, additional information can be gathered, calculated, and displayed. For example, the active time of tools, machines, or processes may be individually tabulated along with or in lieu of arc on time. Thereby, a more complete understanding of all activities of the work area may be gained. In one illustrative example, an operator was in the work area for four hours and spent one hour welding (arc on), one hour on non-value added grinding, one hour fitting parts, while the operator's activity for another hour is unknown. Thus, an arc on time percentage of 25% would be displayed as the welding operator efficiency.
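The arithmetic of the four-hour illustrative example above can be tabulated per activity, with the unaccounted-for hour falling out as the remainder. The dictionary layout is an assumption made for the example.

```python
# Arithmetic of the illustrative four-hour example (times in hours).
present_h = 4.0
activities_h = {"arc on": 1.0, "grinding": 1.0, "fitting parts": 1.0}
unknown_h = present_h - sum(activities_h.values())  # remaining hour, activity unknown

arc_on_pct = 100.0 * activities_h["arc on"] / present_h
print(f"arc on percentage: {arc_on_pct:.0f}%")   # → arc on percentage: 25%
print(f"unaccounted time: {unknown_h:.1f} h")    # → unaccounted time: 1.0 h
```

Tabulating the non-welding activities alongside arc on time is what makes it possible to see, for instance, that non-value added grinding consumed as much time as welding.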
The additional gathered data (e.g., identifying remaining activity) could be used to understand what is preventing the operator from welding, for example, when the operator is spending an equal amount of time on non-value added grinding, as in the above example. In various embodiments, determining how many operators worked during a shift (e.g., via presence and/or activity type detection) enhances the accuracy of determination of various additional efficiency metrics, such as the arc on percentage, as well as deposition rate per person and/or per hour (deposition is the amount of filler metal deposited in a piece of work, weldment or individual weld). The operator factor based calculation can account for job shops where people move around to different work cells depending on orders or on the work that needs to be done. For example, if presence is detected in 10 of the 15 work cells in a predetermined percentage (e.g., 60 percent or more) of the 32 work periods in a given shift, it could be determined that there are 10 operators working that shift. In one embodiment, knowing the number of people within each shift takes the guesswork out of an arc on percentage calculation where (number of operators)×(number of hours in a shift) is in the denominator of the arc on percentage equation; the denominator is known if the number of operators present can be accurately determined. In another embodiment, a true arc on percentage may be calculated as the total number of minutes the machines are active (i.e., arc is on) divided by the number of minutes people are working on any given shift. The operator factor is relevant in the above calculation because there could be people that weld but also operate other equipment or have other duties within an environment. In such a case, operator factor based analysis avoids counting in the above equation the time the operators are operating bending equipment or attending to non-welding duties.
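The operator-count estimate in the 10-of-15-cells example above can be sketched as a threshold over per-cell presence detections. The function name, the 0.6 default threshold, and the presence-set representation are assumptions; the 15 cells, 32 work periods, and 60 percent figure come from the example.

```python
def estimate_operator_count(presence, num_periods, threshold=0.6):
    """Estimate the number of operators working a shift: a work cell counts
    as staffed if operator presence was detected in at least `threshold` of
    the shift's work periods.

    presence: dict mapping work-cell id -> set of work-period indices in
    which operator presence was detected."""
    staffed = [cell for cell, periods in presence.items()
               if len(periods) / num_periods >= threshold]
    return len(staffed)

# 15 cells, 32 work periods; 10 cells show presence in 60%+ of the periods.
presence = {f"cell{i}": set(range(20)) for i in range(10)}            # 20/32 = 62.5%
presence.update({f"cell{i}": set(range(5)) for i in range(10, 15)})   # 5/32 = 15.6%
print(estimate_operator_count(presence, 32))  # → 10
```

The resulting count can then serve as the (number of operators) factor in the denominator of the arc on percentage equation, without consulting time card records.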
While various aspects and embodiments have been disclosed, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims. The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the steps in the foregoing embodiments may be performed in any order. Words such as “then,” “next,” etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Although process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed here may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the invention. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description here. When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed here may be embodied in a processor-executable software module which may reside on a computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable media include both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another.
A non-transitory processor-readable storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used here, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product. The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make and use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined here may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown here but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed here.
11860604 | EXAMPLE EMBODIMENT

The following describes an analysis assistance apparatus, an analysis assistance method, and a program according to an example embodiment of the present invention with reference to FIG. 1 to FIG. 10.

[Apparatus Configuration]

First, a schematic configuration of the analysis assistance apparatus according to the present example embodiment will be described using FIG. 1. FIG. 1 is a block diagram showing a schematic configuration of the analysis assistance apparatus according to an example embodiment of the present invention. An analysis assistance apparatus 10 according to the present example embodiment shown in FIG. 1 is an apparatus for assisting the analysis on a control program for controlling respective parts of a plant. As shown in FIG. 1, the analysis assistance apparatus 10 includes a control program obtainment unit 11, an event information obtainment unit 12, and a safety barrier search unit 13. The control program obtainment unit 11 obtains a control program for controlling respective parts of the plant based on sensor data from a sensor installed in the plant. The event information obtainment unit 12 obtains event information, which includes a variable that defines a state of the plant when a predetermined event has occurred and a value thereof, as information necessary for searching the control program for a safety barrier for avoiding the occurrence of the predetermined event in the plant. The safety barrier search unit 13 first extracts, from the control program obtained by the control program obtainment unit 11, a causal relationship between an input variable and an output variable of the control program. Then, the safety barrier search unit 13 searches the control program for a safety barrier based on the variable and the value thereof included in the event information obtained by the event information obtainment unit 12, and on the extracted causal relationships.
As described above, the analysis assistance apparatus 10 according to the present example embodiment can automatically extract a safety barrier from the control program. Therefore, according to the analysis assistance apparatus 10, the analysis on the control program can be assisted in a case where the safety of the plant is assessed. Next, the specifics of the configuration and functions of the analysis assistance apparatus according to the present example embodiment will be described using FIG. 2 to FIG. 8. FIG. 2 is a configuration diagram showing an example of a configuration of a plant in which the analysis is to be assisted in the example embodiment of the present invention. FIG. 3 is a block diagram showing a relationship between the analysis assistance apparatus and a control system of the plant shown in FIG. 2 in the example embodiment of the present invention. FIG. 4 is a block diagram showing the configuration of the analysis assistance apparatus according to the example embodiment of the present invention in a more specific manner. As shown in FIG. 2, in the present example embodiment, a plant 20 includes a water reservoir tank 21, a water level sensor (LIT101) 22, a supply line 23, a water discharge line 24, a pump (PMP101) 25, and a valve (MV101) 26. The plant 20 also includes a PLC (PLC1) 30 that, as a control device, executes a control program. The water level sensor 22 measures the water level of water reserved in the water reservoir tank 21 in four stages (H2, H, L, L2), and outputs sensor data indicating the measured water level. The supply line 23 is a line for supplying water to the water reservoir tank 21. The pump 25 is mounted on the supply line 23. The water discharge line 24 is a line for discharging water in the water reservoir tank 21. The valve 26 is mounted on the water discharge line 24. The PLC 30 adjusts the water level by placing the pump 25 in operation, or by opening or closing the valve 26, in accordance with sensor data output from the water level sensor 22.
Furthermore, as shown in FIG. 3, in the plant 20, the PLC 30 is connected to an engineering workstation 33 and a terminal apparatus 31 that is used by an operator via a network switch 32 and a control network (NW_c1) in such a manner that data can be communicated. Moreover, the PLC 30 is also connected to the analysis assistance apparatus via the network switch 32 and the control network (NW_c1). The terminal apparatus 31 provides an HMI to the operator. The operator performs operations on the HMI of the terminal apparatus 31. The engineering workstation 33 administers the operation states of respective PLCs, and also holds control programs thereof. Furthermore, the engineering workstation 33 updates the control programs in accordance with an instruction from, for example, the operator. In addition, as shown in FIG. 3, the PLC 30 is connected to the water level sensor 22, the pump 25, and the valve 26 via a field network f1 (NW_f1). As described above, the plant 20, which is to be monitored in the present example embodiment, includes the PLC that executes the control program and the networks via which the PLC and other apparatuses are connected. Furthermore, the configuration of the analysis assistance apparatus 10 according to the present example embodiment will be described in a more specific manner using FIG. 4. As shown in FIG. 4, the analysis assistance apparatus 10 includes an event derivation unit 14, a display unit 15, and a causal relationship storage unit 16 in addition to the control program obtainment unit 11, the event information obtainment unit 12, and the safety barrier search unit 13 shown in FIG. 1. In the present example embodiment, the control program obtainment unit 11 obtains, from the engineering workstation 33 shown in FIG. 2, a control program held therein. FIG. 5 shows an example of a control program used in the example embodiment of the present invention. Also, the control program shown in FIG. 5 is a part of a control program of the PLC 30.
In FIG. 5, each of “LIT101.H2”, “LIT101.H”, “LIT101.L2”, “LIT101.L”, and “PMP101.ON” denotes a signal used in the plant 20. Specifically, “LIT101.H2” is a signal that becomes “High (1)” when the water level is the highest of the four stages, and becomes “Low (0)” otherwise. “LIT101.H” is a signal that becomes “High (1)” when the water level has decreased from the highest water level by one stage, and becomes “Low (0)” otherwise. “LIT101.L” is a signal that becomes “High (1)” when the water level has decreased from the highest water level by two stages, and becomes “Low (0)” otherwise. “LIT101.L2” is a signal that becomes “High (1)” when the water level is the lowest of the four stages, and becomes “Low (0)” otherwise. Also, in FIG. 5, “D_OUT_CH0” is a signal (flag) that controls the pump 25, and becomes “1” when issuing a driving instruction and “0” when issuing a stopping instruction. Furthermore, “PLANT_SD” is a signal (flag) indicating whether stopping of the entirety of the plant 20 has been requested, and becomes “1” when a shutdown of the entirety of the plant 20 has been requested and “0” otherwise. “PLANT_STOP” is a signal (flag) indicating whether an emergency stop button has been pressed, and becomes “1” when the emergency stop button has been pressed and “0” otherwise. Note that a signal which has been received by the PLC 30 from the sensor and the like and which communicates, for example, a state of a manufacturing process is written into a location corresponding to this signal inside a storage device included in this PLC. Also, a signal which has been transmitted by the PLC to the actuator and the like and which, for example, activates or stops the actuator and the like is read out from a location corresponding to this signal inside the storage device included in the PLC. Furthermore, a location inside the storage device is generally referred to as a variable, a tag, a register, and so forth; in the present example embodiment, it is referred to as a variable.
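The four water-level signals described above can be sketched as follows. This is a minimal illustration, not code from the source: the signal names come from the text, but the assumption that exactly one stage signal is active at a time is mine.

```python
# Hedged sketch: encoding the four-stage reading of the LIT101 water
# level sensor as the High/Low signals described in the text. The
# one-active-signal-at-a-time behavior is an assumption.

STAGES = ["H2", "H", "L", "L2"]  # highest water level ... lowest

def level_to_signals(stage: str) -> dict:
    """Return the LIT101.* signals: 1 for the measured stage, 0 otherwise."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    return {f"LIT101.{s}": int(s == stage) for s in STAGES}

print(level_to_signals("H"))
# {'LIT101.H2': 0, 'LIT101.H': 1, 'LIT101.L': 0, 'LIT101.L2': 0}
```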
In addition, a variable into which a received signal is written is referred to as an input variable, and a variable from which a signal to be transmitted is read out is referred to as an output variable. In the present example embodiment, the event information obtainment unit 12 obtains event information related to a failure event (e.g., “a failure to stop the pump”). A failure event is an event in which a safety barrier did not function and whose occurrence therefore could not be deterred. Also, a safety barrier is means for making a transition from a state where a failure event has occurred to a state where the occurrence of the failure event has been avoided, and is specifically a program module implemented in a control program. Furthermore, a safety barrier includes an origin point and an end point of the transition, as well as a transition condition that causes the transition. In the present example embodiment, the safety barrier search unit 13 first extracts causal relationships from the control program of the PLC 30, and stores the extracted causal relationships into the causal relationship storage unit 16. A causal relationship indicates, for example, the input variables whose values served as the bases for determining the value of an output variable output from each PLC. A description is now given of the specifics of processing in which the safety barrier search unit 13 extracts causal relationships using FIG. 6. FIG. 6 is a diagram showing examples of the causal relationships extracted in the example embodiment of the present invention. Specifically, as shown in FIG. 6, for each processing step (row) in the control program (see FIG. 5), the safety barrier search unit 13 first specifies an output variable from among respective variables that are read out and written as analysis processing.
An output variable is a special variable that is used by the PLC to transmit signals for controlling respective parts of the plant; in many cases, a naming rule describing that the start of a variable name of an output variable be “D_OUT”, for example, is separately set. For example, the safety barrier search unit 13 can specify a variable that matches the naming rule of output variables as an output variable. Subsequently, the safety barrier search unit 13 specifies, from the control program, input variables that can influence the determination of the value of the output variable. Furthermore, when there are other input variables that can influence the determination of the values of the specified input variables, the other input variables are specified as well; for this reason, the safety barrier search unit 13 specifies input variables recursively as long as a new input variable is specified. Input variables that can influence the determination of the value of a certain output variable can be specified as follows. For example, when the control program includes assignment statements that assign a value to an output variable, the safety barrier search unit 13 specifies, as input variables, other variables that are referred to by the assignment statements as the values to be assigned. Furthermore, when whether to execute the assignment statements in the control program is determined by the result of evaluation of a condition expression in a conditional branch, such as an IF statement, the safety barrier search unit 13 also specifies, as input variables, other variables that are referred to by the condition expression.
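The two steps above (matching the output-variable naming rule, then recursively collecting influencing variables) can be sketched as follows. The “D_OUT” prefix comes from the text; the representation of the control program as a list of (assigned variable, referenced variables) pairs, standing in for assignment statements and their enclosing IF conditions, is an assumption for illustration.

```python
import re

# Hedged sketch of the analysis steps described in the text. The
# program representation and sample statements are hypothetical.

OUTPUT_RULE = re.compile(r"^D_OUT")  # naming rule from the text

def is_output_variable(name: str) -> bool:
    return bool(OUTPUT_RULE.match(name))

def influencing_inputs(statements, output_var):
    """Recursively collect variables that can influence `output_var`.

    `statements` is a list of (assigned_var, referenced_vars) pairs,
    where referenced_vars covers both the assigned expression and any
    enclosing IF-condition. Repeats until no new variable is found.
    """
    found, frontier = set(), {output_var}
    while frontier:
        target = frontier.pop()
        for assigned, referenced in statements:
            if assigned == target:
                new = set(referenced) - found - {output_var}
                found |= new
                frontier |= new
    return found

# Hypothetical fragment: the pump flag depends on PLANT_SD, which in
# turn depends on LIT101.H2 and PLANT_STOP.
stmts = [("D_OUT_CH0", ["PLANT_SD", "LIT101.L"]),
         ("PLANT_SD", ["LIT101.H2", "PLANT_STOP"])]
print(sorted(influencing_inputs(stmts, "D_OUT_CH0")))
# ['LIT101.H2', 'LIT101.L', 'PLANT_SD', 'PLANT_STOP']
```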
Next, the safety barrier search unit 13 searches the control program for assignment statements that assign constants to the input variables, or condition expressions that compare the input variables with constants, determines from the assignment statements or condition expressions found through the search whether the input variables can take constants, and specifies the constants (values) if the input variables can take them. Then, the safety barrier search unit 13 exhaustively generates a plurality of input value patterns from the input variables that have been specified thus far and from the constants (values) that can be taken by these input variables. Next, for each of the input value patterns, the safety barrier search unit 13 executes the control program once under a state where this input value pattern is given, and obtains an output value. At this time, input variables that were referred to by the control program during the execution are recorded as well. Then, after the execution of the control program, the safety barrier search unit 13 extracts the recorded input variables and the values thereof, as well as the values of output variables output by the control program, and uses them as causal relationships. The extracted causal relationships are as shown in FIG. 6. The safety barrier search unit 13 extracts signals included in assignment processing and in branch conditions during conditional branching processing, as well as the values thereof, as causal relationships among a plurality of signals used in the plant. In other words, the safety barrier search unit 13 extracts causal relationships by specifying an output variable, specifying input variables, specifying the values that can be taken by the input variables, generating input value patterns, and obtaining output values as well as input variables that were referred to during the execution of the control program.
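The exhaustive step above can be sketched as follows: generate every input value pattern, run the control logic once per pattern, and record the inputs actually read together with the resulting output as a causal relationship. This is an illustrative sketch only; the `pump_logic` function is a simplified stand-in for one scan of the FIG. 5 control program, with logic assumed rather than taken from the source.

```python
from itertools import product

# Hedged sketch of causal relationship extraction. `pump_logic` is a
# hypothetical stand-in for one execution of the PLC control program.

def pump_logic(v, referred):
    """Stand-in scan: stop the pump at the highest level, run at the lowest."""
    referred.append("LIT101.H2")
    if v["LIT101.H2"] == 1:
        return 0                      # stop pump at highest water level
    referred.append("LIT101.L2")
    return 1 if v["LIT101.L2"] == 1 else 0

def extract_causal_relationships(inputs, values, scan):
    relationships = []
    for pattern in product(values, repeat=len(inputs)):
        v = dict(zip(inputs, pattern))
        referred = []                 # inputs the program actually read
        out = scan(v, referred)
        cause = {name: v[name] for name in referred}
        relationships.append((cause, {"D_OUT_CH0": out}))
    return relationships

rels = extract_causal_relationships(
    ["LIT101.H2", "LIT101.L2"], values=(0, 1), scan=pump_logic)
for cause, effect in rels:
    print(cause, "->", effect)
```

Note that only the variables referred to during a given execution end up on the cause side, mirroring the recording step in the text.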
Once the causal relationships have been extracted, the safety barrier search unit 13 subsequently generates a state transition graph with use of the variable and the value included in the event information and the extracted causal relationships. The state transition graph indicates state transitions from a state defined by the variable and the value included in the event information, and also includes an origin point and an end point of the transitions, as well as transition conditions that cause the transitions. Then, the safety barrier search unit 13 searches for safety barriers by specifying, from the generated state transition graph, transitions to a state where the occurrence of the failure event has been avoided. Furthermore, the safety barrier search unit 13 may generate state transition graphs for the entirety of the plant 20 from the causal relationships in advance, and specify, from among the generated state transition graphs for the entirety, a necessary state transition graph with use of the variable and the value included in the event information. A description is now given of the specifics of processing in which the safety barrier search unit 13 searches for safety barriers using FIG. 7 and FIG. 8. FIG. 7 is a diagram showing examples of a state transition graph and a fault tree generated in the example embodiment of the present invention. FIG. 8 is a diagram showing other examples of the state transition graph and the fault tree generated in the example embodiment of the present invention. The following description will be provided using an example case where the event information includes a variable (D_OUT_CH0) for controlling the pump 25 and its value “1”. As shown in the upper level of FIG. 7, the safety barrier search unit 13 generates the state transition graph from the variable (D_OUT_CH0) for controlling the pump 25 and the causal relationships shown in FIG. 6. This state transition graph includes two states: “D_OUT_CH0=1” and “D_OUT_CH0=0”.
It also includes three transitions from the state “D_OUT_CH0=1” to the state “D_OUT_CH0=0”. In other words, in the example of FIG. 7, a state where the failure event occurs is a state where the issuance of an instruction for driving the pump 25 is continued. Therefore, in order to avoid this state, the safety barrier search unit 13 generates the state transition graph that includes transitions from a state where the instruction for driving the pump 25 is issued to a state where a stopping instruction is issued. Furthermore, the transitions between states in the state transition graph include an origin point, an end point, and transition conditions, which are generated from the causal relationships. The origin point is generated from a variable that is included among input variables in the causal relationships and is also included in the event information, and the value thereof. The end point is generated from a variable that is included among output variables in the causal relationships and is also included in the event information, and the value thereof. The transition conditions are generated from variables that are included among input variables in the causal relationships and are not included in the event information, and the values thereof. For example, transition A in FIG. 7 is generated from the second and the fourth causal relationships in FIG. 6, and includes an origin point “D_OUT_CH0=1”, an end point “D_OUT_CH0=0”, and a transition condition “LIT101.H=1”. Transition B in FIG. 7 is generated from the third causal relationship in FIG. 6, and includes an origin point “D_OUT_CH0=1”, an end point “D_OUT_CH0=0”, and a transition condition “LIT101.H2=1”. Transition C in FIG. 7 is generated from four causal relationships composed of the fourth to the eighth causal relationships in FIG. 6, and includes an origin point “D_OUT_CH0=1”, an end point “D_OUT_CH0=0”, and a transition condition “PLANT_STOP=1”.
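The rule above for deriving origin points, end points, and transition conditions can be sketched as follows. This is a simplified, hypothetical illustration: per the text, the event-information variable (here D_OUT_CH0) on the input side gives the origin, the same variable on the output side gives the end point, and the remaining input variables give the condition. The causal relationship data below imitates transitions A and B of FIG. 7 but is invented.

```python
# Hedged sketch: turning causal relationships into state transitions.
# Simplification assumed: the event variable appears on both sides of
# every relationship considered.

EVENT_VAR = "D_OUT_CH0"  # variable from the event information

def build_transitions(causal_relationships):
    transitions = []
    for inputs, outputs in causal_relationships:
        if EVENT_VAR not in inputs or EVENT_VAR not in outputs:
            continue
        origin = (EVENT_VAR, inputs[EVENT_VAR])      # event var as input
        end = (EVENT_VAR, outputs[EVENT_VAR])        # event var as output
        condition = {k: v for k, v in inputs.items() if k != EVENT_VAR}
        transitions.append({"origin": origin, "end": end,
                            "condition": condition})
    return transitions

# Invented data imitating transitions A and B in FIG. 7.
rels = [({"D_OUT_CH0": 1, "LIT101.H": 1}, {"D_OUT_CH0": 0}),
        ({"D_OUT_CH0": 1, "LIT101.H2": 1}, {"D_OUT_CH0": 0})]
for t in build_transitions(rels):
    print(t)
```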
In addition, when there is a causal relationship in which none of the values of the variable included in the event information is designated as a value of an input variable, the safety barrier search unit 13 can generate a plurality of transitions with origin points represented by respective states in the state transition graph. Similarly, when there is a causal relationship in which none of the values of the variable included in the event information is designated as a value of an output variable, the safety barrier search unit 13 can generate a plurality of transitions with end points represented by respective states in the state transition graph. For example, when there is a causal relationship in which the value of “D_OUT_CH0” is not designated as an input variable and “0” is designated as the value of “D_OUT_CH0” as an output variable (not shown), the safety barrier search unit 13 can generate a transition from an origin point “D_OUT_CH0=1” to an end point “D_OUT_CH0=0”, and a transition from an origin point “D_OUT_CH0=0” to an end point “D_OUT_CH0=0” (in a case where the values that can be taken by these variables are binary numbers 0 and 1). Then, the safety barrier search unit 13 uses transition A to transition C shown in the upper level of FIG. 7 as safety barriers. Using information included in the safety barriers extracted by the safety barrier search unit 13, the event derivation unit 14 derives new events in which the safety barriers have a possibility of not functioning, that is to say, new events that cause the occurrence of the failure event (undesired event).
As stated earlier, the extracted safety barriers are means that have been implemented in the control program to avoid the occurrence of the failure event (undesired event). Therefore, an event in which these safety barriers do not function appropriately can be a new event that causes the occurrence of the failure event. That is to say, even when the actual states of respective parts of the plant satisfy the state of the origin point and the transition conditions for a state transition in the state transition graph, the PLC may fail to execute the processing corresponding to that state transition, and consequently the states of respective parts of the plant do not change to the state of the end point in the state transition graph. As these new events compose the fault tree viewed by the operator, the event derivation unit 14 derives events that are described in a natural language that can be intuitively understood by humans. The lower level of FIG. 7 shows the new events that have been derived by the event derivation unit 14 from the safety barriers extracted by the safety barrier search unit 13, namely “transition A”, “transition B”, and “transition C”. In the example of the lower level of FIG. 7, the new events are “failure in transition A”, “failure in transition B”, and “failure in transition C”, which cause the occurrence of the failure event, namely “continuation of driving of PMP101”. Furthermore, the event derivation unit 14 may add, to the derived events, a text describing that these events have a possibility of being triggered by a cyberattack. For example, there is a possibility that these events occur as a result of falsification of the control program, or falsification of, for example, input signals that are transferred from the sensor and the like to the PLC over the networks, by a cyberattacker who has infiltrated the plant.
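The derivation step above can be sketched very simply: each extracted safety barrier yields one candidate cause, "failure in <transition>", placed under the top failure event, mirroring the partial fault tree in the lower level of FIG. 7. The natural-language template and the tree representation below are assumptions for illustration.

```python
# Hedged sketch: deriving new fault-tree events from extracted safety
# barriers, as described in the text. Template wording is assumed.

def derive_new_events(top_event, barriers):
    """Return a partial fault tree: one 'failure in ...' cause per barrier."""
    return {"event": top_event,
            "causes": [f"failure in {b}" for b in barriers]}

tree = derive_new_events("continuation of driving of PMP101",
                         ["transition A", "transition B", "transition C"])
print(tree["causes"])
# ['failure in transition A', 'failure in transition B', 'failure in transition C']
```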
Furthermore, as indicated by the upper level of FIG. 8, the safety barrier search unit 13 can also generate a state transition graph that indicates states by combining the value of the variable (D_OUT_CH0) for controlling the pump 25 and the value of a variable (PLANT_SD) that represents a request for a plant shutdown. This state transition graph includes three states: “D_OUT_CH0=1, PLANT_SD=0”, “D_OUT_CH0=1, PLANT_SD=1”, and “D_OUT_CH0=0, PLANT_SD=*” (where * represents 0 and 1). Furthermore, in the example of FIG. 8, the safety barrier search unit 13 generates four transitions composed of transition A to transition D based on the causal relationships (not shown) extracted from the control program (see FIG. 5). Transition A includes an origin point “D_OUT_CH0=1, PLANT_SD=0”, an end point “D_OUT_CH0=0, PLANT_SD=*”, and a transition condition “LIT101.H=1”. Transition B includes an origin point “D_OUT_CH0=1, PLANT_SD=0”, an end point “D_OUT_CH0=1, PLANT_SD=1”, and a transition condition “LIT101.H2=1”. Transition C includes an origin point “D_OUT_CH0=1, PLANT_SD=0”, an end point “D_OUT_CH0=1, PLANT_SD=1”, and a transition condition “PLANT_STOP=1”. Transition D includes an origin point “D_OUT_CH0=1, PLANT_SD=1”, an end point “D_OUT_CH0=0, PLANT_SD=*”, and a transition condition (unconditional). Thereafter, the safety barrier search unit 13 uses transition A to transition D as safety barriers. Furthermore, although one PLC 30 is included in the plant 20 in the example of FIG. 2 and FIG. 3 described above, the plant 20 may include a plurality of PLCs and a network via which the PLCs are interconnected, in the present example embodiment. In this example aspect, the control program obtainment unit 11 obtains control programs respectively for the PLCs. Furthermore, the safety barrier search unit 13 searches for safety barriers from the control programs executed by respective PLCs on a per-PLC basis.
Once the safety barrier search unit 13 has searched for the safety barriers, the display unit 15 transmits information for specifying the safety barriers that have been found (referred to as “safety barrier information”) to the terminal apparatus 31, and causes the terminal apparatus 31 to display the safety barriers on a screen thereof. The display unit 15 can also display the safety barriers as respective transitions that compose the state transition graph, as shown in the upper level of FIG. 7 and the upper level of FIG. 8. Furthermore, when a new event has been derived by the event derivation unit 14, the display unit 15 also transmits information for specifying this new event to the terminal apparatus 31, and causes the terminal apparatus 31 to display the new event as well on the screen thereof. In this case, too, the display unit 15 can display the failure event (undesired event) and the derived new event as a partial tree of the fault tree with use of quadrilaterals and straight lines, as shown in the lower level of FIG. 7 and the lower level of FIG. 8. For example, by referring to the state transition graph in the upper level of FIG. 7 and the partial tree of the fault tree in the lower level of FIG. 7 as displayed by the display unit 15, the operator can confirm that the causes of the occurrence of an event in which driving of the pump 25 is abnormally continued include: firstly, an event in which the pump 25 does not stop even though the water level of the water reservoir tank has changed from the highest water level to one stage below (failure in transition A); secondly, an event in which the pump 25 does not stop even though the water level of the water reservoir tank has reached the highest level (failure in transition B); and thirdly, an event in which the pump 25 does not stop even though the operator has pressed the emergency stop button (failure in transition C).
[Apparatus Operations]

Next, the operations of the analysis assistance apparatus 10 according to the present example embodiment will be described using FIG. 9. FIG. 9 is a flow diagram showing the operations of the analysis assistance apparatus according to the example embodiment of the present invention. In the following description, FIG. 1 to FIG. 8 will be referred to as appropriate. Furthermore, in the present example embodiment, the analysis assistance method is carried out by causing the analysis assistance apparatus 10 to operate. Therefore, the following description of the operations of the analysis assistance apparatus 10 applies to the analysis assistance method according to the present example embodiment. As shown in FIG. 9, first, the control program obtainment unit 11 obtains the control program (see FIG. 5) of the PLC 30 of the plant 20 from the engineering workstation 33 (step A1). Next, the event information obtainment unit 12 obtains event information, which includes a variable that defines a state of the plant when a failure event has occurred and a value thereof, as information necessary for searching the control program for safety barriers for avoiding the occurrence of the failure event in the plant 20 (step A2). Next, the safety barrier search unit 13 extracts, from the control program of the PLC 30 obtained in step A1, causal relationships between input variables and output variables in the control program (step A3). Specifically, the safety barrier search unit 13 first specifies, from the control program, input variables that can influence the determination of the value of an output variable.
Subsequently, the safety barrier search unit 13 searches the control program for assignment statements that assign constants to the input variables or condition expressions that compare the input variables with constants, and exhaustively generates a plurality of input value patterns from the input variables specified through the search and the constants (values) that can be taken by these input variables. Thereafter, for each of the input value patterns, the safety barrier search unit 13 executes the control program once under a state where this input value pattern is given, and obtains an output value; at this time, input variables that were referred to by the control program during the execution are recorded. The causal relationships are represented by the obtained input values and output values. Next, once the causal relationships have been extracted, the safety barrier search unit 13 generates a state transition graph with use of the variable and the value included in the event information obtained in step A2 and the causal relationships extracted in step A3 (step A4). Next, the safety barrier search unit 13 searches for safety barriers with use of the state transition graph generated in step A4 (step A5). Specifically, the safety barrier search unit 13 specifies, from the state transition graph, transitions to a state where the occurrence of the failure event has been avoided, and uses the specified transitions as the safety barriers. Next, the event derivation unit 14 determines whether there is a new event in which the safety barriers that have been searched for in step A5 have a possibility of not functioning; when it is determined that the new event exists, this new event is derived (step A6).
Thereafter, the display unit 15 transmits safety barrier information for specifying the safety barriers that have been searched for in step A5, as well as information for specifying the new event derived in step A6, to the terminal apparatus 31, and causes the terminal apparatus 31 to display the safety barriers and the new event on the screen thereof (step A7). As described above, according to the present example embodiment, safety barriers are automatically extracted from the control program of the PLC. Therefore, there is no need for the technician of the plant 20 to analyze the program code of the PLC as in conventional cases, and the burden on the technician is lowered. According to the present example embodiment, the analysis on the control program can be assisted in a case where the safety of the plant is assessed.

[Program]

It is sufficient for the program according to the present example embodiment to be a program that causes a computer to execute steps A1 to A7 shown in FIG. 9. The analysis assistance apparatus and the analysis assistance method according to the present example embodiment can be realized by installing this program in the computer and executing this program. In this case, a processor of the computer functions as the control program obtainment unit 11, the event information obtainment unit 12, the safety barrier search unit 13, the event derivation unit 14, and the display unit 15, and performs processing. Also, the causal relationship storage unit 16 is realized by storing a data file of causal relationships into a storage device, such as a hard disk, included in the computer. Furthermore, the program according to the present example embodiment may be executed by a computer system constructed with a plurality of computers. In this case, for example, each computer may function as one of the control program obtainment unit 11, the event information obtainment unit 12, the safety barrier search unit 13, the event derivation unit 14, and the display unit 15.
Also, the causal relationship storage unit16may be constructed on a computer that is different from the computer that executes the program according to the present example embodiment. UsingFIG.10, the following describes a computer that realizes the analysis assistance apparatus10by executing the program according to the present example embodiment. FIG.10is a block diagram showing an example of a computer that realizes the analysis assistance apparatus10according to the example embodiment of the present invention. As shown inFIG.10, a computer110includes a CPU (Central Processing Unit)111, a main memory112, a storage device113, an input interface114, a display controller115, a data reader/writer116, and a communication interface117. These components are connected in such a manner that they can perform data communication with one another via a bus121. Note that the computer110may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to the CPU111, or in place of the CPU111. The CPU111carries out various types of calculation by deploying the program (codes) according to the present example embodiment stored in the storage device113to the main memory112, and executing the codes in a predetermined order. The main memory112is typically a volatile storage device, such as a DRAM (dynamic random-access memory). Also, the program according to the present example embodiment is provided in a state where it is stored in a computer-readable recording medium120. Note that the program according to the present example embodiment may be distributed over the Internet connected via the communication interface117. Also, specific examples of the storage device113include a hard disk drive and a semiconductor storage device, such as a flash memory. The input interface114mediates data transmission between the CPU111and an input apparatus118, such as a keyboard and a mouse.
The display controller115is connected to a display apparatus119, and controls display on the display apparatus119. The data reader/writer116mediates data transmission between the CPU111and the recording medium120, reads out the program from the recording medium120, and writes the result of processing in the computer110to the recording medium120. The communication interface117mediates data transmission between the CPU111and another computer. Specific examples of the recording medium120include: a general-purpose semiconductor storage device, such as CF (CompactFlash®) and SD (Secure Digital); a magnetic recording medium, such as a flexible disk; and an optical recording medium, such as a CD-ROM (Compact Disk Read Only Memory). Note that the analysis assistance apparatus10according to the present example embodiment can also be realized by using items of hardware that respectively correspond to the components, rather than the computer in which the program is installed. Furthermore, a part of the analysis assistance apparatus10may be realized by the program, and the remaining part of the analysis assistance apparatus10may be realized by hardware. A part or an entirety of the above-described example embodiment can be represented by (Supplementary Note 1) to (Supplementary Note 12) described below, but is not limited to the description below. 
(Supplementary Note 1) An analysis assistance apparatus, including:a control program obtainment unit configured to obtain a control program for controlling respective parts of a plant based on sensor data from a sensor installed in the plant;an event information obtainment unit configured to obtain event information as information necessary for searching the control program for a safety barrier for avoiding an occurrence of a predetermined event in the plant, the event information including a variable that defines a state of the plant when the predetermined event has occurred and a value thereof; anda safety barrier search unit configured to extract, from the obtained control program, a causal relationship between an input variable and an output variable in the control program, and search the control program for the safety barrier based on the variable and the value included in the obtained event information and on the extracted causal relationship. (Supplementary Note 2) The analysis assistance apparatus according to Supplementary Note 1, further including:an event derivation unit configured to derive a new event in which the safety barrier that has been searched for has a possibility of not functioning; anda display unit configured to display the specified safety barrier and the derived new event on a screen. 
(Supplementary Note 3) The analysis assistance apparatus according to Supplementary Note 1 or 2, whereinthe safety barrier search unitgenerates, with use of the variable and the value included in the event information and the extracted causal relationship, a state transition graph which indicates a state transition from the state defined by the variable and the value included in the event information, and which includes an origin point and an end point of the transition and a transition condition that causes the transition, andsearches for the safety barrier by specifying, from the generated state transition graph, a transition to a state where the occurrence of the predetermined event has been avoided. (Supplementary Note 4) The analysis assistance apparatus according to any one of Supplementary Notes 1 to 3, whereinin a case where the plant includes a plurality of control devices that execute the control program and a network via which the plurality of control devices are interconnected,the control program obtainment unit obtains the control program for each of the control devices, andfor each of the control devices, the safety barrier search unit searches for the safety barrier from the control program executed by the control device. 
(Supplementary Note 5) An analysis assistance method, including:(a) a step of obtaining a control program for controlling respective units of a plant based on sensor data from a sensor installed in the plant;(b) a step of obtaining event information as information necessary for searching the control program for a safety barrier for avoiding an occurrence of a predetermined event in the plant, the event information including a variable that defines a state of the plant when the predetermined event has occurred and a value thereof; and(c) a step of extracting, from the obtained control program, a causal relationship between an input variable and an output variable in the control program, and searching the control program for the safety barrier based on the variable and the value included in the obtained event information and on the extracted causal relationship. (Supplementary Note 6) The analysis assistance method according to Supplementary Note 5, further including:(d) a step of deriving a new event in which the safety barrier that has been searched for has a possibility of not functioning; and(e) a step of displaying the specified safety barrier and the derived new event on a screen. (Supplementary Note 7) The analysis assistance method according to Supplementary Note 5 or 6, whereinin the (c) step,a state transition graph is generated with use of the variable and the value included in the event information and the extracted causal relationship, the state transition graph indicating a state transition from the state defined by the variable and the value included in the event information, and including an origin point and an end point of the transition and a transition condition that causes the transition, andthe safety barrier is searched for by specifying, from the generated state transition graph, a transition to a state where the occurrence of the predetermined event has been avoided. 
(Supplementary Note 8) The analysis assistance method according to any one of Supplementary Notes 5 to 7, whereinin a case where the plant includes a plurality of control devices that execute the control program and a network via which the plurality of control devices are interconnected,in the (a) step, the control program is obtained for each of the control devices, andin the (c) step, for each of the control devices, the safety barrier is searched for from the control program executed by the control device. (Supplementary Note 9) A computer-readable recording medium in which a program is recorded, the program including an instruction that causes a computer to carry out:(a) a step of obtaining a control program for controlling respective parts of a plant based on sensor data from a sensor installed in the plant;(b) a step of obtaining event information as information necessary for searching the control program for a safety barrier for avoiding an occurrence of a predetermined event in the plant, the event information including a variable that defines a state of the plant when the predetermined event has occurred and a value thereof; and(c) a step of extracting, from the obtained control program, a causal relationship between an input variable and an output variable in the control program, and searching the control program for the safety barrier based on the variable and the value included in the obtained event information and on the extracted causal relationship. (Supplementary Note 10) The computer-readable recording medium according to Supplementary Note 9, whereinthe program further includes an instruction that causes the computer to carry out:(d) a step of deriving a new event in which the safety barrier that has been searched for has a possibility of not functioning; and(e) a step of displaying the specified safety barrier and the derived new event on a screen. 
(Supplementary Note 11) The computer-readable recording medium according to Supplementary Note 9 or 10, whereinin the (c) step,a state transition graph is generated with use of the variable and the value included in the event information and the extracted causal relationship, the state transition graph indicating a state transition from the state defined by the variable and the value included in the event information, and including an origin point and an end point of the transition and a transition condition that causes the transition, andthe safety barrier is searched for by specifying, from the generated state transition graph, a transition to a state where the occurrence of the predetermined event has been avoided. (Supplementary Note 12) The computer-readable recording medium according to any one of Supplementary Notes 9 to 11, whereinin a case where the plant includes a plurality of control devices that execute the control program and a network via which the plurality of control devices are interconnected,in the (a) step, the control program is obtained for each of the control devices, andin the (c) step, for each of the control devices, the safety barrier is searched for from the control program executed by the control device. Although the invention of the present application has been described above with reference to the example embodiment, the invention of the present application is not limited to the above-described example embodiment. Various changes that can be understood by a person skilled in the art within the scope of the invention of the present application can be made to the configuration and the details of the invention of the present application. INDUSTRIAL APPLICABILITY As described above, according to the present invention, the analysis on a control program can be assisted in a case where the safety of a plant is assessed. The present invention is useful for various types of plants that are controlled by a control device. 
REFERENCE SIGNS LIST 10analysis assistance apparatus11control program obtainment unit12event information obtainment unit13safety barrier search unit14event derivation unit15display unit16causal relationship storage unit20plant21water reservoir tank22water level sensor23supply line24water discharge line25pump26valve30PLC31terminal apparatus32network switch33engineering workstation110computer111CPU112main memory113storage device114input interface115display controller116data reader/writer117communication interface118input apparatus119display apparatus120recording medium121bus | 42,089 |
11860605 | DETAILED DESCRIPTION Described in detail herein is a mobile assembly apparatus. In one embodiment, the mobile assembly apparatus provides support for the assembly of items, such as but not limited to, commonly purchased retail products. The mobile assembly apparatus provides a cabinet with wheels attached for mobility. The cabinet also includes an expandable assembly surface. The mobile assembly apparatus further includes an articulated arm for grasping partially assembled retail products. The mobile assembly apparatus includes outrigger supports to compensate for heavier retail products grasped by the articulated arm. A power supply also is included in the mobile assembly apparatus. The mobile assembly apparatus additionally includes sensors and a computing device. The sensors may be used to detect the retail product in various states of assembly. The computing device executes an assembly validation module that determines a correct or incorrect state of assembly based on information provided by the sensors. The assembly validation module alerts the assembler to mis-assembly or correct assembly and may provide additional instructions to complete assembly of the retail product. FIG.1is a block diagram illustrating a system100supporting a mobile assembly apparatus according to an exemplary embodiment. System100includes an item identification system102and one or more databases103A,103B communicatively connected via network116with mobile assembly apparatus118. Mobile assembly apparatus118includes a computing device104, a controller106that controls positioning of outrigger supports107, one or more sensors108, a power supply110, a hardware dispenser114, a cabinet structure112, and an articulated arm116. The item identification system102provides a mechanism for the mobile assembly apparatus118to retrieve item information.
For example, the item identification system102may provide a platform for interfacing with a mobile assembly apparatus118within a retail environment. The item identification system102provides support software and hardware for the operation of the mobile assembly apparatus. The item identification system102is configured to receive a machine readable identifier associated with an item read by a sensor on the mobile assembly apparatus118, retrieve information from databases103A,103B related to the item, and provide information relating to the associated item back to the submitting mobile assembly apparatus. Mobile assembly apparatus118includes a computing device104equipped with a processor executing an assembly validation module105that communicates with the item identification system102. The assembly validation module105receives a machine readable identifier associated with an item (read by one of the sensors108), transmits the machine readable identifier to the item identification system102, and receives information from the item identification system102regarding the item associated with the machine readable identifier in return. Computing device104and/or mobile assembly apparatus118may include various input and output methods for human interaction including but not limited to video displays, touchscreens, cameras, speakers, and microphones. The assembly validation module105may communicate, directly or indirectly with a controller106that controls positioning of outrigger supports107, one or more sensors108, a hardware dispenser114and an articulated arm116. Computing device104and mobile assembly apparatus118are powered by a power supply110. The controller106includes a hardware and/or software solution for interfacing with various other components of the mobile assembly apparatus118. 
The controller106is used at the direction of the assembly validation module105for extending and retracting the outrigger supports107used to stabilize mobile assembly apparatus118based on a size and/or weight of an item being assembled. The controller106may also adjust the position of outrigger supports by transmitting commands to extend and retract the supports based on the state of assembly of the item. Additionally, in one embodiment, the controller106may be used to automatically expand work surfaces of the mobile assembly apparatus118in response to a command from the assembly validation module105. One or more sensors108are communicatively coupled to the assembly validation module105. The one or more sensors108provide inputs related to the identification of, and state of assembly of the item. The one or more sensors108include, but are not limited to, cameras, sensors capable of reading machine-readable identifiers, RFID and NFC readers, weight sensors, accelerometers, thermometers, and vibration sensors. The one or more sensors108provide input to the assembly validation module105used to determine the state of assembly of the item. For example, cameras provide images or video depicting the item in a state of assembly as input for the assembly validation module105to use in determining the state of assembly of the item. The camera images may enable the assembly validation module105to recognize the individual parts based on their size, color and assembly markings (part b, part1a, etc.). The parts may also employ digital watermarking that, in one embodiment, is only visible to the camera. In one embodiment, the watermarking may have orientation markings to help identify whether the part was installed the right way, backwards, upside down or is an incorrect part. Another sensor that may assist the assembly validation module105to detect the correctness of an assembly is a weight sensor. As more parts are added in sequence, the assembled item will get heavier.
A check at the end of each step compared to a reference weight can determine if enough parts have been used and if the correct parts have been used. When the weight of the assembled item is too heavy or too light, a wrong assembly is indicated. Another sensor that may aid the assembly validation module105to detect the correctness of an assembly is an ultrasound sensor. At the end of each step, ultrasound may be used to create a three dimensional representation of what has been assembled so far. After a check with a reference in the database, if there is deviation from the reference, then a wrong assembly and/or wrong parts have been used. The ultrasound may also identify the remaining parts and, if the wrong parts remain, a wrong assembly may be indicated. A power supply110is included in the mobile assembly apparatus. The power supply110provides electrical power to electrical devices on the mobile assembly apparatus such as the computing device104, the controller106, the outrigger supports107, the one or more sensors108, the hardware dispenser114and the articulated arm116. In one non-limiting example, the power supply110includes a rechargeable battery configured to provide electricity to the computing device, the sensors and electrical assembly tools. Alternatively, the power supply provides electrical power through a corded electrical interface designed to connect to standard power outlets. The power supply110provides support for converting the alternating current of a standard power outlet to the direct current utilized by electronic devices. A hardware dispenser114may be coupled to the cabinet structure112and configured to store hardware components needed for the assembly of items. In one embodiment, cabinet structure112may include a frame having an interior, an expandable work surface that is configured to extend and retract, and wheels that are disposed at distal corners of a bottom side of the cabinet structure.
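As a non-limiting illustration, the per-step weight check described above, where the partially assembled item is weighed at the end of each step and compared with a reference weight, can be sketched as below; the reference weights and tolerance are hypothetical values, not data from the disclosure.

```python
def check_step_weight(measured_g, reference_g, tolerance_g=5.0):
    """Compare the weighed partial assembly against the reference weight
    for the current step; deviation beyond the tolerance indicates a
    wrong assembly."""
    delta = measured_g - reference_g
    if delta > tolerance_g:
        return "too_heavy"   # too many parts, or a wrong (heavier) part
    if delta < -tolerance_g:
        return "too_light"   # a missing part, or a wrong (lighter) part
    return "ok"

# Hypothetical reference weights after steps 1-3 of some item.
references_g = [350.0, 520.0, 815.0]
print(check_step_weight(352.0, references_g[0]))  # within tolerance
print(check_step_weight(420.0, references_g[0]))  # deviation: wrong assembly
```

An analogous comparison against a stored three dimensional reference could serve the ultrasound check, with a shape-deviation metric in place of the weight delta.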
The hardware dispenser114may be communicatively coupled to the computing device104and the assembly validation module105. The hardware dispenser114stores hardware containment pods (packages) of commonly used hardware components utilized in assembling items. For example, if many items require a #8 screw for assembly, a pod containing #8 screws is inserted into the hardware dispenser114. Based on the size of the pod inserted into the hardware dispenser114, the hardware dispenser reports a number of that specific part inventoried in the hardware dispenser to the assembly validation module105. The hardware dispenser114is configured to release assembly hardware corresponding to the hardware containment pods based on the received instructions from the assembly validation module105. For example, if the item identified via a machine-readable identifier or other means is identified as a chair by the identification system102, the instructions for the chair are retrieved by the identification system and forwarded to the assembly validation module105. If a chair assembly requires four #8 screws, the assembly validation module105instructs the hardware dispenser114to release four #8 screws. The assembly validation module105instructs the hardware dispenser114to dispense all of the necessary hardware for a complete assembly of an item at once, or the assembly validation module105releases the hardware as a subset of assembly hardware based on the received instructions after a previous step of assembly of the item has been verified as completed by the assembly validation module105. A network116connects the item identification system102with the computing device104included in the mobile assembly apparatus. The network116can be a local area network (LAN), a wide area network (WAN), or the internet. The network116facilitates communication between the devices utilizing common transport layer protocols such as transmission communication protocol (TCP) and user datagram protocol (UDP).
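A minimal sketch of the hardware dispenser's bookkeeping, assuming a simple part-name-to-count inventory model; the class name, pod sizes, and part names are illustrative assumptions rather than the disclosed dispenser implementation.

```python
class HardwareDispenser:
    """Minimal model of the dispenser: pods of common parts, plus a
    per-step release at the direction of the assembly validation
    module."""

    def __init__(self, pods):
        self.inventory = dict(pods)   # part name -> pieces loaded

    def release(self, request):
        """Dispense the subset of hardware the current step needs."""
        for part, count in request.items():
            if self.inventory.get(part, 0) < count:
                raise ValueError("insufficient " + part)
        for part, count in request.items():
            self.inventory[part] -= count
        return dict(request)

dispenser = HardwareDispenser({"#8 screw": 20})
released = dispenser.release({"#8 screw": 4})  # the chair step needs four
print(dispenser.inventory["#8 screw"])         # remaining stock in the pod
```

The same interface accommodates either dispensing mode described above: one `release` call for the complete bill of hardware, or one call per verified assembly step.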
The data utilized to support the mobile assembly apparatus is contained in the application layer portion of the network protocol stack. The databases103A,103B hold indexed and retrievable data relating to the items that are to be assembled. The databases103A,103B are communicatively coupled with the item identification system102and are able to receive queries from the item identification system102for specific items. Upon a query by the item identification system102, the databases103A,103B provide information relating to the item. For example, the query may be triggered by a scan of a machine-readable identifier associated with an item. The information includes descriptions, photographs, assembly instructions, and data used to verify the assembly of the item. FIGS.2A and2Bare diagrams illustrating a mobile assembly apparatus according to an exemplary embodiment. The mobile assembly apparatus118includes a cabinet202, an articulated arm204, wheels206, outrigger supports208, and an extendable surface210. The cabinet202provides a frame or support structure for the other components of the mobile assembly apparatus118. The frame of the cabinet202includes an interior space configured to hold tools related to the assembly of an item. Tools can include but are not limited to screwdrivers, wrenches, power tools (e.g. drills, saws), and air compressors. The cabinet202may include discrete drawers internal to the cabinet for the storage of the tools. An articulated arm204is coupled to the top of the cabinet. The articulated arm204includes one or more joints, allowing positioning of the arm to accommodate the assembly of items of different sizes and shapes. The joints are able to be individually manipulated to change the positioning of the articulated arm204. The joints include locking mechanisms to hold the articulated arm204in a desired position. The articulated arm204is configured to include a vice, claw, or similar grasping device to hold the item during assembly.
In one embodiment the articulated arm may be manually adjusted. In another embodiment, the articulated arm may be controlled by the controller pursuant to direction by the assembly validation module. A set of wheels206are affixed to the bottom of the cabinet202. The set of wheels206may be installed at distal corners of the bottom of the cabinet to provide stability during transport of the mobile assembly apparatus118. Each of the wheels206includes locking mechanisms to secure the mobile assembly apparatus118in one location during the assembly of an item. A set of outrigger supports208are coupled to the frame and disposed at distal corners of the frame. The outrigger supports are configured to extend from the frame and retract to the frame. The set of outrigger supports208are configured to extend and retract from the cabinet202. As an assembler of the item progresses in assembly, the set of outrigger supports208extend and retract independently from one another to compensate for the weight of any item grasped by the articulated arm204. In one embodiment the outrigger supports may be manually adjusted. In another embodiment, the outrigger supports may be controlled by the controller pursuant to direction by the assembly validation module. An extendable surface210may be coupled to the top surface of the cabinet202. The extendable surface210is configured to extend or retract onto the surface of the cabinet202in a compact position. The compact position is utilized during transport of the mobile assembly apparatus118as well as when assembling smaller items. The extendable surface210is actuated to fold open from the sides of the cabinet202, to provide a larger work surface for the assembly of larger items. The extendable surface210may be configured to fold open from one or more sides of the cabinet202. In one embodiment the extendable surface may be manually adjusted.
In another embodiment, the extendable surface may be controlled by the controller pursuant to direction by the assembly validation module. FIG.3is a flow diagram300illustrating a process of assembling an item on a mobile assembly apparatus according to an exemplary embodiment. The flow diagram300presents a typical use case for the assembly of an item on the mobile assembly apparatus118according to one embodiment. At step302, the assembly validation module105receives a scan of an item. The scan is of a machine readable identifier such as, but not limited to, a barcode, Universal Product Code (UPC), or a quick response (QR) code. Alternatively, the scan is a scan of a near field communication (NFC) technology such as a radio frequency identifier (RFID) tag. The scan is read through the set of sensors108. The set of sensors108may include an NFC transceiver and a RFID reader. The machine readable identifier is affixed to or printed on the packaging of the item to be assembled and associates the machine readable identifier with the item. The scanned machine readable identifier is transmitted to the item identification system102. In one embodiment the machine-readable identifier is transmitted to the identification system using an application programming interface (API). Upon receipt of the machine readable identifier, the item identification system102accesses the databases103A,103B to retrieve information related to the item associated with the machine readable identifier. The item identification system102transmits the retrieved information including assembly instructions to the assembly validation module105. At step304, the assembly validation module105retrieves an assembly instruction step from the received information. The assembly validation module105decodes the retrieved information. The retrieved information includes but is not limited to descriptions, photos, assembly instruction steps, and assembly verification data.
The descriptions may include descriptions of the manufacturer of the retail item, the item's purpose, and photos of the item. Assembly instruction steps may include photographs, graphical animations, mobile assembly apparatus configuration data (e.g. outrigger support control information), workspace configuration information and written instructions on how to assemble the item. The assembly verification data may include, but is not limited to, imaging and weight data for comparison. The assembly validation module105retrieves a current assembly instruction step, and presents the assembly instruction step visually or audibly through a display surface or speaker integrated into, or otherwise accessible to the mobile assembly apparatus118. The assembler may then follow the instruction step. At step306, the assembly validation module105queries sensors to determine assembly state. The sensors may provide imaging data and/or weight data corresponding to the partially assembled item. At step308, the assembly validation module105determines whether the assembly is correct. The assembly validation module105utilizes the assembly verification data in conjunction with the imaging data and weight data from the sensors to determine whether the item was assembled correctly. The assembly validation module105compares images in the assembly verification data against the imaging data to determine whether any differences between the images of the assembled item remain within an acceptable threshold. Additionally, the assembly validation module105may compare weight data within the verification data to the weight data corresponding to the partially assembled item to determine whether any differences between the weights of the assembled item remain within an acceptable threshold. At step310, the assembly validation module105issues a notification that the assembly was incorrect, when any deviation is outside an acceptable threshold.
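The comparisons of step308can be sketched as follows; the image-similarity score (0 to 1, with 1 meaning identical to the reference image) and the tolerance values are assumptions standing in for whatever comparison metric the module actually applies.

```python
def verify_assembly_step(observed, reference,
                         weight_tol_g=5.0, min_similarity=0.9):
    """A step verifies as correct when both the image comparison and the
    weight comparison remain within their acceptable thresholds."""
    weight_ok = abs(observed["weight_g"] - reference["weight_g"]) <= weight_tol_g
    image_ok = observed["image_similarity"] >= min_similarity
    return weight_ok and image_ok

# Hypothetical per-step reference and two sensor readings.
ref = {"weight_g": 500.0}
print(verify_assembly_step({"weight_g": 498.0, "image_similarity": 0.95}, ref))
print(verify_assembly_step({"weight_g": 530.0, "image_similarity": 0.95}, ref))
```

A `False` result here corresponds to the notification of step310, while a `True` result lets the module proceed to the completeness check of step312.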
The notification is a visual notification such as an icon or image rendered on a display of the assembly validation module105. Alternatively, the notification is an auditory notification indicating a misassembled item. At step312, if this step was validated as correct, the assembly validation module105determines whether the assembly is complete. The assembly validation module105determines whether there are remaining steps for assembly from the retrieved information related to the item. At step314, when the assembly is not complete, the assembly validation module105retrieves and displays the next assembly step, and iterates over the remaining assembly steps. At step316, when the assembly is complete, the assembly validation module105presents a notification upon completion of the assembly of the item. Additionally, the assembly validation module105notifies the item identification system102of the complete assembly. The item identification system102may update the databases103A,103B utilizing the machine readable identifier to update a record indicating that the item has been successfully assembled at the mobile assembly apparatus118. FIG.4is a flow diagram400illustrating a process of assembling an item on a mobile assembly apparatus according to an exemplary embodiment. The flow diagram400demonstrates a process of dynamically preparing the mobile assembly apparatus118for the assembly of an item. As described above at step302, the assembly validation module105receives a scan of an item at step402. Additionally, as described above at step304, at step404the assembly validation module105retrieves assembly instructions. The assembly validation module105may parse the mobile assembly apparatus configuration data from the assembly instructions into steps. The mobile assembly apparatus configuration data may include dimensions of the fully assembled item, which can be compared against the assembly area of the mobile assembly apparatus118.
Additionally, the assembly instructions may also include information indicating the size of the workspace surface needed for assembly. At step406, the assembly validation module105determines whether an extended surface is necessary for assembly of the item. The assembly validation module105compares the dimensions of the item against the size of the extendable surface; alternatively, the assembly validation module may utilize information, already determined, indicating that the item requires a full or partially extended surface for assembly. At step408, following a determination of need, the assembly validation module105may send instructions to the controller to deploy an extendable surface. The assembly validation module105instructs the controller106to activate electro-mechanical devices including but not limited to motors to extend the extendable surface. Alternatively, the assembly validation module may issue a visual or audible notification to the assembler to extend the surface manually. At step410, the assembly validation module105determines whether a counterbalance is needed to support the mobile apparatus requiring a repositioning of the outrigger supports. The assembly validation module105utilizes the characteristics (e.g. weight) of the item as applied to the articulated arm204compared against the weight and center of gravity of the mobile assembly apparatus. If the weight of the item would cause the center of gravity of the mobile assembly apparatus to shift thereby causing the mobile assembly apparatus118to become unstable, then the assembly validation module105may transmit instructions to the controller to reposition the outrigger supports to move to another location to provide more stability.
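The counterbalance determination of step410can be sketched with a one-axis center-of-gravity check; the one-dimensional geometry, masses, and wheelbase figures are simplifying assumptions for illustration only.

```python
def combined_center_of_gravity(masses):
    """masses: list of (mass_kg, x_m) pairs along one axis of the cart,
    with x measured from the center of the wheelbase."""
    total = sum(m for m, _ in masses)
    x = sum(m * x for m, x in masses) / total
    return total, x

def needs_outriggers(cart_mass_kg, cart_cog_x_m, item_mass_kg, arm_x_m,
                     support_half_width_m):
    """Extend the outriggers when the combined center of gravity of the
    cart plus the item held by the arm falls outside the wheelbase."""
    _, cog_x = combined_center_of_gravity(
        [(cart_mass_kg, cart_cog_x_m), (item_mass_kg, arm_x_m)])
    return abs(cog_x) > support_half_width_m

# An 80 kg cart centered over its wheels; a 25 kg item held 0.9 m out on
# the arm shifts the combined center of gravity past a 0.2 m half
# wheelbase, so a counterbalance is required.
print(needs_outriggers(80.0, 0.0, 25.0, 0.9, 0.2))
```

A real determination would be two-dimensional (the combined center of gravity against the full support polygon of wheels plus extended outriggers), but the threshold structure is the same.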
The assembly validation module 105 instructs the controller 106 to activate electro-mechanical devices, including but not limited to motors, to extend one or more outrigger supports to shift the center of gravity of the mobile assembly apparatus plus the item to a stable point. Alternatively, the assembly validation module may issue a visual or audible notification to the assembler to extend the outrigger supports manually. At step 412, the one or more outrigger supports are deployed to provide a counterbalance. At step 414, the assembly validation module 105 presents assembly instructions to the assembler. The assembly validation module 105 presents assembly instructions through a display device, through auditory prompts, or through video projections. FIG. 5 is a flow diagram 500 illustrating a process of assembling an item on a mobile assembly apparatus according to an exemplary embodiment. At step 502, the assembly validation module 105 receives scan data from a sensor of a machine-readable identifier associated with an item being assembled. At step 504, the data is sent to the item identification system as discussed above, so the item can be identified. For example, the identity of an item to be assembled may be determined based on indexing the scan data into a database. At step 506, the assembly validation module 105 receives assembly instructions for the identified item from the item identification system. Based on the indexing or querying, the assembly instructions for the identified item may be retrieved from a database. The instructions may include auditory or visual instructions on how to assemble pieces or sub-assemblies of the item. At step 508, the assembly validation module 105 sends instructions to at least one controller to adjust a position of a set of outrigger supports based on the identification of the item.
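The stability determination behind the outrigger repositioning can be sketched as a simple one-dimensional center-of-gravity check. The formula and all names below are illustrative assumptions; the patent does not specify a particular computation.

```python
def needs_counterbalance(apparatus_weight, apparatus_cg_x,
                         item_weight, item_arm_x,
                         base_half_width, safety_margin=0.1):
    """Return True when placing the item on the articulated arm would shift
    the combined center of gravity outside the stable footprint, so the
    outrigger supports should be repositioned.

    Positions are distances (e.g., meters) from the center of the base;
    weights can be in any consistent unit.
    """
    total_weight = apparatus_weight + item_weight
    combined_cg_x = (apparatus_weight * apparatus_cg_x
                     + item_weight * item_arm_x) / total_weight
    return abs(combined_cg_x) > (base_half_width - safety_margin)
```

Under these assumed numbers, a 100 kg apparatus centered over a 0.5 m half-width base stays stable holding a 20 kg item 2 m out (combined center of gravity near 0.33 m), while a 30 kg item at the same reach pushes it past the 0.4 m stable limit and triggers outrigger deployment.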
Upon retrieving the assembly instructions for the item, a controller may be activated to extend one or more outrigger supports from the frame of the mobile assembly apparatus. As described above, the determination may be embedded in the instructions explicitly, or alternatively, a computing device may determine a set of outrigger supports to extend based on the product dimensions and the center of gravity of the mobile assembly apparatus. At step 510, the assembly validation module 105 sends instructions to at least one controller to adjust a position of an expandable work surface based on the identification of the item. As described above, the controller adjusts the expandable work surface based on the dimensions of the item, or alternatively, based on the instructions themselves. At step 512, the assembly validation module 105 monitors progress of an assembly of the item using assembly data received from the sensors related to a step of assembly. The assembly validation module may monitor progress of the assembly by taking sensor readings, including imaging and weight readings. The computing device scans intermediate states of assembly of the item. The assembly validation module 105 determines a subset of assembly hardware required to complete a next step of assembly. The assembly validation module 105 transmits instructions to dispense, from the hardware dispenser 114, the subset of assembly hardware needed for the next assembly step. The assembly validation module 105 may inventory the hardware within the hardware dispenser. The assembly validation module 105 may also compare the number of hardware components required to assemble an item against the number of components within the hardware dispenser. The assembly validation module 105 may present a notification indicating insufficient hardware to complete assembly of the item when such a condition is noted.
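The inventory comparison described for the hardware dispenser 114 can be sketched as follows. The shortfall computation is one plausible reading of the comparison, not the patent's stated algorithm.

```python
from collections import Counter

def hardware_shortfall(required, on_hand):
    """Compare hardware needed for the assembly against dispenser inventory.

    Both arguments map a part name to a quantity. Returns a dict of parts
    that are short and by how many; an empty dict means the dispenser holds
    sufficient hardware to complete the assembly.
    """
    need, have = Counter(required), Counter(on_hand)
    return {part: qty - have[part]
            for part, qty in need.items() if have[part] < qty}
```

An empty result clears the step, while a non-empty result would drive the insufficient-hardware notification described above.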
At step 514, the assembly validation module 105 validates the step of assembly based on the assembly data and the assembly instructions. The assembly validation module 105 compares the current state of assembly of the item to a reference state of the identified item contained in the verification data. The imaging and weight readings for the assembly step are compared against known values in the verification data. If every discrepancy in the comparison results in a deviation lower than a threshold, the step of assembly is validated. The assembly validation module 105 may present a notification indicating a delta relative to the deviation threshold. FIG. 6 is a block diagram of an example computing device for implementing exemplary embodiments of the present disclosure. Embodiments of the computing device 600 can implement embodiments of the mobile assembly apparatus. For example, the computing device 600 can be embodied as the computing device 104 or the item identification system 102. The computing device 600 includes one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing exemplary embodiments. The non-transitory computer-readable media include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more flash drives, one or more solid state disks), and the like. For example, memory 606 included in the computing device 600 can store computer-readable and computer-executable instructions or software for implementing exemplary operations of the computing device 600.
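The deviation-threshold comparison at step 514 can be sketched as below. The per-sensor threshold structure and all names are assumptions for illustration, not the patent's implementation.

```python
def validate_assembly_step(readings, reference, thresholds):
    """Compare sensor readings (e.g., weight, image-derived dimensions) for
    an assembly step against reference values from the verification data.

    The step is validated when every deviation is lower than its threshold.
    Also returns each delta relative to its threshold, which can back the
    notification indicating how close the step came to the limit.
    """
    deltas = {key: abs(readings[key] - reference[key]) for key in reference}
    validated = all(deltas[key] < thresholds[key] for key in reference)
    relative = {key: deltas[key] / thresholds[key] for key in reference}
    return validated, relative
```

A reading of 12.1 against a 12.0 reference with a 0.5 threshold validates the step with a relative delta of 0.2, whereas a reading of 13.0 exceeds the threshold and fails validation.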
The computing device 600 also includes configurable and/or programmable processor 602 and associated core(s) 604, and optionally, one or more additional configurable and/or programmable processor(s) 602′ and associated core(s) 604′ (for example, in the case of computer systems having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 606 and other programs for implementing exemplary embodiments of the present disclosure. Processor 602 and processor(s) 602′ can each be a single core processor or multiple core (604 and 604′) processor. Either or both of processor 602 and processor(s) 602′ are configured to execute one or more of the instructions described in connection with computing device 600. Virtualization can be employed in the computing device 600 so that infrastructure and resources in the computing device 600 can be shared dynamically. A virtual machine 612 can be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines can also be used with one processor. Memory 606 includes a computer system memory or random access memory, such as DRAM, SRAM, EDO RAM, and the like. Memory 606 can include other types of memory as well, or combinations thereof. The computing device 600 can receive data from input/output devices. A user can interact with the computing device 600 through a visual display device 614, such as a computer monitor, which can display one or more graphical user interfaces 616, a multi-touch interface 620, and a pointing device 618. The computing device 600 also includes one or more storage devices 626, such as a hard-drive, CD-ROM, or other computer readable media, for storing data and computer-readable instructions and/or software that implement exemplary embodiments of the present disclosure.
For example, exemplary storage device 626 can include one or more applications 630 and one or more databases 628 for storing information relating to items and assembly instructions. The databases 628 can be updated manually or automatically at any suitable time to add, delete, and/or update one or more data items in the databases. The computing device 600 can include a network interface 608 configured to interface via one or more network devices 624 with one or more networks, for example, a Local Area Network (LAN), Wide Area Network (WAN), or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56 kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above. In exemplary embodiments, the computing system can include one or more antennas 622 to facilitate wireless communication (e.g., via the network interface) between the computing device 600 and a network and/or between the computing device 600 and other computing devices. The network interface 608 can include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem, or any other device suitable for interfacing the computing device 600 to any type of network capable of communication and performing the operations described herein. The computing device 600 can run any operating system 610, such as any of the versions of the Microsoft® Windows® operating systems, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of running on the computing device 600 and performing the operations described herein.
In exemplary embodiments, the operating system 610 can be run in native mode or emulated mode. In an exemplary embodiment, the operating system 610 can be run on one or more cloud machine instances. In describing exemplary embodiments, specific terminology is used for the sake of clarity. For purposes of description, each specific term is intended to at least include all technical and functional equivalents that operate in a similar manner to accomplish a similar purpose. Additionally, in some instances where a particular exemplary embodiment includes multiple system elements, device components, or method steps, those elements, components, or steps can be replaced with a single element, component, or step. Likewise, a single element, component, or step can be replaced with multiple elements, components, or steps that serve the same purpose. Moreover, while exemplary embodiments have been shown and described with references to particular embodiments thereof, those of ordinary skill in the art will understand that various substitutions and alterations in form and detail can be made therein without departing from the scope of the present disclosure. Further still, other aspects, functions, and advantages are also within the scope of the present disclosure. Exemplary flowcharts are provided herein for illustrative purposes and are non-limiting examples of methods. One of ordinary skill in the art will recognize that exemplary methods can include more or fewer steps than those illustrated in the exemplary flowcharts and that the steps in the exemplary flowcharts can be performed in a different order than the order shown in the illustrative flowcharts.
11860606

When practical, similar reference numbers denote similar structures, features, or elements.

DETAILED DESCRIPTION

The details of one or more variations of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features and advantages of the subject matter described herein will be apparent from the description and drawings, and from the claims. While certain features of the currently disclosed subject matter may be described for illustrative purposes in relation to using machine-vision for aiding automated manufacturing processes (e.g., a CNC process), it should be readily understood that such features are not intended to be limiting. As used herein, the term “cutting” can generally refer to altering the appearance, properties, and/or state of a material. Cutting can include, for example, making a through-cut, engraving, bleaching, curing, burning, etc. Engraving, when specifically referred to herein, indicates a process by which a CNC machine modifies the appearance of the material without fully penetrating it. For example, in the context of a laser cutter, it can mean removing some of the material from the surface, or discoloring the material, e.g., through an application of focused electromagnetic radiation delivering electromagnetic energy as described below. As used herein, the term “laser” includes any electromagnetic radiation or focused or coherent energy source that (in the context of being a cutting tool) uses photons to modify a substrate or cause some change or alteration upon a material impacted by the photons. Lasers (whether cutting tools or diagnostic) can be of any desired wavelength, including, for example, microwave lasers, infrared lasers, visible lasers, UV lasers, X-ray lasers, gamma-ray lasers, or the like.
Also, as used herein, “cameras” includes, for example, visible light cameras, black and white cameras, IR- or UV-sensitive cameras, individual brightness sensors such as photodiodes, sensitive photon detectors such as a photomultiplier tube or avalanche photodiodes, detectors of radiation far from the visible spectrum such as microwaves, X-rays, or gamma rays, optically filtered detectors, spectrometers, and other detectors that can include sources providing electromagnetic radiation for illumination to assist with acquisition, for example, flashes, UV lighting, etc. Also, as used herein, reference to “real-time” actions includes some degree of delay or latency, either programmed intentionally into the actions or as a result of the limitations of machine response and/or data transmission. “Real-time” actions, as used herein, are intended to only approximate an instantaneous response, or a response performed as quickly as possible given the limits of the system, and do not imply any specific numeric or functional limitation to response times or the machine actions resulting therefrom. Also, as used herein, unless otherwise specified, the term “material” is the material that is on the bed of the CNC machine. For example, if the CNC machine is a laser cutter, lathe, or milling machine, the material is what is placed in the CNC machine to be cut, for example, the raw materials, stock, or the like. In another example, if the CNC machine is a 3D printer, then the material is either the current layer, or previously existent layers or substrate, of an object being crafted by the 3D printing process. In yet another example, if the CNC machine is a printer, then the material can be the paper onto which the CNC machine deposits ink.

Introduction

A computer numerical controlled (CNC) machine is a machine that is used to add or remove material under the control of a computer.
There can be one or more motors or other actuators that move one or more heads that perform the adding or removing of material. For CNC machines that add material, heads can incorporate nozzles that spray or release polymers as in a typical 3D printer. In some implementations, the heads can include an ink source such as a cartridge or pen. In the case of 3D printing, material can be built up layer by layer until a fully realized 3D object has been created. In some implementations, the CNC machine can scan the surface of a material such as a solid, a liquid, or a powder, with a laser to harden or otherwise change the material properties of said material. New material may be deposited. The process can be repeated to build successive layers. For CNC machines that remove material, the heads can incorporate tools such as blades on a lathe, drag knives, plasma cutters, water jets, bits for a milling machine, a laser for a laser cutter/engraver, etc. FIG. 1 is an elevational view of a CNC machine 100 with a camera positioned to capture an image of an entire material bed 150 and another camera positioned to capture an image of a portion of the material bed 150, consistent with some implementations of the current subject matter. FIG. 2 is a top view of the implementation of the CNC machine 100 shown in FIG. 1. The CNC machine 100 shown in FIG. 1 corresponds to one implementation of a laser cutter. While some features are described in the context of a laser cutter, this is by no means intended to be limiting. Many of the features described below can be implemented with other types of CNC machines. The CNC machine 100 can be, for example, a lathe, engraver, 3D-printer, milling machine, drill press, saw, etc. While laser cutter/engravers share some common features with CNC machines, they have many differences and present particularly challenging design constraints.
A laser cutter/engraver is subject to regulatory guidelines that restrict the egress of electromagnetic radiation from the unit when operating, making it challenging for light to enter or escape the unit safely, for example to view or record an image of the contents. The beam of a laser cutter/engraver must be routed from the emitter to the area to be machined, potentially requiring a series of optical elements such as lenses and mirrors. The beam of a laser cutter/engraver is easily misdirected, with a small angular deflection of any component relating to the beam path potentially resulting in the beam escaping the intended path, potentially with undesirable consequences. A laser beam may be capable of causing material destruction if uncontrolled. A laser cutter/engraver may require high voltage and/or radio frequency power supplies to drive the laser itself. Liquid cooling is common in laser cutter/engravers to cool the laser, requiring fluid flow considerations. Airflow is important in laser cutter/engraver designs, as air may become contaminated with byproducts of the laser's interaction with the material such as smoke, which may in turn damage portions of the machine for example fouling optical systems. The air exhausted from the machine may contain undesirable byproducts such as smoke that must be routed or filtered, and the machine may need to be designed to prevent such byproducts from escaping through an unintended opening, for example by sealing components that may be opened. Unlike most machining tools, the kerf—the amount of material removed during the operation—is both small and variable depending on the material being processed, the power of the laser, the speed of the laser, and other factors, making it difficult to predict the final size of the object. 
Also unlike most machining tools, the output of the laser cutter/engraver is very highly dependent on the speed of operation; a momentary slowing can destroy the workpiece by depositing too much laser energy. In many machining tools, operating parameters such as tool rotational speed and volume of material removed are easy to continuously predict, measure, and calculate, while laser cutter/engravers are more sensitive to material and other conditions. In many machining tools, fluids are used as coolant and lubricant; in laser cutter/engravers, the cutting mechanism does not require physical contact with the material being affected, and air or other gasses may be used to aid the cutting process in a different manner, by facilitating combustion or clearing debris, for example. The CNC machine 100 can have a housing surrounding an enclosure or interior area defined by the housing. The housing can include walls, a bottom, and one or more openings to allow access to the CNC machine 100, etc. There can be a material bed 150 that can include a top surface on which the material 140 generally rests. In the implementation of FIG. 1, the CNC machine can also include an openable barrier as part of the housing to allow access between an exterior of the CNC machine and an interior space of the CNC machine. The openable barrier can include, for example, one or more doors, hatches, flaps, and the like that can actuate between an open position and a closed position. The openable barrier can attenuate the transmission of light between the interior space and the exterior when in a closed position. Optionally, the openable barrier can be transparent to one or more wavelengths of light or be comprised of portions of varying light attenuation ability. One type of openable barrier can be a lid 130 that can be opened or closed to put material 140 on the material bed 150 on the bottom of the enclosure. Various example implementations discussed herein include reference to a lid.
It will be understood that absent explicit disclaimers of other possible configurations of the openable barrier or some other reason why a lid cannot be interpreted generically to mean any kind of openable barrier, the use of the term lid is not intended to be limiting. One example of an openable barrier can be a front door that is normally vertical when in the closed position and can open horizontally or vertically to allow additional access. There can also be vents, ducts, or other access points to the interior space or to components of the CNC machine 100. These access points can be for access to power, air, water, data, etc. Any of these access points can be monitored by cameras, position sensors, switches, etc. If they are accessed unexpectedly, the CNC machine 100 can execute actions to maintain the safety of the user and the system, for example, a controlled shutdown. In other implementations, the CNC machine 100 can be completely open (i.e., not having a lid 130 or walls). Any of the features described herein can also be present in an open configuration, where applicable. As described above, the CNC machine 100 can have one or more movable heads that can be operated to alter the material 140. In some implementations, for example the implementation of FIG. 1, the movable head can be the head 160. There may be multiple movable heads, for example two or more mirrors that separately translate or rotate in order to locate a laser beam, or multiple movable heads that operate independently, for example two mill bits in a CNC machine capable of separate operation, or any combination thereof. In the case of a laser-cutter CNC machine, the head 160 can include optical components, mirrors, cameras, and other electronic components used to perform the desired machining operations. Again, as used herein, the head 160 typically is a laser-cutting head, but can be a movable head of any type.
The head 160, in some implementations, can be configured to include a combination of optics, electronics, and mechanical systems that can, in response to commands, cause a laser beam or electromagnetic radiation to be delivered to cut or engrave the material 140. The CNC machine 100 can also execute operation of a motion plan for causing movement of the movable head. As the movable head moves, the movable head can deliver electromagnetic energy to effect a change in the material 140 that is at least partially contained within the interior space. In one implementation, the position and orientation of the optical elements inside the head 160 can be varied to adjust the position, angle, or focal point of a laser beam. For example, mirrors can be shifted or rotated, lenses translated, etc. The head 160 can be mounted on a translation rail 170 that is used to move the head 160 throughout the enclosure. In some implementations the motion of the head can be linear, for example on an X axis, a Y axis, or a Z axis. In other implementations, the head can combine motions along any combination of directions in a rectilinear, cylindrical, or spherical coordinate system. A working area for the CNC machine 100 can be defined by the limits within which the movable head can cause delivery of a machining action, or delivery of a machining medium, for example electromagnetic energy. The working area can be inside the interior space defined by the housing. It should be understood that the working area can be a generally three-dimensional volume and not a fixed surface. For example, if the range of travel of a vertically oriented laser cutter is a 10″×10″ square entirely over the material bed 150, and the laser beam comes out of the laser cutter at a height of 4″ above the material bed of the CNC machine, that 400 in³ volume can be considered to be the working area.
Restated, the working area can be defined by the extents of positions in which material 140 can be worked by the CNC machine 100, and not necessarily tied or limited by the travel of any one component. For example, if the head 160 could turn at an angle, then the working area could extend in some direction beyond the travel of the head 160. By this definition, the working area can also include any surface, or portion thereof, of any material 140 placed in the CNC machine 100 that is at least partially within the working area, if that surface can be worked by the CNC machine 100. Similarly, for oversized material, which may extend even outside the CNC machine 100, only part of the material 140 might be in the working area at any one time. The translation rail 170 can be any sort of translating mechanism that enables movement of the head 160 in the X-Y direction, for example a single rail with a motor that slides the head 160 along the translation rail 170, a combination of two rails that move the head 160, a combination of circular plates and rails, a robotic arm with joints, etc. Components of the CNC machine 100 can be substantially enclosed in a case or other enclosure. The case can include, for example, windows, apertures, flanges, footings, vents, etc. The case can also contain, for example, a laser, the head 160, optical turning systems, cameras, the material bed 150, etc. To manufacture the case, or any of its constituent parts, an injection-molding process can be performed. The injection-molding process can be performed to create a rigid case in a number of designs. The injection molding process may utilize materials with useful properties, such as strengthening additives that enable the injection molded case to retain its shape when heated, or absorptive or reflective elements, coated on the surface or dispersed throughout the material for example, that dissipate or shield the case from laser energy.
As an example, one design for the case can include a horizontal slot in the front of the case and a corresponding horizontal slot in the rear of the case. These slots can allow oversized material to be passed through the CNC machine 100. Optionally, there can be an interlock system that interfaces with, for example, the openable barrier, the lid 130, door, and the like. Such an interlock is required by many regulatory regimes under many circumstances. The interlock can then detect a state of opening of the openable barrier, for example, whether a lid 130 is open or closed. In some implementations, an interlock can prevent some or all functions of the CNC machine 100 while an openable barrier, for example the lid 130, is in the open state (e.g., not in a closed state). The reverse can be true as well, meaning that some functions of the CNC machine 100 can be prevented while in a closed state. There can also be interlocks in series where, for example, the CNC machine 100 will not operate unless both the lid 130 and the front door are both closed. Furthermore, some components of the CNC machine 100 can be tied to states of other components of the CNC machine, such as not allowing the lid 130 to open while the laser is on, a movable component moving, a motor running, sensors detecting a certain gas, etc. In some implementations, the interlock can prevent emission of electromagnetic energy from the movable head when detecting that the openable barrier is not in the closed position.

Converting Source Files to Motion Plans

A traditional CNC machine accepts a user drawing, acting as a source file that describes the object the user wants to create or the cuts that a user wishes to make.
Examples of source files are: 1) .STL files that define a three-dimensional object that can be fabricated with a 3D printer or carved with a milling machine, 2) .SVG files that define a set of vector shapes that can be used to cut or draw on material, 3) .JPG files that define a bitmap that can be engraved on a surface, and 4) CAD files or other drawing files that can be interpreted to describe the object or operations similarly to any of the examples above. FIG. 3A is a diagram illustrating one example of an SVG source file 310, consistent with some implementations of the current subject matter. FIG. 3B is an example of a graphical representation 320 of the cut path 330 in the CNC machine, consistent with some implementations of the current subject matter. FIG. 3C is a diagram illustrating the machine file 340 that would result in a machine creating the cut path 330, created from the source file 310, consistent with some implementations of the current subject matter. The example source file 310 represents a work surface that is 640×480 units with a 300×150 unit rectangle whose top left corner is located 100 units to the right and 100 units down from the top-left corner of the work surface. A computer program can then convert the source file 310 into a machine file 340 that can be interpreted by the CNC machine 100 to take the actions illustrated in FIG. 3B. The conversion can take place on a local computer where the source files reside, on the CNC machine 100, etc. The machine file 340 describes the idealized motion of the CNC machine 100 to achieve the desired outcome. Take, for example, a 3D printer that deposits a tube-shaped string of plastic material. If the source file specifies a rectangle then the machine file can instruct the CNC machine to move along a snakelike path that forms a filled-in rectangle, while extruding plastic. The machine file can omit some information, as well.
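The rectangle conversion illustrated in FIGS. 3A-3C can be sketched as a function that emits the idealized action sequence. The action names (MOVE, TOOL_ON, CUT, TOOL_OFF) are invented labels for illustration, not the patent's machine-file format.

```python
def rectangle_machine_moves(x, y, width, height):
    """Emit the idealized action sequence for cutting an axis-aligned
    rectangle: travel from home to the start corner, activate the cutting
    tool, trace the four sides, deactivate, and return to home (0, 0)."""
    corners = [(x, y), (x + width, y), (x + width, y + height),
               (x, y + height), (x, y)]
    actions = [("MOVE", corners[0]), ("TOOL_ON", None)]
    actions += [("CUT", corner) for corner in corners[1:]]
    actions += [("TOOL_OFF", None), ("MOVE", (0, 0))]
    return actions
```

For the 300×150 rectangle whose top-left corner sits at (100, 100), the first action is a travel move to (100, 100) and the last returns the tool to (0, 0).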
For example, the height of the rectangle may no longer be directly present in the machine file; the height will be as tall as the plastic tube is high. The machine file can also add some information, for example, the instruction to move the print head from its home position to a corner of the rectangle to begin printing. The instructions can even depart from the directly expressed intent of the user. A common setting in 3D printers, for example, causes solid shapes to be rendered as hollow in the machine file to save on material cost. As shown by the example of FIGS. 3A-C, the conversion of the source file 310 to the machine file 340 can cause the CNC machine to move the cutting tool from (0,0) (in FIG. 3B) to the point at which cutting is to begin, activate the cutting tool (for example lower a drag knife or energize a laser), trace the rectangle, deactivate the cutting tool, and return to (0,0). Once the machine file has been created, a motion plan for the CNC machine 100 can be generated. The motion plan contains the data that determines the actions of components of the CNC machine 100 at different points in time. The motion plan can be generated on the CNC machine 100 itself or by another computing system. A motion plan can be a stream of data that describes, for example, electrical pulses that indicate exactly how motors should turn, a voltage that indicates the desired output power of a laser, a pulse train that specifies the rotational speed of a mill bit, etc. Unlike the source files and the machine files such as G-code, motion plans are defined by the presence of a temporal element, either explicit or inferred, indicating the time or time offset at which each action should occur. This allows for one of the key functions of a motion plan, coordinated motion, wherein multiple actuators coordinate to have a single, pre-planned effect. The motion plan renders the abstract, idealized machine file as a practical series of electrical and mechanical tasks.
For example, a machine file might include the instruction to “move one inch to the right at a speed of one inch per second, while maintaining a constant number of revolutions per second of a cutting tool.” The motion plan must take into consideration that the motors cannot accelerate instantly, and instead must “spin up” at the start of motion and “spin down” at the end of motion. The motion plan would then specify pulses (e.g., sent to stepper motors or other apparatus for moving the head or other parts of a CNC machine) occurring slowly at first, then faster, then more slowly again near the end of the motion. The machine file is converted to the motion plan by the motion controller/planner. Physically, the motion controller can be a general or special purpose computing device, such as a high performance microcontroller or single board computer coupled to a Digital Signal Processor (DSP), or it may be arranged as a cloud-based, distributed system. The job of the motion controller is to take the vector machine code and convert it into electrical signals that will be used to drive the motors on the CNC machine 100, taking into account the exact state of the CNC machine 100 at that moment (e.g., “since the machine is not yet moving, maximum torque must be applied, and the resulting change in speed will be small”) and physical limitations of the machine (e.g., accelerate to such-and-such speed, without generating forces in excess of those allowed by the machine's design). The signals can be step and direction pulses fed to stepper motors or location signals fed to servomotors, among other possibilities, which create the motion and actions of the CNC machine 100, including the operation of elements like actuation of the head 160, moderation of heating and cooling, and other operations. In some implementations, a compressed file of electrical signals can be decompressed and then directly output to the motors.
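The spin-up/spin-down behavior described above is commonly realized as a trapezoidal velocity profile. The sketch below is a generic illustration of that idea under assumed units, not the patent's motion planner.

```python
def trapezoidal_profile(distance, v_max, accel, dt=0.01):
    """Sample velocities for a move of `distance`: accelerate at `accel` up
    to `v_max`, cruise, then decelerate back to rest. Falls back to a
    triangular profile when the move is too short to reach `v_max`."""
    t_ramp = v_max / accel
    d_ramp = 0.5 * accel * t_ramp ** 2
    if 2 * d_ramp > distance:               # short move: never reaches v_max
        t_ramp = (distance / accel) ** 0.5
        v_max = accel * t_ramp
        d_ramp = distance / 2.0
    t_cruise = (distance - 2 * d_ramp) / v_max
    t_total = 2 * t_ramp + t_cruise
    velocities, t = [], 0.0
    while t <= t_total:
        if t < t_ramp:                      # spin up
            velocities.append(accel * t)
        elif t < t_ramp + t_cruise:         # cruise
            velocities.append(v_max)
        else:                               # spin down
            velocities.append(max(0.0, accel * (t_total - t)))
        t += dt
    return velocities
```

Converting each velocity sample into a step rate yields pulses that occur slowly at first, then faster, then slowly again near the end of the motion, matching the pacing described above.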
These electrical signals can include binary instructions similar to 1's and 0's to indicate the electrical power that is applied to each input of each motor over time to effect the desired motion. In the most common implementation, the motion plan is the only stage that understands the detailed physics of the CNC machine 100 itself, and translates the idealized machine file into implementable steps. For example, a particular CNC machine 100 might have a heavier head, and require more gradual acceleration. This limitation is modeled in the motion planner and affects the motion plan. Each model of CNC machine can require precise tuning of the motion plan based on its measured attributes (e.g. motor torque) and observed behavior (e.g. belt skips when accelerating too quickly). The CNC machine 100 can also tune the motion plan on a per-machine basis to account for variations from CNC machine to CNC machine. The motion plan can be generated and fed to the output devices in real-time, or nearly so. The motion plan can also be pre-computed and written to a file instead of streamed to a CNC machine, and then read back from the file and transmitted to the CNC machine 100 at a later time. Transmission of instructions to the CNC machine 100, for example, portions of the machine file or motion plan, can be streamed as a whole or in batches from the computing system storing the motion plan. Batches can be stored and managed separately, allowing pre-computation or additional optimization to be performed on only part of the motion plan. In some implementations, a file of electrical signals, which may be compressed to preserve space and decompressed to facilitate use, can be directly output to the motors. The electrical signals can include binary instructions similar to 1's and 0's to indicate actuation of the motor. The motion plan can be augmented, either by precomputing in advance or updating in real-time, with the aid of machine vision.
Machine vision is a general term describing the use of sensor data, not limited to optical data, to provide additional input to machine operation. Other forms of input can include, for example, audio data from an on-board sound sensor such as a microphone, or position/acceleration/vibration data from an on-board sensor such as a gyroscope or accelerometer. Machine vision can be implemented by using cameras to provide images of, for example, the CNC machine 100, the material being operated on by the CNC machine, the environment of the CNC machine 100 (e.g. whether there is debris accumulating or smoke present), or any combination of these. These cameras can then route their output to a computer for processing. By viewing the CNC machine 100 in operation and analyzing the image data, it can, for example, be determined if the CNC machine 100 is working correctly, if the CNC machine 100 is performing optimally, the current status of the CNC machine 100 or subcomponents of the CNC machine 100, etc. Similarly, the material can be imaged and, for example, the operation of the CNC machine 100 can be adjusted according to instructions, users can be notified when the project is complete, or information about the material can be determined from the image data. Error conditions can be identified, such as if a foreign body has been inadvertently left in the CNC machine 100, the material has been inadequately secured, or the material is reacting in an unexpected way during machining.
Camera Systems
Cameras can be mounted inside the CNC machine 100 to acquire image data during operation of the CNC machine 100.
Image data refers to all data gathered from a camera or image sensor, including still images, streams of images, video, audio, metadata such as shutter speed and aperture settings, settings or data from or pertaining to a flash or other auxiliary information, and graphic overlays of data superimposed upon the image such as GPS coordinates, in any format, including but not limited to raw sensor data such as a .DNG file, processed image data such as a .JPG file, and data resulting from the analysis of image data processed on the camera unit, such as direction and velocity from an optical mouse sensor. For example, there can be cameras mounted such that they gather image data from (also referred to as 'view' or 'image') an interior portion of the CNC machine 100. The viewing can occur when the lid 130 is in a closed position or in an open position, or independently of the position of the lid 130. In one implementation, one or more cameras, for example a camera mounted to the interior surface of the lid 130 or elsewhere within the case or enclosure, can view the interior portion when the lid 130 of the CNC machine 100 is in a closed position. In particular, in some preferred embodiments, the cameras can image the material 140 while the CNC machine 100 is closed and, for example, while machining the material 140. In some implementations, cameras can be mounted within the interior space and opposite the working area. In other implementations, there can be a single camera or multiple cameras attached to the lid 130. Cameras can also be capable of motion such as translation to a plurality of positions, rotation, and/or tilting along one or more axes. One or more cameras can be mounted to a translatable support, such as a gantry 210, which can be any mechanical system that can be commanded to move (movement being understood to include rotation) the camera, or a mechanism such as a mirror that can redirect the view of the camera, to different locations and view different regions of the CNC machine.
The head 160 is a special case of the translatable support, where the head 160 is limited by the track 220 and the translation rail 170 that constrain its motion. Lenses can be chosen for wide angle coverage, for extreme depth of field so that both near and far objects may be in focus, or many other considerations. The cameras may be placed to additionally capture the user so as to document the building process, or placed in a location where the user can move the camera, for example on the underside of the lid 130 where opening the CNC machine 100 causes the camera to point at the user. Here, for example, the single camera described above can take an image when the lid is not in the closed position. Such an image can include an object, such as a user, that is outside the CNC machine 100. Cameras can be mounted on movable locations like the head 160 or lid 130 with the intention of using video or multiple still images taken while the camera is moving to assemble a larger image, for example scanning the camera across the material 140 to get an image of the material 140 in its totality, so that the analysis of image data may span more than one image. As shown in FIG. 1, a lid camera 110, or multiple lid cameras, can be mounted to the lid 130. In particular, as shown in FIG. 1, the lid camera 110 can be mounted to the underside of the lid 130. The lid camera 110 can be a camera with a wide field of view 112 that can image a first portion of the material 140. This can include a large fraction of the material 140 and the material bed, or even all of the material 140 and material bed 150. The lid camera 110 can also image the position of the head 160, if the head 160 is within the field of view of the lid camera 110. Mounting the lid camera 110 on the underside of the lid 130 allows for the user to be in view when the lid 130 is open. This can, for example, provide images of the user loading or unloading the material 140, or retrieving a finished project.
Here, a number of sub-images, possibly acquired at a number of different locations, can be assembled, potentially along with other data like a source file such as an SVG or digitally rendered text, to provide a final image. When the lid 130 is closed, the lid camera 110 rotates down with the lid 130 and brings the material 140 into view. Also as shown in FIG. 1, a head camera 120 can be mounted to the head 160. The head camera 120 can have a narrower field of view 122 and take higher resolution images of a smaller area of the material 140 and the material bed than the lid camera 110. One use of the head camera 120 can be to image the cut made in the material 140. The head camera 120 can identify the location of the material 140 more precisely than is possible with the lid camera 110. Other locations for cameras can include, for example, on an optical system guiding a laser for laser cutting, on the laser itself, inside a housing surrounding the head 160, underneath or inside of the material bed 150, in an air filter or associated ducting, etc. Cameras can also be mounted outside the CNC machine 100 to view users or view external features of the CNC machine 100. Multiple cameras can also work in concert to provide a view of an object or material 140 from multiple locations, angles, resolutions, etc. For example, the lid camera 110 can identify the approximate location of a feature in the CNC machine 100. The CNC machine 100 can then instruct the head 160 to move to that location so that the head camera 120 can image the feature in more detail. While the examples herein are primarily drawn to a laser cutter, the use of the cameras for machine vision in this application is not limited to only that specific type of CNC machine 100. For example, if the CNC machine 100 were a lathe, the lid camera 110 can be mounted nearby to view the rotating material 140 and the head 160, with the head camera 120 located near the cutting tool.
Similarly, if the CNC machine 100 were a 3D printer, the head camera 120 can be mounted on the head 160 that deposits material 140 for forming the desired piece. An image recognition program can identify conditions in the interior portion of the CNC machine 100 from the acquired image data. The conditions that can be identified are described at length below, but can include positions and properties of the material 140, the positions of components of the CNC machine 100, errors in operation, etc. Based in part on the acquired image data, instructions for the CNC machine 100 can be created or updated. The instructions can, for example, act to counteract or mitigate an undesirable condition identified from the image data. The instructions can include changing the output of the head 160. For example, for a CNC machine 100 that is a laser cutter, the laser can be instructed to reduce or increase power or turn off. Also, the updated instructions can include different parameters for motion plan calculation, or making changes to an existing motion plan, which could change the motion of the head 160 or the gantry 210. For example, if the image indicates that a recent cut was offset from its desired location by a certain amount, for example due to a part moving out of alignment, the motion plan can be calculated with an equal and opposite offset to counteract the problem, for example for a second subsequent operation or for all future operations. The CNC machine 100 can execute the instructions to create the motion plan or otherwise effect the changes described above. In some implementations, the movable component can be the gantry 210, the head 160, or an identifiable mark on the head 160. The movable component, for example the gantry 210, can have a fixed spatial relationship to the movable head.
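The "equal and opposite offset" correction described above can be illustrated with a minimal sketch. The function name and the representation of a motion plan as a list of (x, y) points are assumptions made purely for illustration.

```python
def corrected_motion_plan(points, observed_offset):
    """Shift every planned point by the negative of the observed cut offset,
    so that subsequent operations counteract the misalignment seen in the
    image data. `points` is a list of (x, y) tuples (an assumed, simplified
    stand-in for a motion plan)."""
    dx, dy = observed_offset
    return [(x - dx, y - dy) for x, y in points]
```

For instance, if image analysis shows a cut landed 0.5 units too far in x, passing an observed offset of (0.5, 0.0) shifts all subsequent points back by that amount.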
The image data can update software controlling operation of the CNC machine 100 with the position of the movable head and/or the movable component, and/or any higher order derivative thereof. Because the type of image data required can vary, and/or because of possible limitations as to the field of view of any individual camera, multiple cameras can be placed throughout the CNC machine 100 to provide the needed image data. Camera choice and placement can be optimized for many use cases. Cameras closer to the material 140 can be used for detail at the expense of a wide field of view. Multiple cameras may be placed adjacently so that images produced by the multiple cameras can be analyzed by the computer to achieve higher resolution or wider coverage jointly than was possible for any image individually. The manipulation and improvement of images can include, for example, stitching of images to create a larger image, adding images to increase brightness, differencing images to isolate changes (such as moving objects or changing lighting), multiplying or dividing images, averaging images, rotating images, scaling images, sharpening images, and so on, in any combination. Further, the system may record additional data to assist in the manipulation and improvement of images, such as recordings from ambient light sensors and the location of movable components. Specifically, stitching can include taking one or more sub-images from one or more cameras and combining them to form a larger image. Some portions of the images can overlap as a result of the stitching process. Other images may need to be rotated, trimmed, or otherwise manipulated to provide a consistent and seamless larger image as a result of the stitching. Lighting artifacts such as glare, reflection, and the like, can be reduced or eliminated by any of the above methods. Also, the image analysis program can perform edge detection and noise reduction or elimination on the acquired images.
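As a rough sketch of the stitching operation described above, sub-images taken at known offsets can be accumulated into one larger array, averaging wherever they overlap. This is a simplified illustration that assumes grayscale images and known integer pixel offsets; a real implementation would also handle rotation, trimming, and exposure differences.

```python
import numpy as np

def stitch(tiles):
    """Combine sub-images into one larger image.
    `tiles` is a list of (image, (row_off, col_off)) pairs; overlapping
    regions are averaged, uncovered regions remain zero."""
    h = max(img.shape[0] + r for img, (r, c) in tiles)
    w = max(img.shape[1] + c for img, (r, c) in tiles)
    acc = np.zeros((h, w), dtype=float)   # running pixel sums
    cnt = np.zeros((h, w), dtype=float)   # how many tiles cover each pixel
    for img, (r, c) in tiles:
        acc[r:r + img.shape[0], c:c + img.shape[1]] += img
        cnt[r:r + img.shape[0], c:c + img.shape[1]] += 1
    return acc / np.maximum(cnt, 1)
```

Averaging the overlap is one simple way to hide seams; weighted blending toward each tile's center is a common refinement.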
Edge detection can include performing contrast comparisons of different parts of the image to detect edges and identify objects or features in the image. Noise reduction can involve averaging or smoothing of one or more images to reduce the contribution of periodic, random, or pseudo-random image noise, for example that due to CNC machine 100 operation such as vibrating fans, motors, etc.
FIG. 4A is a diagram illustrating the addition of images, consistent with some implementations of the current subject matter. Images taken by the cameras can be added, for example, to increase the brightness of an image. In the example of FIG. 4A, there is a first image 410, a second image 412, and a third image 414. First image 410 has horizontal bands (shown in white against a black background in the figure). The horizontal bands can conform to a more brightly lit object, though the main point is that there is a difference between the bands and the background. Second image 412 has similar horizontal bands, but offset in the vertical direction relative to those in the first image 410. When the first image 410 and second image 412 are added, their sum is shown by the third image 414. Here, the two sets of bands interleave to fill in the bright square as shown. This technique can be applied to, for example, acquiring many image frames from the cameras, possibly in low light conditions, and adding them together to form a brighter image.
FIG. 4B is a diagram illustrating the subtraction of images, consistent with some implementations of the current subject matter. Image subtraction can be useful to, for example, isolate a dim laser spot from a comparatively bright image. Here, a first image 420 shows two spots, one representative of a laser spot and the other of an object. To isolate the laser spot, a second image 422 can be taken with the laser off, leaving only the object. Then, the second image 422 can be subtracted from the first image 420 to arrive at the third image 424.
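The additions and subtractions of FIGS. 4A and 4B reduce to element-wise array arithmetic. A minimal sketch follows; the band layout and pixel values are invented for illustration.

```python
import numpy as np

# Interleaved bright bands (as in FIG. 4A): adding the two frames fills
# in the bright square uniformly.
first = np.zeros((4, 4))
first[0::2, :] = 1.0    # bands on even rows
second = np.zeros((4, 4))
second[1::2, :] = 1.0   # bands on odd rows, offset from the first image
summed = first + second # uniformly bright result

# Isolating a dim spot (as in FIG. 4B): subtract a laser-off frame from a
# laser-on frame so only the laser spot remains.
laser_on = np.array([[0.0, 5.0, 0.0],
                     [0.0, 0.0, 9.0]])   # object (5) plus laser spot (9)
laser_off = np.array([[0.0, 5.0, 0.0],
                      [0.0, 0.0, 0.0]])  # object only
spot = np.clip(laser_on - laser_off, 0, None)  # only the laser spot survives
```

The same two operations scale directly to stacks of real camera frames, for example summing many low-light exposures into one brighter image.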
The remaining spot in the third image 424 is the laser spot.
FIG. 4C is a diagram illustrating the differencing of images to isolate a simulated internal lighting effect, consistent with some implementations of the current subject matter. There can be an object in the CNC machine 100, represented as a circle in the first image 430. This could represent, for example, an object on the material bed 150 of the CNC machine 100. If, for example, half of the material bed 150 of the CNC machine 100 was illuminated by outside lighting, such as a sunbeam, the second image 432 might appear as shown, with the illuminated side brighter than the side without the illumination. It can sometimes be advantageous to use internal lighting during operation, for example to illuminate a watermark, aid in image diagnostics, or simply to better show a user what is happening in the CNC machine. Even if none of these reasons apply, however, internal lighting allows reduction or elimination of the external lighting (in this case the sunbeam) via this method. This internal lighting is represented in the third image 434 by adding a brightness layer to the entire second image 432. To isolate the effect of the internal lighting, the second image 432 can be subtracted from the third image 434 to result in the fourth image 436. Here, the fourth image 436 shows the area, and the object, as it would appear under only internal lighting. This differencing can allow image analysis to be performed as if only the controlled internal lighting were present, even in the presence of external lighting contaminants. Machine vision processing of images can occur at, for example, the CNC machine 100, on a locally connected computer, or on a remote server connected via the internet. In some implementations, image processing capability can be performed by the CNC machine 100, but with limited speed. One example of this can be where the onboard processor is slow and can run only simple algorithms in real-time, but which can run more complex analysis given more time.
In such a case, the CNC machine 100 could pause for the analysis to be complete, or alternatively, execute the analysis on a faster connected computing system. A specific example can be where sophisticated recognition is performed remotely, for example, by a server on the internet. In these cases, limited image processing can be done locally, with more detailed image processing and analysis being done remotely. For example, the camera can use a simple algorithm, run on a processor in the CNC machine 100, to determine when the lid 130 is closed. Once the CNC machine 100 detects that the lid 130 is closed, the processor on the CNC machine 100 can send images to a remote server for more detailed processing, for example, to identify the location of the material 140 that was inserted. The system can also devote dedicated resources to analyzing the images locally, pause other actions, or divert computing resources away from other activities. In another implementation, the head 160 can be tracked by onboard, real-time analysis. For example, tracking the position of the head 160, a task normally performed by optical encoders or other specialized hardware, can be done with high resolution, low resolution, or a combination of both high and low resolution images taken by the cameras. As high-resolution images are captured, they can be transformed into lower resolution images that are smaller in memory size by resizing or cropping. If the images include video or a sequence of still images, some may be eliminated or cropped. A data processor can analyze the smaller images repeatedly, several times a second for example, to detect any gross misalignment. If a misalignment is detected, the data processor can halt all operation of the CNC machine 100 while more detailed processing more precisely locates the head 160 using higher resolution images. Upon location of the head 160, the head 160 can be adjusted to recover the correct location.
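A minimal sketch of the low-resolution tracking loop described above: high-resolution frames are block-averaged down to a smaller image, and the head's apparent position is compared against the position the motion plan expects. Locating the head via the brightest pixel is a stand-in assumption for a real marker detector, and all names are illustrative.

```python
import numpy as np

def downsample(img, factor):
    """Block-average a grayscale image by an integer factor to shrink it
    (assumes the image dimensions divide evenly by the factor)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def gross_misalignment(img, expected_rc, tol):
    """Locate the brightest pixel (a stand-in for detecting the head marker)
    and report whether it is more than `tol` pixels away from where the
    motion plan says the head should be."""
    r, c = np.unravel_index(np.argmax(img), img.shape)
    er, ec = expected_rc
    return abs(r - er) > tol or abs(c - ec) > tol
```

In the scheme described in the text, a True result would trigger a halt and a slower, higher-resolution pass to locate the head precisely.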
Alternatively, images can be uploaded to a server where further processing can be performed. The location can be determined by, for example, looking at the head 160 with the lid camera 110, by looking at what the head camera 120 is currently imaging, etc. For example, the head 160 could be instructed to move to a registration mark. The head camera 120 can then image the registration mark to detect any minute misalignment.
Basic Camera Functionality
The cameras can be, for example, a single wide-angle camera, multiple cameras, a moving camera where the images are digitally combined, etc. The cameras used to image a large region of the interior of the CNC machine 100 can be distinct from other cameras that image a more localized area. The head camera 120 can be one example of a camera that, in some implementations, images a smaller area than the wide-angle cameras. There are other camera configurations that can be used for different purposes. A camera (or cameras) with a broad field of view can cover the whole of the machine interior, or a predefined significant portion thereof. For example, the image data acquired from one or more of the cameras can include most (meaning over 50%) of the working area. In other embodiments, at least 60%, 70%, 80%, 90%, or 100% of the working area can be included in the image data. The above amounts do not take into account obstruction by the material 140 or any other intervening objects. For example, if a camera is capable of viewing 90% of the working area without material 140, and a piece of material 140 is placed in the working area, partially obscuring it, the camera is still considered to be providing image data that includes 90% of the working area. In some implementations, the image data can be acquired when the interlock is not preventing the emission of electromagnetic energy.
In other implementations, a camera mounted outside the machine can see users and/or material 140 entering or exiting the CNC machine 100, record the use of the CNC machine 100 for sharing or analysis, or detect safety problems such as an uncontrolled fire. Other cameras can provide a more precise look with a limited field of view. Optical sensors like those used on optical mice can provide very low resolution and few colors, or greyscale, over a very small area with very high pixel density, then quickly process the information to detect material 140 moving relative to the optical sensor. The lower resolution and color depth, plus specialized computing power, allow very quick and precise operation. Conversely, if the head is static and the material is moved, for example if the user bumps it, this approach can see the movement of the material and characterize it very precisely so that additional operations on the material continue where the previous operations left off, for example resuming a cut that was interrupted before the material was moved. Video cameras can detect changes over time, for example comparing frames to determine the rate at which the camera is moving. Still cameras can be used to capture higher resolution images that can provide greater detail. Yet another type of optical scanning can be to implement a linear optical sensor, such as a flatbed scanner, on an existing rail, like the sliding gantry 210 in a laser system, and then scan it over the material 140, assembling an image as it scans. To isolate the light from the laser, the laser may be turned off and on again, and the difference between the two measurements indicates the light scattered from the laser while removing the effect of environmental light. The cameras can have fixed or adjustable sensitivity, allowing them to operate in dim or bright conditions. There can be any combination of cameras that are sensitive to different wavelengths.
Some cameras, for example, can be sensitive to wavelengths corresponding to a cutting laser, a range-finding laser, a scanning laser, etc. Other cameras can be sensitive to wavelengths that specifically fall outside the wavelength of one or more lasers used in the CNC machine 100. The cameras can be sensitive to visible light only, or can have extended sensitivity into infrared or ultraviolet, for example to view invisible barcodes marked on the surface, discriminate between otherwise identical materials based on IR reflectivity, or view invisible (e.g. infrared) laser beams directly. The cameras can even be a single photodiode that measures, e.g., the flash of the laser striking the material 140, or which reacts to light emissions that appear to correlate with an uncontrolled fire. The cameras can be used to image, for example, a beam spot on a mirror, light escaping an intended beam path, etc. The cameras can also detect scattered light, for example if a user is attempting to cut a reflective material. Other types of cameras can be implemented that, for example, instead of detecting light of the same wavelength as the laser, instead detect a secondary effect, such as infrared radiation (with a thermographic camera) or x-rays given off by contact between the laser and another material. The cameras may be coordinated with lighting sources in the CNC machine 100. The lighting sources can be positioned anywhere in the CNC machine 100, for example, on the interior surface of the lid 130, the walls, the floor, the gantry 210, etc. One example of coordination between the lighting sources and the cameras can be to adjust internal LED illumination while acquiring images of the interior portion with the cameras. For example, if the camera is only capable of capturing images in black and white, the internal LEDs can illuminate sequentially in red, green, and blue, capturing three separate images. The resulting images can then be combined to create a full color RGB image.
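Combining the three sequentially illuminated monochrome exposures into one color image is a simple channel-stacking operation, sketched below. The function name and sample values are assumptions for illustration.

```python
import numpy as np

def rgb_from_sequential(frame_r, frame_g, frame_b):
    """Stack three monochrome frames, each captured under one LED color
    (red, green, then blue), into a single H x W x 3 RGB image."""
    return np.stack([frame_r, frame_g, frame_b], axis=-1)
```

A real pipeline would typically also normalize each channel for the LEDs' different brightness and the sensor's spectral response, but the core reconstruction is just this stacking.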
If external illumination is causing problems with shadows or external lighting effects, the internal lighting can be turned off while a picture is taken, then turned on while a second picture is taken. By subtracting the two on a pixel-by-pixel basis, ambient light can be cancelled out so that it can be determined what the image looks like when illuminated only by internal lights. If lighting is movable, for example on the translation arm of the CNC machine 100, it can be moved around while multiple pictures are taken, then combined, to achieve an image with more even lighting. The brightness of the internal lights can also be varied like the flash in a traditional camera to assist with illumination. The lighting can be moved to a location where it better illuminates an area of interest, for example so it shines straight down a slot formed by a cut, so a camera can see the bottom of the cut. If the internal lighting is interfering, it can be turned off while the camera takes an image. Optionally, the lighting can be turned off for such a brief period that the viewer does not notice (e.g. for less than a second, less than 1/60th of a second, or less than 1/120th of a second). Conversely, the internal lighting may be momentarily brightened like a camera flash to capture a picture. Specialized lights may be used and/or engaged only when needed; for example, an invisible but UV-fluorescent ink might be present on the material. When scanning for a barcode, UV illumination might be briefly flashed while a picture is captured so that any ink present would be illuminated. The same technique of altering the lighting conditions can be performed by toggling the range-finding and/or cutting lasers as well, to isolate their signature and/or effects when imaging. If the object (or camera) moves between acquisitions, then the images can be cropped, translated, expanded, rotated, and so on, to obtain images that share common features in order to allow subtraction.
This differencing technique is preferably done with the automatic adjustments in the cameras overridden or disabled, for example by disabling autofocus, flashes, etc. Features that can ideally be held constant between images can include, for example, aperture, shutter speed, white balance, etc. In this way, the changes in the two images are due only to differences in the lighting and not due to adjustment in the optical system. Multiple cameras, or a single camera moved to different locations in the CNC machine 100, can provide images from different angles to generate 3D representations of the surface of the material 140 or an object. The 3D representations can be used for generating 3D models, for measuring the depth that an engraving or laser operation produced, or for providing feedback to the CNC machine 100 or a user during the manufacturing process. They can also be used for scanning, to build a model of the material 140 for replication. The camera can be used to record photos and video that the user can use to share their progress. Automatic "making of" sequences can be created that stitch together various still and video images along with additional sound and imagery, for example the digital rendering of the source file or the user's picture from a social network. Knowledge of the motion plan, or even the control of the cameras via the motion plan directly, can enable a variety of optimizations. In one example, given a machine with two cameras, one of which is mounted in the head and one of which is mounted in the lid, the final video can be created with footage from the head camera at any time that the gantry is directed to a location that is known to obscure the lid camera. In another example, the cameras can be instructed to reduce their aperture size, reducing the amount of light let in, when the machine's internal lights are activated.
In another example, if the machine is a laser cutter/engraver and activating the laser causes a camera located in the head to become overloaded and useless, footage from that camera may be discarded when it is unavailable. In another example, elements of the motion plan may be coordinated with the camera recording for optimal visual or audio effect, for example fading up the interior lights before the cut or driving the motors in a coordinated fashion to sweep the head camera across the material for a final view of the work result. In another example, sensor data collected by the system might be used to select camera images; for example, a still photo of the user might be captured from a camera mounted in the lid when an accelerometer, gyroscope, or other sensor in the lid detects that the lid has been opened and it has reached the optimal angle. In another example, recording of video might cease if an error condition is detected, such as the lid being opened unexpectedly during a machining operation. The video can be automatically edited using information like the total duration of the cut file to eliminate or speed up monotonous events; for example, if the laser must make 400 holes, then that section of the cut plan could be shown at high speed. Traditionally, these decisions must all be made by reviewing the final footage, with little or no a priori knowledge of what it contains. Pre-selecting the footage (and even coordinating its capture) can allow higher quality video and much less time spent editing it. Video and images from the production process can be automatically stitched together in a variety of fashions, including stop motion with images, interleaving video with stills, and combining video and photography with computer-generated imagery, e.g. a 3D or 2D model of the item being rendered. Video can also be enhanced with media from other sources, such as pictures taken with the user's camera of the final product.
Additional features that can be included individually, or in any combination, are described in the sections below. As used herein, the term "pattern" is intended to refer to features that are independent of any position, orientation, translation, rotation, or the like, of such a pattern. For example, a triangle can be a pattern that is still a triangle even when the triangle is to be cut at any location in the CNC machine, rotated in a software visualization, imaged from an object in the CNC machine, or provided in an image from another computing system, etc. A pattern can be transformed by methods that communicate data unrelated to the pattern, such as specifying a location, a translation, or a rotation. For example, a command to shift a location of a pattern by five pixels does not contain any information about the pattern being shifted, other than the change in location. A pattern can also be modified by operations that preserve at least a portion of the pattern's appearance, for example outlining and sharpening. Similarly, a pattern can be scaled, mirrored, filled, colored, distorted (e.g. to correct image aberration), and still refer to the initial pattern. As such, the "pattern" does not refer to a fiducial (i.e. a mark, cut, or other reference indicator used to locate and/or orient a pattern that is to be cut or engraved).
Image Aberration Correction
FIG. 5 is a diagram illustrating correcting aberrations in images acquired by a camera with a wide field of view, consistent with some implementations of the current subject matter. A principal challenge of wide-angle imaging inside a small working space with the unit closed is the distortion introduced by the wide-angle lens required. Images from cameras, particularly those with a wide field of view, can suffer from multiple types of distortions.
In one implementation, an image correction program can be executed to convert distorted image data510(which can be considered to be the sum of a perfect image and a distortion) to corrected image data560(which can be either a perfect image or at least an image with reduced distortion). The distortion correction can include processing an image to achieve one or more (or optionally all) of the following: removing the distortion, enhancing the image by increasing contrast, and mapping pixels in the image to corresponding physical locations within the working area, or other areas in the CNC machine. This includes approaches where global parametric models of the camera distortion are applied for enhanced mapping stability. The distortions can be due to optical components in a camera, such as a wide angle lens, the de-centration of an imaging sensor within said lens, chromatic aberrations, reflections or reflectivity, damage or undesirable coatings on the lens, etc. These distortions can be compounded given external factors related to the orientation of the camera110with respect to the material bed150it is observing as a result of its mount on the lid130, including the camera's position, rotation and tilt. After making the corrections, the image data can be replaced with, or used instead of, the corrected image data prior to identifying conditions in the CNC machine100or performing further image analysis. In another implementation, the conversion can be performed by imaging one or more visible features520shown in the distorted image data. In the example shown inFIG.5, the visible features520can be crosses distributed with a known distance separation across a surface of an object. The distorted image510, which includes the visible features520, can be acquired. A partially de-distorted image530can be generated by applying a barrel de-distortion function to the distorted image510.
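A barrel de-distortion function of the kind mentioned above can take many forms; one common choice is a single-coefficient radial model. The sketch below is illustrative only: the coefficient `k`, the distortion center `(cx, cy)`, and the function name are assumptions for the example, not details taken from any particular implementation.

```python
import math

def barrel_undistort_point(x, y, cx, cy, k):
    """Map a distorted pixel (x, y) toward its undistorted position using a
    single-coefficient radial model: r_u = r_d * (1 + k * r_d**2).
    (cx, cy) is the distortion center; a positive k spreads points outward,
    reversing the inward "pinch" characteristic of barrel distortion."""
    dx, dy = x - cx, y - cy
    r_d = math.hypot(dx, dy)          # distorted radius from the center
    scale = 1.0 + k * r_d ** 2        # radial correction factor
    return cx + dx * scale, cy + dy * scale
```

A point at the distortion center is unchanged, while points farther from the center are pushed proportionally farther outward; applying this mapping to every pixel produces a partially de-distorted image analogous to image530.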
The partially de-distorted image530can be separated into smaller images540, with each of the smaller images540including only one of the visible features520. The plurality of smaller images540can be sorted (as shown by the numbering in the smaller images540), based on coordinates of the visible features520, into at least one set of visible features, the set of visible features being approximately co-linear. For example, smaller images1,2,3, and4can be determined to be co-linear (in the X direction) and smaller images1and5can be determined to be co-linear (in the Y direction). Mathematical expressions for a line550that passes through each of the coordinates can be calculated for each of the set of visible features and based on the coordinates of the visible features520in the corresponding set. The line550can be, for example, a polynomial fit to the set of visible features520, a spline, etc. The distorted image data510, at any point in the image data, can be converted to the corrected image data560by applying a correction to the distorted image data510based on an interpolation of the mathematical expressions to other points in the distorted image data510. For example, the interpolation can be between lines550that extend in two orthogonal directions (i.e. a grid pattern shown inFIG.5). The linear distance between the interpolated lines can correspond to less than 5 pixels, less than 3 pixels, or a single pixel. Optionally, coarser interpolation can be used that extends over more pixels than those mentioned previously. Material Outline The cameras can also be used to determine the size and outline of material140in the CNC machine100. This allows the user to place material140at arbitrary locations within the CNC machine100and to use material140with unusual shapes, for example scrap material140with holes already cut in it. 
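As one illustration of computing the mathematical expression for a line550, an ordinary least-squares fit can be run through the marker coordinates in one co-linear set. This is a minimal sketch of the simplest (first-order polynomial) case; as noted above, higher-order polynomials or splines can be used instead.

```python
def fit_line(points):
    """Least-squares fit of y = a + b*x through a set of (x, y) marker
    coordinates that were judged approximately co-linear."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    denom = n * sxx - sx * sx
    b = (n * sxy - sx * sy) / denom   # slope
    a = (sy - b * sx) / n             # intercept
    return a, b
```

Fitting one such line per co-linear set in each of the two orthogonal directions yields the grid of lines between which the per-pixel correction is interpolated.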
Additional methods for performing edge detection include, but are not limited to, selection of contrasting elements in the image gradient (i.e., directional change of intensity in an image), selection or removal of elements in the frequency domain by means of a transformation (e.g., Discrete Fourier Transforms), and removal of data likely to be a backdrop (e.g., the CNC machine bed). Images from the cameras can be compared against images of the device without any material140. Differencing these images can provide an indication of the material140, such as an outline or a 3D shape. In another implementation, the bottom of the CNC machine100and/or the material bed150can be designed to appear in a particular manner to facilitate digital removal from an image. In a further implementation, data removal of the likely backdrop (e.g., the CNC machine bed) may be performed via image derivative analysis, for instance, as shown inFIGS.6A-6I. In another example, the bottom of the CNC machine100could be a predetermined color (e.g., green and/or a different color) and that color can be digitally removed from the image to identify the material140. Alternatively and/or additionally, a “flash” can be used by onboard LEDs to illuminate with a color that would not reflect from the material140. In another example, the material140can be recognized by its appearance and/or by the presence of distinguishing markings. For example, barcodes, unique glyphs, fiducial markers, and/or the like may be placed across the surface of the material140. The material140can be identified based at least on the detection and/or decoding of one or more distinguishing markings including, for example, one or more barcodes, glyphs, fiducial markers, and/or the like.
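The image-differencing approach described above can be sketched as follows for grayscale images stored as lists of rows. The threshold value is an arbitrary placeholder for this example; a real system would tune it to lighting and sensor noise.

```python
def difference_mask(bed_image, bed_with_material, threshold=30):
    """Flag pixels that changed between an image of the empty bed and an
    image with material placed on it; large intensity changes suggest the
    pixel belongs to the material rather than the backdrop."""
    return [
        [abs(a - b) > threshold for a, b in zip(row_empty, row_full)]
        for row_empty, row_full in zip(bed_image, bed_with_material)
    ]
```

The resulting boolean mask approximates the material's footprint, from which an outline can be extracted by any of the edge-detection methods above.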
In one implementation, the material140can be pre-coated with ink that reacts to ultraviolet light and/or infrared light such that the material140can be identified based on the presence of the ink as indicated by the reaction of the material140when subject to different types of illumination (e.g., ultraviolet light, infrared light, and/or the like). In another example, one or more distinguishing markings can be printed on the material140with ink that is sensitive to ultraviolet or infrared light. These distinguishing markings can be subsequently subject to the appropriate type of illumination (e.g., ultraviolet light and/or infrared light) and the resulting reaction can be captured with a camera and used to identify the material140. In another example implementation, the edges of the material140can be detected as one or more continuous closed shapes, even when the center of the material140might be invisible (e.g., in the case of clear material such as acrylic, glass, and/or the like). Material Outline Via Image Derivative Analysis FIGS.6A-6Iare diagrams illustrating determination of a material outline by derivative analysis, consistent with some implementations of the current subject matter. Image derivative analysis can aid in achieving acceptable imagery for material outline detection. A variety of stepwise derivative approaches can be applied, one of which is illustrated inFIGS.6A-6I. In one implementation, the rate of change of local pixel regions can be measured and/or calculated. Subsequently, a comparison between the pixels can be used to convolve the gradient of the image. Areas and/or locations of large magnitude gradients in the acquired images can be interpreted as edges. The gradient can be determined in a single direction and/or omni-directionally, examples of each are shown inFIGS.6A-6I. 
By combining differencing with gradient detection, the position of edges in a design, an object, a pattern, and/or the like, can be calculated and stored as additional data and/or coordinates for use in a machine file. The example shown inFIGS.6A-6Iis not intended to be a limiting example. Any of the operations described byFIGS.6A-6Ican be performed in any combination, and in any order. Also, not all of the operations described need to occur, and any subset of the operations can be implemented in any order. Furthermore, as described in greater detail below, other features that are described elsewhere in the instant application, such as those not directly related to edge detection, can be implemented when performing edge detection and/or any other operation described herein. FIG.6Ais a diagram illustrating a distortion-corrected image of the material bed150without the material140, consistent with some implementations of the current subject matter. In some implementations, an image of the material bed150, without the material140, can be acquired by any of the cameras inside the CNC machine100. As discussed herein, some images can be distorted, for example due to a wide-angle lens and/or the angle at which the image is taken. Any of the distortion or aberration correction techniques described herein can be applied to generate a distortion-corrected image of the material bed150. In the example shown inFIG.6A, the shape in the center of the image can be any type of aberration and/or distortion including, for example, an artifact, an area of varying light, a reflection, and/or the like. FIG.6Bis a diagram illustrating a distortion-corrected image of the material bed150with the material140, consistent with some implementations of the current subject matter. Similar to that shown inFIG.6A, a distortion-corrected image of the material bed150, with the material140, can be generated.
In some implementations, the distortion correction can be the same as that applied to generate the image shown inFIG.6A. In other implementations, different and/or additional methods of distortion correction can be used based on the material140that was added. As shown inFIG.6B, the material can include several patterns (e.g. two animals and a hexagon) in addition to the rectangular shape of material itself. FIG.6Cis a diagram illustrating a high-pass filtered, y-axis-only image gradient of the material bed150, consistent with some implementations of the current subject matter. As discussed above, any of the images can be digitally analyzed to calculate a gradient at each pixel in the image. In a two dimensional space, the gradient can be expressed as the sum of the partial derivatives between adjacent pixels in two orthogonal basis directions (e.g. the horizontal direction and the vertical direction). When determining gradients, the intensity of a pixel can be expressed as a numerical value based on, for example, a conversion of the image to a grayscale representation, conversion of the image color to a brightness, and/or the like. The gradient can then be based on the detected change in the intensity between two pixels in the image in the direction between the two pixels. InFIG.6C, only a portion of the gradient of the image of the material bed150in the vertical direction (e.g., y-axis) is shown. In some implementations, the image processing can filter the image to show or indicate only pixels with a gradient and/or a component of the gradient that exceed a threshold value. This can be referred to as high-pass filtering and can be used to distinguish an object, pattern, and/or other shape from a generally non-changing background (e.g. a flat surface of a material bed and/or a face of an object or pattern). Other forms of filtering can also be applied including, for example, bandpass filtering to prevent image saturation where large gradients may be present. 
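A minimal sketch of the y-axis-only gradient with high-pass filtering described above, operating on a grayscale image represented as a list of rows; the threshold is an assumed parameter standing in for whatever cutoff the filtering step uses.

```python
def y_gradient_highpass(image, threshold):
    """Partial derivative along the vertical (y) axis, keeping only pixels
    whose gradient magnitude exceeds the high-pass threshold. `image` is a
    list of rows of grayscale intensities; the output has the same shape."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for y in range(1, rows):
        for x in range(cols):
            g = image[y][x] - image[y - 1][x]   # change between vertical neighbors
            if abs(g) > threshold:
                out[y][x] = g                    # keep only strong transitions
    return out
```

Running the same loop with the roles of rows and columns swapped gives the x-axis gradient, and summing the magnitudes of both gives the omni-directional gradient ofFIG.6E.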
FIG.6Dis a diagram illustrating a high-pass filtered, y-axis-only image gradient of the material bed150with the material140, consistent with some implementations of the current subject matter.FIG.6Dis similar toFIG.6C, but shown with the material140, which includes the patterns shown inFIG.6B(e.g., the two animals and the hexagon). FIG.6Eis a diagram illustrating the high-pass filtered image gradient in all directions, consistent with some implementations of the current subject matter. In some implementations, the gradient can be determined based on the change in image intensity in all directions, e.g. the horizontal direction (e.g., x-axis) and the vertical direction (e.g., y-axis). Again, filtering can be applied to exclude pixels where the calculated gradient is below a threshold value and/or not within a specified range. FIG.6Fis a diagram illustrating a high-pass filtered image gradient of a portion of the material bed150, consistent with some implementations of the current subject matter. In some implementations, a portion of the material bed150can be imaged as described above, instead of imaging the entire material bed150. Alternatively and/or additionally, an image can be cropped or otherwise reduced to a desired portion (or “swatch”) of the material bed150. This portion can be selected to provide a particularly accurate representation of the material bed150. The swatch may be indicated by the user, or it may be programmatically determined by software. For example, the software may use a simpler and/or faster algorithm such as, for example, eliminating all black regions, before using a more sophisticated and/or slower algorithm such as, for example, searching for edges in the remaining region. In another example, a characteristic pattern such as, for example, the honeycomb pattern of the tray upon which the material140is placed, is used to indicate areas to be removed from the swatch before material detection algorithms evaluate what is remaining.
FIG.6Gis a diagram illustrating a degree of matching between the image gradient in all directions shown inFIG.6Eand the image gradient of the portion of the material bed150shown inFIG.6F, consistent with some implementations of the current subject matter. In some implementations, a comparison and/or a match can be calculated between the gradient calculated for the portion of the material bed150inFIG.6Fand the image gradient determined for FIG.6E. Here, a poor match is present between the material140and material bed150(e.g. perhaps because the material140exhibits variations in surface height which may be the case, for example, if the material is a piece of wood and/or has a slightly curved surface). The other regions, shown by the hexagon and the two animal shapes, have a very similar gradient to that of the material bed (e.g. because they are both flat). FIG.6His a diagram illustrating a thresholded image based on gradient match inFIG.6G, consistent with some implementations of the current subject matter. To definitively establish the determination of the material edge, or the edge of a pattern or object present in the acquired images, thresholding can be applied to the gradient match described in the example ofFIG.6G. Applying the thresholding can provide a bitwise distinction between objects and/or patterns present in the acquired image. FIG.6Iis a diagram illustrating a final image showing only the material, consistent with some implementations of the current subject matter. Once the objects or patterns present in an image are distinguished from each other by any combination of the above methods, the objects and patterns can be selectively isolated from each other as desired. In this example, the areas of the image that correspond to the patterns (e.g., the animals and/or the hexagon) on the material140and/or the material bed150can be digitally subtracted from the distortion-corrected image of the material bed150with the material140, as shown inFIG.6B.
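The thresholding step and the final subtraction step can be sketched as below, where `scores` stands for a hypothetical per-pixel gradient-match map and `pattern_mask` flags regions judged not to be material; both names and the cutoff value are assumptions for this illustration.

```python
def threshold_match(scores, cutoff):
    """Bitwise mask: True where the gradient-match score reaches the cutoff,
    i.e. the region transitions in the same way as the bare material bed."""
    return [[s >= cutoff for s in row] for row in scores]

def isolate_material(image, pattern_mask, background=0):
    """Digitally subtract masked regions so only the material remains."""
    return [[background if m else px for px, m in zip(irow, mrow)]
            for irow, mrow in zip(image, pattern_mask)]
```

Applying `threshold_match` to the match map yields the bitwise distinction of the thresholded image, and feeding that mask to `isolate_material` leaves an image containing only the material.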
As shown here, only the material140remains in the image. Such models, for example those applied in the example shown inFIGS.6A-6I, can be tuned based on pre-determined or adaptive confidence settings which are programmed to determine what is considered background versus material. Furthermore, these approaches can be underpinned by a variety of methods, including a histogram of gradient (HOG) detector, whereby a swatch sample of a confidently detected area of the image (e.g., an area detected with high confidence to be background) can be passed over every other part of the image and compared with respect to how much a given sample appears to be transitioning in a similar manner. Conversely, rather than seeking to detect the image background and/or the material bed150of the CNC machine100, this same approach can be applied to direct detection of the material140itself, particularly if the surface properties of the material140lend themselves to this purpose (e.g., if the material140has a consistent and/or readily-imaged paper cover on its surface). Alternatively and/or additionally, combining multiple detection approaches can provide material outline detection with resolution superior to that provided by either method alone. For example, applications requiring detection of scrap materials that have already been cut in intricate and/or complicated patterns may benefit from a high level of precision in material edge detection to better enable users and/or computer-vision aided algorithms to organize subsequent prints and make optimal use of remaining space on the scrap material. Interpreting User Intent Via Image Processing There are multiple strategies and techniques that may be employed to make intelligent decisions with respect to interpreting user intent from traced images in preparation for downstream processing. Such strategies include, for example, techniques for artifact clean-up, edge enhancement, image component recognition, thresholding, and/or the like.
Artifact Clean-Up FIG.7Ais a diagram illustrating application of an artifact threshold set too low, consistent with some implementations of the current subject matter.FIG.7Bis a diagram illustrating application of an artifact threshold set correctly, consistent with some implementations of the current subject matter.FIG.7Cis a diagram illustrating application of an artifact threshold set too high, consistent with some implementations of the current subject matter. Artifacts may be introduced into the image from a range of sources including, for example, lighting conditions (e.g., dark shadows), dust and/or particulate interference, material defects (e.g., folds and/or inconsistencies), and/or the like. Analyses that seek to understand the user's intent for traced images may assist in the probabilistic determination of whether an image detail is more consistent with a component of the desired design, or with an artifact such as a speckle, an unwanted jag, and/or the like (seeFIGS.7A-7C). Artifact Detection Based on Local Pixel Density In one implementation, to detect features indicative of the presence of small artifacts such as, for example, dust and/or the like, the image can be searched by an algorithm trained to identify sparse regions in the image. A sparse region in the image may be an area having few pixels (e.g., that correspond to a pattern or design) that is immediately adjacent to another area that is more densely populated with pixels. Alternatively and/or additionally, a sparse region in the image may be an area where the physical parameters of the laser would be ineffective at processing that level of detail. For instance, the rate of changes in that area may exceed the capabilities of the laser. In some implementations of the current subject matter, artifacts can be detected and removed using an “erode and dilate” technique in which local regions of image sparseness are detected and removed from the traced image.
The “erode and dilate” technique can be applied to isolate and remove the noise in the image relative to user intent. Alternatively and/or additionally, the “erode and dilate” technique can be applied to remove artifacts from traced lines. To further illustrate,FIG.17is a diagram illustrating the “erode and dilate” technique for detecting and removing artifacts, consistent with some implementations of the current subject matter. In the example shown inFIG.17, a traced source image is eroded by removing a small distance (e.g. a pixel) from every black pixel in the image. This is shown in Panel1ofFIG.17, in which a simple exemplar trace image is overlaid with a grid. The optics used in the trace have detected two regions for tracing: (A) a 1-pixel speckle which may have been introduced as noise by any number of imaging anomalies (e.g., lighting and/or drawing artifacts); and (B) the target image to be traced (e.g., a 9-pixel square). The process of erosion shown in Panel2ofFIG.17can be morphological, in that erosion includes selectively removing the exterior-most pixels from the entire traced image. In the case of (A), erosion is a manipulation that causes the speckle to be removed entirely from the trace, whilst the effect on (B) is that the target image is shrunken to a 1-pixel square. The final step in this method is to then dilate the eroded image. In this step, which is shown in Panel3ofFIG.17, all elements of the eroded image are expanded by one pixel. This results in sustained removal of the 1-pixel speckle from region (A), and the restoration of the target image in region (B) to its original state. Artifact Detection Based on Line Thickness and Global Pixel Density In some implementations of the current subject matter, the artifact threshold can be adjusted to match the level of detail detected in the design itself.
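The erode-and-dilate sequence just described can be sketched on a set of (x, y) pixel coordinates. This is a minimal illustration using 8-connected neighborhoods, matching the 9-pixel-square example; an actual implementation may use different structuring elements.

```python
def erode(pixels):
    """Remove every set pixel that touches an unset 8-neighbor, i.e. strip
    the exterior-most pixels from each traced region. A 1-pixel speckle
    disappears entirely; a 3x3 square shrinks to its center pixel."""
    out = set()
    for (x, y) in pixels:
        neighbors = [(x + dx, y + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0)]
        if all(n in pixels for n in neighbors):
            out.add((x, y))
    return out

def dilate(pixels):
    """Expand every remaining pixel by one in all 8 directions, restoring
    surviving regions to their original extent."""
    out = set()
    for (x, y) in pixels:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                out.add((x + dx, y + dy))
    return out

def remove_speckles(pixels):
    """Erosion followed by dilation (a morphological 'opening')."""
    return dilate(erode(pixels))
```

Applied to the example ofFIG.17, the isolated speckle is removed while the 9-pixel square survives the round trip unchanged.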
For instance, a more relaxed artifact threshold may be applied to designs with a higher proportion of intricate detail, such as those produced by a fine-tipped pen, while a stricter artifact threshold may be applied to designs with a lesser proportion of intricate details, such as those produced with a medium or bold pen. In some implementations of the current subject matter, the average pixel intensity (e.g., greyscale brightness) within a vicinity can be used as a threshold that is applied to individual pixels within that vicinity to determine whether a particular pixel should be considered black or white. The adaptive threshold kernel size used for this approach may be determined using a number of techniques. For instance, the adaptive threshold kernel size can be determined based on the average pixel intensity across every line and column on a theoretical grid overlaid on the traced image. Alternatively and/or additionally, the summation of all black pixels and/or all white pixels can be used as a metric to determine adaptive threshold kernel size. Regions detected by such methods as image artifacts can be flagged for manual and/or automatic removal. Multiple representations of a traced image applying various artifact acceptance thresholds, for instance, as shown inFIGS.7A-7C, can be presented to the user for selection. Alternatively and/or additionally, the algorithm itself may automatically assess the image and decide which representation to select. Once the image is assessed and a specific artifact threshold is determined, the image used to generate the motion plan can be replaced by the representation corresponding to the selected artifact threshold. The motion plan can then be generated based on the selected representation, or the selected representation can be further processed or manipulated before generating the motion plan.
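A minimal sketch of adaptive thresholding using the local mean as the per-pixel cutoff; the kernel size and bias are assumed parameters standing in for the kernel-size selection techniques discussed above.

```python
def adaptive_threshold(image, kernel=3, bias=0):
    """Classify each pixel as black (True) or white (False) by comparing it
    against the mean intensity of its kernel x kernel neighborhood, clipped
    at the image borders. `image` is a list of rows of grayscale values."""
    rows, cols = len(image), len(image[0])
    half = kernel // 2
    out = [[False] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            vals = [image[j][i]
                    for j in range(max(0, y - half), min(rows, y + half + 1))
                    for i in range(max(0, x - half), min(cols, x + half + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = image[y][x] < mean - bias  # darker than vicinity -> black
    return out
```

Because the cutoff follows the local mean, a pen stroke is detected as black even in a dim corner of the image, while a uniformly lit blank region yields no black pixels at all.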
Edge Enhancement FIG.8Ais a diagram illustrating a region of an image where edge enhancement can be applied, consistent with some implementations of the current subject matter.FIG.8Bis a diagram illustrating an expanded view of a portion ofFIG.8A, highlighting two areas for corrections, consistent with some implementations of the current subject matter.FIG.8Cis a diagram illustrating the features ofFIG.8Bafter edge enhancement has been applied, consistent with some implementations of the current subject matter. Similar to the approaches described above for artifact detection, the lines and/or edges of an imaged design may be assessed algorithmically for certain properties including, for example, smoothness, completeness, consistency with surrounding edges, simplicity, and/or the like. Furthermore, one or more corrective algorithms may be applied to these lines and/or edges in order to achieve one or more of these properties (e.g., smoothness, completeness, consistency with surrounding edges, simplicity, and/or the like). Table 1 below provides a description of the algorithms that can be used to assess and/or achieve one or more of the aforementioned properties. TABLE 1: Property and Description of Algorithm and Corresponding Application. Smoothing: Algorithms that analyze for changes that are consistent across an area of pixels and interpolate a path within some degree of confidence. For example, a diagonal line drawn in fine pen by a user may be captured by the optical system as a pixelated path. A smoothing algorithm may be applied to analyze the differential of the change of the line to determine when intermediate points of information may be confidently ignored in favor of interpolating a single line across a given area. See FIG. 24. Completeness: Algorithms that analyze images by row and column to detect when clusters of pixels with particular trajectories are separated only by narrow gaps. The algorithm analyzes the vector of the initial cluster and the vector of another cluster; if the gap is below a particular confidence threshold, the trace algorithm will instruct this area to be filled. For example, a drawn letter “O” that is not fully connected when drawn in pen may be automatically completed using this algorithm. Consistency with Surrounding Edges: Similar to completeness, but clusters do not necessarily have to have the same trajectory, or column or row of data clusters. For example, consistency algorithms may seek to make connections from traced components situated at right angles to each other, may look for consistency of shape, lighting, etc. It may also trigger re-analysis of the adaptive threshold; for example, if the consistency algorithm detects a partial line or shape, but lacks sufficient data, it may prompt the software to re-compute the raw data using a different adaptive threshold. Simplification: Analyzing traced lines for complexity; e.g. for lines intuited as straight, removal of any additional points along pixels within that straight line such that only the outer two points remain. In some implementations of the current subject matter, multiple remedies may be available for correcting an inconsistency, aberration, and/or distortion in an image. Alternatively and/or additionally, where the image is subject to sophisticated analysis, there may be less certainty in the outcome of the analysis (e.g., the identification of a particular inconsistency, aberration, and/or distortion), which can prevent the automatic selection of one or more remedies. As such, instead of automatically performing a manipulation to correct the inconsistency, aberration, and/or distortion in the image, a user of the CNC machine100may be provided, via a user interface for example, a selection of the different remedies that can be applied to correct the image. The user can select one or more of the different remedies offered to the user via the user interface.
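The simplification property from Table 1, keeping only the endpoints of an exactly straight run of traced points, can be sketched as below. This is a minimal illustration for exactly collinear points; production tracers typically use tolerance-based simplification (e.g., Ramer-Douglas-Peucker) to also handle nearly straight lines.

```python
def simplify_path(points):
    """Drop intermediate points that lie exactly on the line between their
    neighbors, so a traced straight segment keeps only its two endpoints."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        # A zero cross product means prev, cur, nxt are collinear.
        cross = (cur[0] - prev[0]) * (nxt[1] - prev[1]) - \
                (cur[1] - prev[1]) * (nxt[0] - prev[0])
        if cross != 0:
            kept.append(cur)
    kept.append(points[-1])
    return kept
```

A straight run collapses to its endpoints, while corner points (where the cross product is nonzero) are preserved.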
Alternatively and/or additionally, the user can create a customized remedy, for example, based on one or more of the remedies presented to the user via the user interface. To further illustrate,FIG.18is a diagram illustrating a user interface displaying different remedies for correcting an inconsistency, aberration, and/or distortion in an image, consistent with implementations of the current subject matter. As shown inFIG.18, the user interface can present four options for correcting an inconsistency, aberration, and/or distortion in a source image. As noted, to correct the inconsistency, aberration, and/or distortion in the source image, the user can choose one or more of the four options and/or create a customized remedy based on one or more of the four options. Alternatively and/or additionally, as shown inFIGS.8A-C, inconsistencies, aberrations, and/or distortions can be corrected automatically, for example, upon detection. In cases where a remedy is selected automatically, each available option can be computed and subjected to a ranking system. This ranking can be based on factors such as, for example, probabilistic data (e.g., indicating which analyses can be applied with the greatest degree of confidence), historical data (e.g., indicating the historical rate of success of each remedy), and/or the like. Image Component Recognition FIG.9Ais a diagram illustrating an image with components corresponding to different types of desired cuts, consistent with some implementations of the current subject matter.FIG.9Bis a diagram illustrating a visual example of machine instructions based on the components identified from the image ofFIG.9A, consistent with some implementations of the current subject matter. There are many kinds of components that the vision-system can recognize to effectively compute sensible machine processing solutions for users.
A design can have components, for example, outer lines, inner lines, and filled in areas (solid or pattern filled to indicate a desired etch/engrave). In one implementation, the image can be searched for such components by a variety of methods that may seek to delineate such areas from one another with confidence and provide intelligent default settings for the most appropriate action for given components. In some implementations, various components can be identified by the image processing software, for example, solid lines, dashed lines, lines of specified thickness or within a particular thickness range, interior lines, exterior lines, areas of solid fill, areas of pattern fill, lines of a particular color, etc. A particular type of machine instruction can be generated based on the identified components, for example, cutting, engraving, scoring (cutting to a specified depth (e.g. less than 25%, 50%, or 75% of the local material thickness) to enable the piece to be easily separated from another portion of the material), perforating, or the like. Any type of cut or machine instruction can be paired with any identified component. The particular pairing can be specified by user-supplied instructions at a graphical user interface or received from another computing device in the form of a data file assigning the pairings. In other implementations, the computer may automatically pair the identified component to the machine instruction based on a default set. For example, all exterior lines can be prepared as cut marks, all interior lines may be scores, and all filled areas may be engraved using gradient-matched settings. Certain components can also default to special analysis and/or processing conditions in order to intelligently vectorize user intent. For example, text recognition can be applied to the image to identify areas containing text that can benefit from special edge enhancements or machine settings. 
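The default pairing of identified components to machine instructions might be represented as below. The component names and default assignments here are hypothetical illustrations, not taken from any shipped product; as described above, user-supplied pairings would take precedence over the defaults.

```python
# Hypothetical default pairing of recognized image components to
# machine instructions; any entry can be overridden by the user.
DEFAULT_PAIRINGS = {
    "exterior_line": "cut",
    "interior_line": "score",
    "solid_fill": "engrave",
    "pattern_fill": "engrave",
    "dashed_line": "perforate",
}

def instructions_for(components, overrides=None):
    """Pair each identified component with a machine instruction,
    preferring user-supplied overrides over the default set."""
    pairings = {**DEFAULT_PAIRINGS, **(overrides or {})}
    return [(c, pairings[c]) for c in components]
```

Keeping the pairing in a plain mapping makes it easy to load alternative pairings from a user-supplied data file, as the text describes.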
Alternatively and/or additionally, object recognition may be employed to detect components within a design and make informed processing decisions as defaults. Such approaches may leverage kernel or swatch-based search methods in conjunction with image databases and recognition keys. Alternatively and/or additionally, detectors may search in the frequency domain to identify regions of commonality and/or difference. This approach may be particularly useful in distinguishing between filled areas and non-filled areas. A histogram analysis can also be conducted on image cross-sections in order to separate the components of a complicated image into smaller portions. In some implementations of the current subject matter, histogram analysis can be applied when performing a targeted search, for example, of something having one or more known characteristics. For instance, histogram analysis can be used to detect a red laser diode when Light Detection and Ranging (LIDAR) is used for material height sensing. Here, a histogram analysis of the red light intensity across the columns and rows of a given area is performed. The intersection of the peaks of red light identified can provide the location of the red dot laser. In some implementations of the current subject matter, a swatch-based search technique can be used to identify and/or locate a known image. The swatch-based search can include cross-correlating images taken by the CNC machine100with the source or image template. The source image may be slid digitally over the image in question to ascertain positional information. The resulting location that returns the highest value of correlation (i.e., the pixels provide the best match) can define the actual location. For instance, one application of a swatch-based search includes using a fiducial marker on the head160of the CNC machine100for homing the head160to a calibrated position within the CNC machine100.
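A swatch-based search of the kind described, sliding the template over the image and scoring each offset, can be sketched with a sum-of-squared-differences score; this is an assumed simplification, since real systems often use normalized cross-correlation to be robust to lighting changes.

```python
def locate_swatch(image, swatch):
    """Slide the swatch over the image and return the (row, col) offset
    with the smallest sum of squared differences, i.e. the best match."""
    ih, iw = len(image), len(image[0])
    sh, sw = len(swatch), len(swatch[0])
    best, best_pos = None, None
    for y in range(ih - sh + 1):
        for x in range(iw - sw + 1):
            ssd = sum((image[y + j][x + i] - swatch[j][i]) ** 2
                      for j in range(sh) for i in range(sw))
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

Applied to a camera frame with a template of the head's fiducial marker, the returned offset gives the marker's position for homing.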
In some implementations of the current subject matter, a frequency based search can include convolving an image with a kernel and using the resulting data to identify the high frequency components of the image. For example, if the traced image is a simple checkerboard pattern, regions of similarity and/or difference can be identified by looking for sudden changes along the y axis (i.e. high frequency jumps from black to white would be observed). By convolving this information with a kernel that derives the y-axis edges, a map of intensity and/or likelihood of the edges along the y-axis can be created. The same technique can also be applied along the x-axis to create a map of intensity and/or likelihood of edges along the x-axis. The map for the x-axis can be combined with the map for the y-axis to build up a picture of shape. According to some implementations of the current subject matter, various components of text, objects, foreground and background imagery, in addition to how they are situated in relation to one another, can also be analyzed, for example, using one or more machine learning algorithms, to gain a probabilistic understanding of user intent. For instance, probabilities can be assigned to various outcomes and prioritized decisions can be made based on these assigned probabilities.
Fine-Tuning Material Outline with Adaptive Thresholds
Data from the image that is identified through various analyses as “background,” may be temporarily or permanently cropped to facilitate downstream machine vision-driven image processing activities, reducing processing overhead and required memory resources. This cropping may occur in bulk and/or in stages. As used herein, bulk cropping may refer to a general, broad-brush crop that is performed at a faster computational speed.
By contrast, staged cropping can refer to a higher resolution crop in which multiple adaptive thresholds can be applied to different local regions within an image and/or to identify areas within an image that can be traced as separate objects as well as moved and/or manipulated independently. Furthermore, this cropping may include the use of adaptive thresholding, whereby different confidence settings are applied to different regions of an image based on some pre-entered or vision-detected knowledge of user intent. Different thresholds may be applied to fine-tune regions where relative acceptance has been established, but where adaptive thresholding may be useful in determining where the material outline is. For example, areas deemed with high confidence to be background may be eliminated early to allow areas of lower confidence to be the focus of adaptive thresholding approaches. This has potential utility both in better material outline detection and in greater downstream image processing accuracy. An image can be analyzed to detect one or more regions of the image that may require special adjustments relative to other parts of the image to deliver an optimal output. For example, regions designated as background components of a design may require different analyses compared to foreground design elements to deliver optimal interpretation of user intent for improved downstream processing. Similarly, the dark areas of an image can benefit from different thresholding than lighter regions. In one implementation, this can result in user-configurable or automated tools that adapt and adjust to produce optimal downstream processing settings.
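A minimal sketch of per-region adaptive thresholding, assuming each local block is thresholded against its own mean intensity; the block size and offset parameter are illustrative assumptions:

```python
import numpy as np

def adaptive_threshold(image, region_size=4, offset=0.0):
    """Threshold each local region against its own mean, so dark and
    bright areas of the image each get an appropriate cutoff."""
    mask = np.zeros(image.shape, dtype=bool)
    h, w = image.shape
    for r in range(0, h, region_size):
        for c in range(0, w, region_size):
            block = image[r:r + region_size, c:c + region_size]
            mask[r:r + region_size, c:c + region_size] = block > (block.mean() + offset)
    return mask

# A dark half and a bright half, each with faint internal detail that a
# single global threshold would wash out in one half or the other.
img = np.zeros((4, 8))
img[:, :4] = 0.1
img[::2, :4] = 0.2
img[:, 4:] = 0.8
img[::2, 4:] = 0.9
mask = adaptive_threshold(img, region_size=4)
```

Here the faint detail rows are recovered in both halves, whereas a single global threshold at the overall mean would mark the entire dark half as background.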
An example of this can be in applications relating to bitmap processing, where a greyscale image can be analyzed and the most appropriate machine settings selected to produce a compatible color and/or shading range that the laser can provide for the given design, taking into consideration constraints of the design, the material, the laser, the duty cycle of the machine or other machine components, etc. For example, to convert intensities in a grayscale image into machine instructions for execution by the CNC machine100, each pixel intensity can be normalized to a value between 0, which may correspond to white, and 1, which may correspond to black. Machine instructions can be generated based on these normalized pixel intensity values. For instance, the machine instructions can cause the CNC machine100to cut to different depths within the material140in accordance with the normalized pixel intensity value. Contrast alterations, for example the use of Global Histogram Equalization, may be effective methods for improving detail and downstream processing results when users are working with grey images with a limited color range. The adjustment of contrast may be performed at the user's direction or via automated vision-based decisions. For example, the system may detect local areas that differ in their maximum and minimum contrast values and seek to stitch together regions in ways that better align with the user intent. Here, separate vector areas can be generated for each sub-divided region of an image such that different analyses and/or thresholding can be applied to each sub-divided region.
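The normalization described above, where white maps to 0 and black to 1, can be sketched as follows; the scaling of the normalized value to a laser power setting is an illustrative assumption:

```python
def pixel_to_power(pixel, max_power=100.0):
    """Normalize an 8-bit grayscale intensity (255 = white -> 0.0,
    0 = black -> 1.0) and scale it to a laser power setting."""
    normalized = 1.0 - pixel / 255.0
    return normalized * max_power

assert pixel_to_power(255) == 0.0    # white: no effect
assert pixel_to_power(0) == 100.0    # black: deepest cut
```

The machine instructions generated from these values would then cut to different depths in proportion to the normalized intensity.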
Differing levels of sophistication may be applied to thresholds that are used in imaging; these may range from the simplest use of a raw image, or a raw image with a set threshold, through to solutions that can involve some level of user input (e.g., user toggles plus or minus settings at a user interface to adjust particular threshold values until they are satisfied with the visual result), or whereby such adjustments are made via changing the contrast either by hand or via automation based on some useful knowledge (e.g. desirable histogram properties). This implementation also extends to machine learning approaches, which, for example, may employ sampling methods, or leverage local or global repositories of historical image and print data to generate models of intelligent decision-making with respect to interpreting user intent for a traced image. For example, when faced with a new design, the system may apply machine-learned probabilistic models that intuit the likely image category and components and then recommend the most likely settings to derive printing success based on a database of historical prints that rated, reviewed, or otherwise measured the success of the print based on pertinent parameters. In some implementations of the current subject matter, analysis of a raw image may include a priori knowledge of past traces that have been successfully processed, for example, by the CNC machine100, other CNC machines, and/or networks of CNC machines that include and/or exclude the CNC machine100. The machine processing instructions generated upon completion of a trace may be based on settings that were previously applied with success to a past trace that is determined to be similar. The proposed plan for machine processing may be visually presented to the user who may then be offered an opportunity to manually adjust and/or manipulate different components of the proposed plan (e.g., toggling thresholding values up and/or down to improve planned output).
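One way the user adjustments mentioned above could feed back into machine defaults is sketched below; the update rule and its learning rate are purely illustrative assumptions, not an algorithm described in the text:

```python
def updated_default(current_default, user_adjustments, learning_rate=0.5):
    """Nudge a machine-derived default toward the mean of the values users
    actually settled on, by a configurable fraction of the difference."""
    mean_adjusted = sum(user_adjustments) / len(user_adjustments)
    return current_default + learning_rate * (mean_adjusted - current_default)

# Users consistently raised a threshold default of 50 to 70, so the next
# default moves halfway toward their choice.
new_default = updated_default(50, [70, 70, 70])
```

Inputs such as print success rates or post-print ratings could weight the adjustments rather than treating them all equally.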
If the machine-derived settings for a particular category of traced image are being consistently adjusted by an end-user or multiple end-users, this may indicate the machine-derived parameters are suboptimal. Accordingly, the difference between the machine-generated output and the user-adjusted output may be used to derive future sets of default machine parameters, thereby improving overall user experience. Such approaches may also further identify sub-groups of image types and enable their processing to be automatically optimized. The likely inputs used in determining such adjustments may include print success rate based on machine outputs, user opinion (e.g., five-star post-print rating system), print failure rate and/or cancellations, other user feedback, deviations from the machine default processing settings, and/or the like.
Selection of Material Detection Methods
The method for material outline detection for the material140placed within the CNC machine100can vary based on the diverse surface or imaging properties of that material140. For example, methods based on image derivative analysis may be less effective for transparent objects (e.g. glass) or objects with edges that closely match those of their surroundings (e.g. black acrylic on a dark bed). Therefore, the method for material detection that the CNC machine100chooses to apply to a given case may be a parameter that is user-configurable or machine-selected. In one such implementation of this concept, a material barcode scanner (or a scan performed by an onboard camera) can be used to identify certain properties of the material140(e.g. a transparent material less well-suited to edge detection, but of known dimensions based on the barcode) laid in the material bed150of the CNC machine100, or may even individually identify the piece of the material140including all historical processing records. In some implementations, when the outline of the material140is already known (e.g.
from a barcode scan that associates the scanned information with a specific material outline), the material outline can be defined for use with further operations as described herein. The barcode scan is only one example of specifying a material outline without edge detection. Other methods can include, for example, a user entering data identifying the material140placed in the CNC machine100, a user manually defining (or selecting from a list) a specific material outline (e.g., providing dimensions or coordinates that define the material outline), and/or the like. Once the material outline has been captured or otherwise determined, the outline of the material140can be displayed to the user. The material outline can be used as an input to automated layout algorithms that attempt to place all parts of a desired pattern or cuts/engravings within the material140confines. The material140outline can be used as a set of constraints in a virtual simulation that would allow the user to drag around parts that would collide with the edges of the material140, so the user could experiment with positioning of parts without dragging them off the material140. Some implementations can include counting discrete chunks of present material to provide feedback (e.g. visual alerts, audio alerts, electronic messages) to the user; for example, when a design requires two different materials, but the CNC machine100only detects the presence of a single material type. Also, by identifying the area of the materials, the CNC machine100may inform the user if the areas, as positioned, are either too small to be safely cut, or need to be cleaned out.
Project Preview
FIG.10is a process flow chart illustrating generating a project preview on a distributed computing system, consistent with some implementations of the current subject matter. In one implementation, a user can generate or supply patterns to be converted into a motion plan that the CNC machine100can execute.
A pattern can be any visual representation of what it is that the CNC machine100is to cut, print, form, etc. The pattern can be a drawing, image, outline of a part, etc. The process can include: starting with a pattern, manipulating the pattern into a modified pattern, converting the modified pattern into a motion plan, and executing the motion plan by the CNC machine100. At1010, the pattern can be supplied, for example by the user, to a preview device in the form of a pre-existing image file. The image file can be in any file format, for example, a JPEG, PDF, bitmap, Scalable Vector Graphics (SVG), etc. The preview device can be, for example, a mobile device, laptop computer, tablet computer, desktop computer, CNC machine100, etc. The image file can also be the same as a source file, and vice versa. Here, we distinguish between an image file and a source file by generally assuming that the image file is not in a finalized state to be used as a source file. Consistent with implementations of the current subject matter, an image file can be created from an image captured by one or more cameras that view the working area of the laser CNC machine. A machine file can subsequently be generated from the image file, optionally based on user input or manipulation of the image file received via a user interface. Also, image files corresponding to the material140on which the pattern is to be cut can be specified by, for example, the user, imaged by the cameras in the CNC machine100, identified from the watermark, barcode, etc. If images of the real material140are not being acquired by the cameras, then images of the material140can be accessed from a library or other database that stores images of materials. If the material140can be identified, for example by a barcode or by image recognition, then specific post-fabrication material140properties can be accessed.
For example, a light colored wood may turn dark when a certain amount of laser power is applied; this can be determined for a more accurate preview of what the material140will look like post-fabrication. In one implementation, the material140is uniquely identified e.g. with a barcode and photographed at its point of distribution. The material140is then obscured with a protective layer. When the material140is inserted into the CNC machine100, instead of showing the protective layer, the machine determines the material140's unique identifier and retrieves the photograph, showing what the material140looks like underneath the protective layer. If called upon to preview a project with that material, it will use the stored image of the material's appearance in the preview. At1020, the pattern can, if needed, be converted to an intermediate format, for example, an SVG format, and sent to a web browser or other software program capable of manipulating SVG files. The received SVG file (or other appropriate file type) can be rotated, expanded, translated, or the like, by the native software in the receiving computer or by a remote server, and then either those transformations or the resulting transformed file may be returned to the server. The image files for the material140can be combined with the pattern to provide a preview of what the cut will look like on the material140. Additionally, the type of cut can be selected by the user and the preview updated accordingly. For example, the user can specify a cut depth for each point on the pattern. The cut depth can range from, for example, zero (or no effect) to 100 indicating a cut through the material140. Again, a library or database of recorded cuts for the given material140can be accessed in order to show what the cut will look like, for example, with the specified depth, on the specified material140. 
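The depth specification described above, from 0 (no effect) to 100 (a cut through the material), can be mapped to a physical depth for previewing; this helper and its names are illustrative assumptions:

```python
def depth_to_mm(depth_pct, material_thickness_mm):
    """Convert a 0-100 cut depth specification into millimeters, where
    100 corresponds to cutting all the way through the material."""
    if not 0 <= depth_pct <= 100:
        raise ValueError("depth must be between 0 and 100")
    return material_thickness_mm * depth_pct / 100.0

# A depth of 50 on 6 mm stock previews as a 3 mm deep cut.
assert depth_to_mm(50, 6.0) == 3.0
```

A library of recorded cuts at each depth on the given material could then be indexed by the computed depth to render the preview.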
Additional features of the preview can include showing effects of laser power (scorching), varying the spot size of the laser, etc. For applications involving 3D printing, the preview can include, for example, displaying the pattern as it is built up, showing the necessary buttressing to support the pattern, etc. Notes or other instructions can be generated during the preview and appended as metadata to the source files used to generate the motion plan, for example, comments, instructions, etc. Some metadata can be converted into a portion of the motion plan while other metadata can be stored for later reference and associated with the motion plan. At1030, the manipulated or updated preview can be used to modify or update the source files, for example the SVG files, used to generate the motion plan. Data corresponding to the preview can be transmitted to a remote computing device, for example, a server, mainframe, cloud network, etc. Once transmitted, the manipulated preview can be converted back to its original file type, another file type, or to a motion plan. At1040, the motion plan can be transmitted to one or more computing systems, CNC machines, etc. The motion plan can then be executed or modified by the local machine as needed. Additional images of the working area can be captured in conjunction with causing a change in operation of a component of the computer numerically controlled machine. The change in operation can include changing a light output of lights between taking the image and the additional images. The position of the component can be changed between taking the image and the additional images, and the camera can be vibrated while taking the image and/or the additional images. The improved image can include sharpening, correction of lighting artifacts, averaging, edge detection, and noise elimination relative to the image. The images can be generated with differing lighting conditions created by light sources inside the interior space.
The light sources can include light resulting from operation of the laser.
Material Preview
FIG.11is a diagram illustrating a collection of 2-D patterns1110previewed as a three dimensional object1120, consistent with some implementations of the current subject matter. The cameras can capture the appearance of the material140in the CNC machine100before machining. For example, the system can display to the user what the final product will look like, rendered as a three dimensional object1120. The images of the material140can serve as a texture map, which can then also be rendered onto the three dimensional object1120. In addition, the three dimensional object1120can be rendered and rotated using various projections (perspective, orthographic, etc.). This means the user can accurately see what the final product will look like with the material140currently in the CNC machine100. Further, if there are defects in the material140, the user can see where they will appear on the three dimensional object1120. If, in software, the user repositions the location of the cuts on the material140, the result of the material140preview can change to reflect the repositioned cut locations. Among other possible benefits, this feature can allow the user to optimize the patterns to have poorer quality areas of the material140hidden from view, such as on an interior surface of the assembled product, or outside of the patterns entirely. It also allows the user to preview their creation using different materials, to help in material selection. The user can also indicate the position of the cuts across multiple materials140present on the material bed150. For example, the user can place a piece of maple and a piece of walnut plywood on the support, then use the image of the material140on a screen to arrange the location of the cuts such that some pieces are made out of maple and some are made out of plywood.
In another manner, a user can select some shapes in the pattern to be cut from each type of material140depending on the appearance or physical properties desired. Different power levels for the output of the head160and head speeds can result in different appearances of the material140during processing. For example, the head160moving at different speeds can cause a burn pattern left behind by a laser to vary, the roughness of a cut made by a milling bit to vary, etc. The user can preview what the material140will look like after processing by using images captured from, for example, a previous calibration step. The appearance of a type of wood marked with 20% of maximum power during calibration, for example, can be used to predict what an engraving at 20% power will look like. The predicted appearance can be shown on a graphical display to the user to aid in project design or in the selection of settings to use. The cameras can also capture the appearance of the material140after being cut, engraved, turned, printed upon, etc. These captured images, accessed either from an image library or acquired from test cuts on the actual material140, can provide an accurate image of the response of the material140to machining using particular output parameters. For example, test cuts at a given power, head160speed, bit rotation, or the like, can be performed in a scrap area of the material140to provide examples of how the material140will appear if cut with those same settings. Similarly, the image of the material140after being cut may be used to assess the material's new position following some interaction with the user. For example, a large design whose size is roughly twice that of the material bed150can be completed in the form of two sequential cuts with a pause between them in which a user, or some material translating mechanism associated with the CNC machine100, repositions the material to expose further un-cut space. 
The camera can then determine from what point the cut was left off and provide an image that includes the position at which the cut should resume. If the material140is recognized from a library, image analysis, or a previous usage, pre-calculated or stored settings can be used to provide a desired result. The identification of the material140from a library can be accomplished in several ways. First, barcodes or other markings can be used to identify the type of material140. These can be visible or can be invisible to the naked eye and revealed only with an infrared camera and illumination, or under ultraviolet light provided by appropriate lights, e.g. UV LEDs. They can also be printed in standard visible ink. Text can be printed on the material and recognized with optical character recognition software. The camera may also detect incidental markings, such as the brand of material140on a protective sheet. Second, the cameras can use image recognition to identify the material140. For example, the grain structures of maple, cherry, and walnut are all distinctive. Distinctive colors and patterns in the material140can be imaged and compared to known material140examples stored in a local memory of the CNC machine100or stored on a remote computer.
Using Marks and Drawings to Indicate Cuts
Scanning by the cameras can also be used for copying the pattern of an existing 2D object. In one example, the user can make a marking on a piece of material140using a black pen. They can then place the material140in the unit. The camera can scan the image and isolate the region with the black pen, using the image to create a source file. The system can then generate a machine file and a motion plan, instructing the machine to move into position, move the head across the indicated region along a calculated path, activate the engraving function, deactivate the function, and finish.
The result would be that the material140is engraved in the same location and with the same marking that was applied with ink. Different colors of ink can be used to indicate different operations, for example a red line might be used to indicate cutting while a brown line indicates a light engraving. Functions can be specified in software between the scanning step and the creation of the machine file, for example the user might be asked if the black marks should be cut or scanned. Indicators other than ink might be used, for example the user can cut a paper snowflake and use the machine vision system to scan its perimeter and generate a source file from the image of the perimeter. In all these examples, the source file can be saved and modified, so the scanned image could be moved, resized, repeated, or preserved for later. In another implementation, the cameras can detect a pattern on the material140that corresponds to a design stored in memory, and then the CNC machine100can machine the stored design onto the material140. Note that this is different from alignment marks, which can be used when the motion planning software is told where the alignment marks are and what the corresponding design is. In this case, the cameras can image the material140and the alignment marks and determine what the design is from the images. In another example, the cameras can identify a piece of scrap left over from a previous operation by imaging the cut marks present as a result of the previous operation, or created intentionally on the scrap as alignment marks anticipating that the scrap could be used with further processing. In one implementation, material140can be inserted into the CNC machine100that has a certain pattern, for example, a red square circumscribing a filled in black circle. The material (and the pattern) can be imaged by the cameras.
A particular operation can be selected based on the image, for example, red lines can be converted into vector cut paths and black areas can be converted into raster engravings. A motion plan can be generated based on the selected operations, for example, cut out the square and engrave the circle. The CNC machine100can then execute the motion plan to perform, for example, the cutting and the engraving. Different color marks can indicate different cutting operations—for example, a red line might indicate cut through, while a black filled in area might indicate to etch. A sheet of paper, or other suitable overlay, containing a pattern or picture can be fastened to a piece of material140, then that pattern or picture can be engraved directly on the material140, through the overlay. This form of design can be applied directly to the target material140with the overlay presumably destroyed by the machining operation. Alternately, the overlay can be removed before the operation commences. In either case, the pattern can be saved for later, modified and/or repeated. The type of output from the CNC machine100can vary with the color, line thickness, line type, etc. As one example, a blue line could indicate to etch at 40% power, and a green line could indicate to etch at 60% power. A user can also put a drawing in the CNC machine100separately, allow the camera(s) to scan the drawing, and then insert a separate piece of material140to be cut or engraved. The first pass of a scanning camera can scan the image while the second pass, with the head160, can cut the material140. The system can use a screen, projector, or other visual feedback unit to generate virtual overlays of programmed cut lines on images or video of the actual material140. Also, images gathered previously, for example, during a calibration pass where test cuts are made, can allow the preview of the cuts to appear realistically.
For example, the actual material140texture and the typical “V” shape of the cut or singeing at the cut edges can be displayed when previewing the product. The user can also opt to arrange pieces between multiple materials that may be present or rearrange them to take advantage of material properties, for example aligning pieces with the grain of the wood. Similarly, the user can insert both the drawing and the material140to be cut at the same time but in different locations on the machine bed, and indicate that one is the source and the other is the destination. In this example, the user ‘copies’ the image from one piece to the next. The user can optionally resize or otherwise modify the drawing in software before machining. For example, the user can specify that the destination is to be magnified, rotated, and/or translated, relative to the source. In the example of a cut into transparent material, such as glass or clear acrylic, the drawing may also be placed visibly on the underside of the material, thus minimizing these interactions.
Generating Motion Plans on Distributed Systems
FIG.12is a process flow chart illustrating generating a motion plan for a CNC machine100on distributed computing systems, consistent with some implementations of the current subject matter. At1210, source files can be generated by providing an electronic characterization of an object for manufacture by the CNC machine100. The source files can be photos, screen captures, electronic tracings, CAD drawings, etc. The source files can be sent to the first computing system by, for example, uploading to a server, etc. At1220, source files can be received by the first computing system. At1230, a machine file can be generated at the first computing system. The machine file, as described above, can include steps describing operations of the CNC machine100. At1240, the machine file can be converted into a motion plan by the first computing system.
Also as described above, the motion plan can be a set of instructions for the CNC machine100. At1250, the motion plan can be transmitted to the CNC machine100over a network.
Variable Cut Depth
Because the distance between the head160and the material140is generally fixed, there are a limited number of ways in which cuts of different depth can be made. The depth of a cut can be varied by, for example, (i) varying the laser power, (ii) varying the speed with which the laser moves, (iii) varying the focal length of the laser, or (iv) a combination of these. By varying the laser power, for a given dwell time at a particular location, the depth of the cut can be varied. For example, to a simple approximation, if the laser power is doubled, then in a given time period, twice as much material140can be expected to be ablated during the cut. There can be factors which complicate this kind of simple approximation, for example, debris, material140properties, etc. One complicating factor is that the power density drops off with distance beyond the focal point, as the deeper material140is farther from the lens and thus out of focus. The speed at which the laser moves can also be varied as an approach to vary cut depth. This may be particularly useful for manipulating material to have a desired finished appearance or effect. For example, operating at faster laser speeds while maintaining constant application of power may introduce scores, rather than cuts, into the material surface. The focal length of the laser can also be varied in order to provide a constant, or known, power density at a surface with varying height. The focal length of the laser can be varied by adjusting focusing optics inside the head160to provide a cut specified by the motion plan. Also, the cameras can monitor a laser's spot size, either the primary cutting laser or a secondary one, as described above, to maintain a specified focal distance for the most precise cutting.
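The first-order relationships in this section (depth scaling with power and dwell time, and average power set by the laser's duty cycle) can be sketched numerically; the ablation-rate constant is a made-up illustration, not a real material property:

```python
def approx_depth(power_w, dwell_s, ablation_rate_mm_per_ws=0.01):
    """Simple approximation from the text: doubling the laser power (or
    the dwell time) doubles the material removed."""
    return power_w * dwell_s * ablation_rate_mm_per_ws

def average_power(peak_w, duty_cycle):
    """With the laser alternating on and off, the average power output is
    the peak power times the fraction of time the laser is on."""
    return peak_w * duty_cycle

# Halving the speed (doubling dwell) matches doubling the power, and a
# 25% duty cycle at 40 W peak averages 10 W.
assert approx_depth(80, 0.5) == approx_depth(40, 1.0)
assert average_power(40, 0.25) == 10.0
```

In practice the approximation breaks down with depth, since material below the focal point receives a lower power density, which is why focal length becomes the third control variable.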
Alternatively, the cameras can monitor the spot size during the cut as a measure of the depth of the cut. For example, if it is known that at the focal length a cut was to have a certain depth, then the cameras could monitor the spot size to detect when it has reached the expected size given the focal length. Once the spot size is the expected size, then it is known that the cut is at a depth defined, in part, by the focal length. Because the laser power and focal length are generally independent parameters in laser operation, they can be varied together to widen the operating space (e.g. the range of potential visual and/or physical outcomes created on a material) of the CNC machine100. In a simple approach without feedback, a first pass may be engraved to a depth of 1 mm by focusing the laser on the surface of the material140. Then a second pass could be taken removing another 1 mm, this time focusing the lens 1 mm lower at the new top surface of the material140. In an iterative approach with feedback, there can be multiple passes by the head160and after each pass the depth of the cut can be measured by the cameras. The CNC machine100can then use the measured change in the material to calculate what combination of power variation and focal length settings should be used for the next pass. The calculation can be optimized for the type of material140, cut time, laser power limits, etc. In one implementation, the average power output of the laser can be set to be below a predefined value. The duty cycle of the laser can be varied to provide alternating periods of the laser being on or off. In one implementation, the duty cycle of the laser can be varied in order to achieve the desired average power output.
Cut Verification
The optical system can image the cut and compare the images with those expected from a cut made on a material140with known material140properties. The comparison can then be used to determine laser power (or other cutting parameter).
The comparison can be based on image features (lightness/darkness of engraving), a through cut that should have been an engraving, etc. Alignment can be confirmed by cutting a pattern that should have a particular shape, imaging the cut, and comparing it to an expected image. The motion plan can be updated to correct for any discrepancies and an alert can be sent to a user or other connected computing system that maintenance of the CNC machine100is required. Also, calibration cuts can be used to align one or more elements of the optical system. For example, a cut pattern of a predefined size can be made and one or more optical elements adjusted to confirm that the field-of-vision of the optical element conforms to the cut pattern. A cut pattern can be designated to go onto imaged portions of the material140based on imaged features of the material140. The designation can be made by user-input or according to pre-defined instructions. For example, the cameras can distinguish separate materials, colors, textures, etc. present on one or more materials in the CNC machine100. The motion plan can associate the cut pattern with a particular material140or portion of the material140based on the imaged features. Once associated, the cut pattern can be executed on the identified portion of the material140. This can allow, for example, a user to specify a particular cut to go on only certain color portions of a material140, etc. Project Preview (II) A more general application of a material preview can include previewing the appearance of a finished project. The finished project can include the material, and can also include cuts, etches, engraves, etc., that the user wants to be performed by the CNC machine. In some implementations, a machine file can be created from an image.
The machine file can include motion and cutting instructions for a laser CNC machine to cause the laser CNC machine to create one or more cuts on a material located within a working area of the laser CNC machine. The image can be generated by a camera in the CNC machine, imported from a computing system, generated by an image manipulation program based on user input, or the like. In some implementations, distortions in the image, for example those introduced by optical elements, can be removed by any of the methods described herein. For example, different colors may project onto different chip pixels due to lens refractions, and consequently must be compensated for by aligning the color channels, or using a single channel for dedicated vision tasks (e.g. QR-code recognition). Also, other implementations can remove distortions or generally improve image quality by, for example, adjusting a white balance to compensate for the overall light level. The light level can result from, for example, ambient light (e.g. light from outside the CNC machine), internal lighting, or a combination thereof. These lighting effects can also be corrected; for example, the effect of exterior lighting can be removed, as described herein. Digital manipulation of the images can be performed, for example, brightening, sharpening, converting color images to greyscale, or the like. Lighting within the CNC machine can be adjusted to achieve a desired balance, brightness, shadowing, or the like. Thresholding can also be performed to capture only areas that are sufficiently bright or dark to generate a useful image. High-pass filters may be used to remove gradients of light or color. Low-pass filters may be used to remove noise, for example from the image sensor. The previewing can also include generating a preview that includes a representation of at least part of the material.
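The white-balance adjustment and thresholding operations described above can be sketched in miniature as follows. This is an illustrative simplification operating on plain pixel lists, not the machine's image pipeline; the function names and the reference-color convention are assumptions.

```python
def white_balance(pixels, reference, target=255.0):
    """Scale each channel so an imaged reference of known color maps to the
    target level, compensating for the overall light level. `reference` is
    the measured (R, G, B) of the known reference; an illustrative model."""
    gains = [target / max(c, 1e-9) for c in reference]
    return [tuple(min(255, int(round(v * g))) for v, g in zip(px, gains))
            for px in pixels]

def threshold(gray, cutoff=128):
    """Keep only areas that are sufficiently bright to generate a useful
    image: values at or above the cutoff become white, the rest black."""
    return [255 if v >= cutoff else 0 for v in gray]
```

For instance, a nominally white reference imaged as (200, 255, 100) yields per-channel gains that brighten the red and blue channels of every pixel accordingly.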
The representation can also include an overlay of cuts superimposed on the material to indicate an appearance of the material after processing of the material by the laser CNC machine using the machine file. The representation can be, for example, a graphical computer file, a graphical output displayed on a computing device, or the like. There can be several implementations of methods or systems that describe how the image is generated and what operations can be performed based on the image. In a first implementation, the image can be of an item and/or the material in the CNC machine, where the item can serve as the basis of the CNC machine operations. In a second implementation, the image can be of a material in the CNC machine, where the image acts as a backdrop and a preview of user-specified CNC machine operations can be displayed. In one example of the second implementation, the machine file may be simulated graphically on a display interface. The simulated machine file can be represented by graphical elements that can represent the path of a cut, a type of cut, a depth of cut, an intensity of a laser, a speed of a tool moving across/over the material, or the like. As one example, a partially-opaque dot can be placed on a display interface for every millisecond that the laser is active. As a result, the user can see areas of slow motion as bright lines (with many overlapping translucent dots) and fast motion as dim lines (with fewer dots, thereby letting more of the underlying material show through). The machine file may be run in real-time, sped up, or processed as a batch at any time after previewing the result of executing the machine file. General Discussion of Machine Instruction Based on Captured Images Some implementations described herein relate to combining source images that represent marks, cuts, or other operations of a CNC machine with data on the material in the CNC machine. 
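The dot-per-millisecond preview described above, where slow motion accumulates many overlapping translucent dots and fast motion fewer, can be sketched as follows. The function name and the straight-segment simplification are assumptions made for illustration.

```python
def preview_dots(segment_mm, speed_mm_per_s, dots_per_s=1000):
    """Place one partially-opaque dot per millisecond of laser-on time along
    a straight segment. Slow moves produce more overlapping dots (rendered
    as bright lines); fast moves produce fewer (dim lines)."""
    (x0, y0), (x1, y1) = segment_mm
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    n = max(1, int(round(length / speed_mm_per_s * dots_per_s)))
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n)
            for i in range(n + 1)]
```

A 10 mm segment traversed at 100 mm/s takes 100 ms and so receives about 100 dots; doubling the speed halves the dot count, dimming that line in the preview.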
The CNC machine can then operate on the material to provide a desired reproduction or other effect on the material, based on the source image. First, in some example implementations, a camera, for example one or more cameras integrated into the CNC machine, can act as the image capture device for scanning both objects containing the source image and also the material. The cameras, as discussed elsewhere, can be wide-angle cameras (e.g. the lid camera), narrow field-of-view cameras (e.g. the head camera) or any other combination of cameras. These cameras, among other things, can provide not only data on the source image, but additional features of the source and/or target material. For example, the cameras can be used to determine surface features, such as depth, color, vertical surface angles, or the like. In this way, the CNC machine can function as a smart, special-purpose scanner that is optimized for creating designs based on laser cutting/engraving. Second, in other example implementations, the source image can originate from devices other than the cameras in the CNC machine. For example, the source image can be a user-supplied photograph, CAD file, bitmap, jpeg file, SVG file, or the like. The source image, in this example from a source besides cameras in the CNC machine, can be overlaid or previewed against images of the material in the CNC machine. By referencing the points in the source image to the points on the material, the desired features of the source image can be cut/engraved onto the material according to user-supplied instructions or pre-established CNC machine settings. Third, in yet other implementations, a combination of the two methods above can be performed. Here, the source image and the image data on the material in the CNC machine can be combined, and then can optionally be seen with preview software.
In the preview, the source image can be modified according to user-supplied instructions and features can be specified for particular CNC machine operations (e.g. cutting/engraving). In this way, the cut-file (containing the instructions for CNC machine operations) can be superimposed on an image of the material in the CNC machine, on an image of a source object, or both. The CNC machine can then operate on the material/source object as instructed by the cut-file. The unique information that the machine possesses of not only the source image's appearance but also its position in space (the location in the bed, the thickness at its various points, etc.) makes it possible to position the cut-file precisely on the source image so as to cut out or engrave directly over the drawing shown. This extends to implementations involving full 3D-dewarping of the input image into, for example, an orthographic projection. Features of these implementations are expanded on in more detail throughout the application, and in particular in the following sections. Optimizing Image Capture for Downstream Processing in CNC Machine FIG.13is a diagram illustrating the removal of distortions from an image, consistent with some implementations of the current subject matter. Capturing and formatting images to facilitate generation of machine files for downstream processing by the CNC machine100may be accomplished through a variety of means. In one approach, additional imagery can be requested for the purpose of gathering sufficient or optimal information for cut-file creation after a first image or set of images is captured. In another approach, capturing a single image channel can improve image quality at the corners of an image, particularly when using wide-angle cameras whose colors are otherwise smeared due to different light wavelength refractions.
Also, different imaging quality compression levels can be used to provide either lower quality images which transfer at a faster rate and/or higher quality images which transfer at a slower rate. In some implementations of the current subject matter, the cloud-based nature of the product has been robust in supporting lossless compression, thereby avoiding any reductions in data quality. However, one or more compression techniques may be employed for some computational tasks that require excessive resources, for example in processing power and/or time. The decision to employ compression may be determined by the client and/or the server, using techniques such as, for example, querying the user in the interface, making an automatic determination based on current traffic patterns, deciding based on the amount of payment provided by the user, and/or the like. Methods for collection of additional information can include, for example, moving the camera, increasing the requested resolution from the camera, changing camera settings (e.g. reducing gain to capture additional detail on an area of interest such as a bright area that has oversaturated the sensor), or changing internal machine settings to make the environment conducive to capturing the desired imagery (e.g. turning on an internal machine-controlled source of illumination). The requirement for additional imaging information may be determined by a user requesting better data, inferred, for example, from a user requesting an operation that would require better data for optimum results, or determined by a range of automated means including analysis of data from cameras or other sensors, or the like. A range of image corrections and manipulations may be applied for the purpose of optimizing imagery for downstream processing applications. Such corrections can be made starting with raw image data from the camera (e.g. ‘Camera Raw’ files) or standard picture files appropriate for human viewing (e.g. JPG files).
In some implementations, a known or reference image can be used as the basis for correcting image elements; color correction is one application that exemplifies this approach. For example, if an imaged object is of a known color, techniques such as white-balancing based on this reference image can be applied. The color of a reference image can be known via a variety of methods, for example, including user-derived inputs, barcodes or identifiable features that indicate the nature of the material, the reference object forming part of the CNC machine100(e.g. the housing having a known color), or the like. Other examples of image correction can include selecting a threshold level for black and white image conversion, high-pass filtering or adaptive thresholding that can locally adapt to conditions in the CNC machine100in order to capture objects (for example dark or shadowed objects), high-dynamic-range (HDR) imaging such as merging multiple images with different camera settings to improve the capture of information, and application of smoothing techniques for enhanced visual results. The need for performing image correction or manipulation can be, for example, requested by a user, inferred from a user action, or determined by a range of automated means including analysis of data from cameras or other sensors. In capturing and preparing images for downstream processing by the CNC machine100, users can benefit from the ability of the software, for example that generating the machine file or the motion plan, to disregard or eliminate detection of particular color channels. This feature can be broadly useful for users wishing to mark materials or images for non-imaging or non-printing purposes. For example, users may wish to make annotations in a particular color pen for their own purposes that may be independent of the designs, images or other elements intended for sensor capture; this feature may have particular utility in aiding creation and construction of complex articles.
For example, a user may pre-print a template in one color, then use that template to create artwork in another color. Upon imaging, the first color might be discarded and only the second color retained. Elimination of a color channel may be performed automatically (e.g. hardcoded in the software), in accordance with a set of conditional rules (that can be configured by a user or otherwise supplied to the CNC machine100or other connected computer), or upon explicit instruction from the user received at the CNC machine100or provided as additional instructions to the CNC machine100. In some implementations, aspects of the design can be interpreted to extract information about the design, or parts of the design. For example, one or more real dimensions of the design can be measured, in part, by imaging the design with a camera in the CNC machine100. One specific example can be a camera at a 45 degree angle to the surface where a design including an “A” is printed. Because of the angle, the “A” will appear contracted in one dimension. Knowing the geometry of the camera relative to the location of the design can allow the measurement of the real dimensions of the design, as opposed to the apparent dimensions in the image. To implement this level of precision, the vision system may be calibrated together with the laser system. A full 3D model calibration of both vision and laser systems enables mapping between material surface points and pixels in all cameras, adjusting for even small variations that may impact precision. For example, the area of the bed that is captured by the lid camera may shift by a few pixels, depending upon the exact position of lid closure. Such shifts may be compensated for by using methods varying in degrees of sophistication including, for example, plain image shifts, elaborate 3D models of the lid joint referencing features inside or outside of the CNC machine bed150to model parameters, and/or the like.
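The foreshortening correction described above, recovering real dimensions from the apparent dimensions in an angled camera's image, can be sketched with a simple model. This is an idealized geometric illustration (a flat design, a distant camera, contraction only along the tilt direction); the function name is an assumption.

```python
import math

def real_length(apparent_length, camera_angle_deg):
    """Undo foreshortening along the tilt direction: a dimension viewed at
    camera_angle_deg from the surface normal appears contracted by
    cos(angle), so the real length is the apparent length divided by it.
    A simplified model; real systems use a full 3D camera calibration."""
    return apparent_length / math.cos(math.radians(camera_angle_deg))
```

At a 45 degree viewing angle the “A” appears contracted by about 0.707 in one dimension, so the measured length is divided by cos(45°) to recover its true size; at 0 degrees no correction is applied.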
The precision associated with surface mapping and the performance of trace functionalities either from or onto slanted, curved, or other more complex surfaces and objects, may depend on the calibrated environment of the CNC machine100, which is achieved by establishing and maintaining spatial fixes on internal components with high precision. For example, spatial fixes can be established and/or maintained by calibrating the head160of the CNC machine100and its associated vision components with an appropriate degree of confidence for the level of trace fidelity that is required. It should be appreciated that spatial fixes can be established and/or maintained through a single calibration upon initialization for base-level precision and/or repeated calibrations to fine-tune spatial fixes after every print and/or each segment of a print (e.g., for over-sized and/or particularly detailed work). FIG.19is a flowchart illustrating a process for tracing from complex surfaces and making fine-tuning adjustments, consistent with implementations of the current subject matter. It should be appreciated that the process shown inFIG.19may apply to both image capture as well as image projection onto such surfaces for the purposes of printing. At1902, depending on the complexity of the surface, one or more initial distance measurements can be used to define the surface involved in the image trace. For example, a dot of visible light projected at a single point on the desired trace surface and the properties of that dot as observed through the system's vision components can be used to establish the distance between the trace surface and the camera. Moreover, measurements associated with this projected dot, by virtue of being made within a calibrated system, can further allow confident mapping of this point within the 3D space of the system itself. At1904, surface complexity and/or irregularities can be detected.
In some implementations of the current subject matter, as additional point measurements are made across the surface, a 3D point cloud (or alternate methods, such as a 2.5D point cloud) may be constructed. The number of measurements necessary to map a surface with adequate precision often increases commensurately with the complexity of the surface; however, such settings may be programmed by the user manually or via selection of a pre-programmed setting designed and optimized to produce good outcomes under particular conditions. Some examples of techniques for mapping the surface may include a ‘simple pass mode’ for flat surfaces, a ‘slant mode’ for tilted surfaces where at least three distance measures are taken to estimate the tilt of the plane, a ‘deep-scan’ mode for highly convoluted surfaces, a ‘warped material mode’ for working with imperfect materials where the user may input a subjective assessment of the warp (e.g., from minor dents or bends, through to major curves or divots) and the system applies an appropriate level of vision-based scrutiny of the surface, and/or the like. The system can also autonomously determine the level of precision required in obtaining the point cloud measurements for a given situation. For instance, a lower precision scan can be performed initially to identify areas of complexity and/or differences, which can then be reanalyzed with higher precision scans. The sophistication of how these distance measurements are obtained for the purpose of defining the shape of the target trace surface or image can also vary. At1906, active and/or passive vision techniques for mapping the surface of the material can be deployed, for example, based on the complexity and/or irregularities exhibited by the surface. In some implementations of the current subject matter, active vision techniques may be used for applications involving surfaces with increased complexity.
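The ‘slant mode’ described above, estimating the tilt of a plane from at least three distance measurements, can be sketched as a plane fit through three sampled surface points. The function name and the z = a·x + b·y + c parameterization are illustrative assumptions.

```python
def plane_from_points(p1, p2, p3):
    """'Slant mode' sketch: fit the tilted plane z = a*x + b*y + c through
    three distance measurements (x, y, z) taken on the material surface."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    # Plane normal via the cross product of two in-plane vectors.
    ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
    vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    if abs(nz) < 1e-12:
        raise ValueError("points are collinear or the plane is vertical")
    a, b = -nx / nz, -ny / nz
    c = z1 - a * x1 - b * y1
    return a, b, c
```

Once a, b, and c are known, the height of the surface under any (x, y) in the trace area follows directly, which is what allows the motion plan to be slanted to match the plane.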
For example, with active vision techniques, additional distance measurements can be sampled simultaneously from a single light source via the use of a filter that splits a single beam of visible light into multiple beams that project a pattern with known geometry. The split light pattern may be used to illuminate the surface that is being traced. This is shown inFIG.20, which is a diagram illustrating an active vision technique for tracing 3D surfaces and/or images, consistent with implementations of the current subject matter. Adjunctive improvements to active vision techniques are also possible. One example of an improvement is to vary the size of the dots produced by the visible light source and projected onto the surface. It should be appreciated that smaller sized dots can enable and/or optimize the measurement of smaller and/or more intricate surface topographies. Another example of an improvement to active vision techniques includes assessing the shape of the dots, thereby obtaining additional information regarding the surface being traced. For example, if a symmetrical cone of visible light is known to shine a circular shaped dot from its mounted position, one can assess the shape of the dot observed on the surface, which may range from being circular when the surface is flat to being oval when the surface is slanted. The degree of tilt of the surface can be determined based on the distortion to the original circular shape of the dot. In some implementations of the current subject matter, the refractive index of the material that forms the surface being traced can also be used to improve the output of active vision-based measurements.
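The dot-shape assessment above, where a projected circle appears as an ellipse on a slanted surface, admits a simple first-order model: the ratio of the ellipse's minor to major axis approximates the cosine of the surface tilt. This is an idealized sketch (normal-incidence projection, no perspective effects); the function name is an assumption.

```python
import math

def surface_tilt_deg(major_axis, minor_axis):
    """First-order model: a circular dot projected on a surface tilted by
    angle t appears as an ellipse with axis ratio minor/major = cos(t),
    so the tilt can be recovered as arccos(minor/major)."""
    if not 0 < minor_axis <= major_axis:
        raise ValueError("expected 0 < minor_axis <= major_axis")
    return math.degrees(math.acos(minor_axis / major_axis))
```

A dot observed with equal axes indicates a flat surface (0 degrees of tilt), while an ellipse twice as long as it is wide indicates roughly 60 degrees of tilt under this model.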
Alternatively and/or additionally, ultra-precision active illumination may be achieved by fitting the cameras to have more degrees of freedom of movement (e.g., the ability to be tilted and rotated at various angles with very controlled movements, etc.), which may further enhance the precision of the measurements obtained using active vision techniques. Instead of and/or in addition to the aforementioned active vision techniques, the surface of the material can also be mapped using passive vision techniques. Stereovision is one example of a passive vision technique that can be used to define the 3D shape of the target trace surface and may be especially suitable for textured surfaces. Using a stereovision approach, two pictures from two adjacent locations (e.g. 1-3 millimeters apart) may be taken by at least one optical sensor (e.g., the lid camera110and/or the head camera120within the CNC machine100). To further illustrate,FIG.21is a diagram illustrating a passive vision technique for tracing 3D surfaces and/or images, consistent with implementations of the current subject matter. As shown inFIG.21, the head camera120, which can be coupled with the head160of the CNC machine100, can be moved from a first position P1to a second position P2. The points detected within these two images may then be compared, leveraging information from two known optical vantage points. The calibrated locations of the internal optics are known, as well as the relative spatial difference between the optics that captured each image. As such, by triangulation in space, the location of points of interest can be inferred, enabling population of a 3D point cloud corresponding to the 3D surface and/or image being traced. It should be appreciated that other improvements to the 3D tracing capabilities of the CNC machine100may be conferred by providing additional knowledge that aids recognition.
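The stereovision triangulation above, inferring depth from a point's shift between two images taken a few millimeters apart, reduces in the rectified case to the classic relation depth = focal length × baseline / disparity. This sketch assumes rectified views and pixel-unit focal length; the function name is illustrative.

```python
def stereo_depth(disparity_px, baseline_mm, focal_length_px):
    """Classic rectified-stereo relation: a point that shifts by
    disparity_px pixels between two views taken baseline_mm apart lies at
    depth = focal_length_px * baseline_mm / disparity_px from the camera."""
    if disparity_px <= 0:
        raise ValueError("point must shift between the two views")
    return focal_length_px * baseline_mm / disparity_px
```

For example, a 10 pixel disparity between two views taken 2 mm apart with a 500 pixel focal length places the point 100 mm from the camera; nearer points shift more, yielding larger disparities and smaller depths.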
For instance, a priori inputs from the user and/or detection of well-characterized objects by the CNC machine100may expedite the 3D scanning features of the machine. For example, if the target trace surface is a laptop computer, and the specific make and/or model of the laptop computer are input by the user, a 3D scan of this surface may be possible from as little as a single image. At1908, the sum of calculations obtained from applying the active vision techniques and/or passive vision techniques can be used to create a 3D model of the target trace surface. At1910, one or more de-warping algorithms may be applied. At1912, the user can be presented with the trace for editing, adjustments, and/or positioning on the material. It should be appreciated that the de-warping algorithm applied at operation1910can assist the presentation of the trace to the user. At1914, the motion plan to be executed by the CNC machine100can be adjusted to the slant of the target print surface. The de-warping algorithms applied at operation1910can be based on certain assumptions and/or known constraints relevant to 3D trace. For instance, an image traced from a curved surface may be de-warped under the assumptions that while the traced image may include bends, it is constrained in other respects (e.g., it cannot stretch). As such, a 2D image with proportions matching the original may be reconstructed. Alternatively and/or additionally, should the target surface for processing be curved or complex, the motion plan of the print executed at operation1914can be slanted to match the slant of the surface or surfaces involved. In particularly complex cases (e.g., printing across a spherical surface), the print may be conducted in segments where the user may be prompted by the CNC machine100to rotate and/or tilt the spherical material to a given degree, and then utilize vision system components to realign and continue printing. 
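The bend-but-not-stretch constraint above can be illustrated for the common cylindrical case: since the traced image bends around the cylinder without stretching, each imaged point maps back to its arc-length position, reconstructing a 2D image with proportions matching the original. The cylinder model and function name are assumptions for this sketch.

```python
import math

def unroll_cylinder(points, radius):
    """De-warp under the bend-but-not-stretch assumption: a point imaged at
    horizontal offset x on a cylinder of the given radius sits at surface
    angle asin(x / radius), so its true (unrolled) coordinate is the arc
    length radius * angle. Heights (y) are preserved."""
    flat = []
    for x, y in points:
        theta = math.asin(max(-1.0, min(1.0, x / radius)))
        flat.append((radius * theta, y))
    return flat
```

A point at the visible edge of the cylinder (x equal to the radius) unrolls to a quarter of the circumference, recovering length that foreshortening had compressed near the edges.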
In such complex cases, segmentation of the 3D point cloud may be necessary to perform the trace of the 3D surface and/or image, and to do so with sufficient resolution. The precision of the trace can be further enhanced through calibration steps at the system-level. For example, there may be imperfections that develop over time, such as wear and tear on manufactured parts, and/or use-based inconsistencies. For instance, the lid of the CNC machine100may not close the same way each time, which may offset the internal optics and consequently, compromise the accuracy with which traced images appear on processed materials as directed by users (i.e., ‘drag-in-place’- and ‘trace-in-place’-type functionalities). One technique to correct for these kinds of inconsistencies, for instance, is to recalibrate the camera involved in the trace (e.g., the lid camera110) at more frequent intervals, such as after every print and/or during a cool-down period. This may be done by automatically instructing a secondary optical sensor (e.g., the head camera120) to take a picture at a known location, such as exactly where it last printed and/or traced. Any measured offset detected in the image provided by the secondary camera (i.e., the head camera120) relative to the same area in the image provided by the primary camera (i.e., the lid camera110) can be used to adjust the primary camera images displayed to the user during performance of tracing functionalities. Other approaches may include constructing a 3D model of the lid joint and monitoring this over time for warping or deformity. Tracing Over-Sized Images and/or Tracing onto Over-Sized Materials Some implementations of the current subject matter can involve tracing from images, objects, and/or materials having at least one dimension that exceeds the maximum physical dimensions of what the CNC machine100is capable of scanning and/or capturing in a single pass.
For example, the dimensions of a line drawing to be traced may be on a long banner whose length is greater than that of the material bed150of the CNC machine100. As such, the CNC machine100may employ various techniques to enable the capture of a contiguous trace from an over-sized source image. In some implementations of the current subject matter, the nature of the material may permit physical segmentation. For example, a banner composed of three large sheets of paper joined by semi-permanent means may be detached from one another and scanned individually by the CNC machine100. The resulting individual scans can subsequently be digitally amalgamated back into the original oversized image through, for instance, manual curation by the user, automatically by the machine, and/or the like. In the case of manual image curation, users may be offered editing features and/or controls (e.g., tools for moving, rotating, enlarging, shrinking, nudging, copying, cropping, flipping, and/or the like) that permit the user to arrange the individual scans in any manner into a single, seamless trace. Here, the amalgamated trace may be saved or otherwise stored locally and/or remotely as an intermediate or final product of the traced function. Alternatively and/or additionally, automated amalgamation of multiple scans can be performed via, for example, image-matching algorithms and/or the like. The image-matching algorithms may examine each scan in whole and/or in part (e.g., the edges of each individual scan) to determine the most parsimonious way to join the individual scans into a continuous whole. Text recognition algorithms may be used to identify where lettering and/or words may become intersected as an unintended byproduct of segmenting the image. Object recognition algorithms may be useful in matching trajectories of line art in order to identify areas of continuity that make for logical, seamless joins between individual scans. 
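The automated amalgamation above, examining the edges of individual scans to find the most parsimonious join, can be sketched in one dimension: slide the head of one scan over the tail of the previous one and pick the overlap with the smallest average mismatch. This is a toy illustration over 1D intensity profiles; the function names and the sum-of-squared-differences criterion are assumptions.

```python
def best_join_overlap(seg_a, seg_b, max_overlap=None):
    """Return the overlap length at which the tail of seg_a best matches
    the head of seg_b, scored by mean squared difference (a simple stand-in
    for the image-matching step of automated amalgamation)."""
    max_overlap = max_overlap or min(len(seg_a), len(seg_b))
    best = (float("inf"), 1)
    for k in range(1, max_overlap + 1):
        err = sum((a - b) ** 2 for a, b in zip(seg_a[-k:], seg_b[:k]))
        best = min(best, (err / k, k))
    return best[1]

def amalgamate(seg_a, seg_b):
    """Join two adjacent scans into a continuous whole at the best overlap."""
    k = best_join_overlap(seg_a, seg_b)
    return seg_a + seg_b[k:]
```

Two scans sharing a two-sample overlap, for instance [1, 2, 3, 4, 5] and [4, 5, 6, 7], join seamlessly into [1, 2, 3, 4, 5, 6, 7]; real implementations would apply the same idea to 2D pixel strips along each scan's edges.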
In some implementations of the current subject matter, reference features of the material itself (e.g. wood grain, previous cuts that are either part of the assembly-process or made using the laser, and/or the like) may be used to aid automated image amalgamation. It should be appreciated that users may benefit from having additional fine-tuning capabilities even after automated methods have been used to join image segments. For example, manual fine-tuning of an automatically amalgamated scan can be suitable for highly detailed image segments and/or insertion of additional design or stylistic features (e.g., decorative borders and/or spacing in between segments). In some implementations of the current subject matter, physical separation of the segments to be scanned may not be desirable or possible. For instance, a long plank of wood with a target trace image on one side or a banner that is printed onto a single piece of paper cannot be physically separated into parts. Thus, in some cases, acquisition of the scans required to create a single, seamless trace may be achieved via the use of a pass-through slot as shown inFIG.22. The pass-through slot may be integrated with passive and/or motorized rollers, treads, etc. If motorized, the rollers can be coordinated with the optical system components performing the trace functionality, thereby allowing the trace to be conducted in real time across an oversize piece of material. For example, the oversized material may be advanced at a specified time and/or rate that facilitates the required capture of information across a single, continuous image. If a motorized feed is present, then the optical system can omit some or all y-axis movement and rely solely on the feed mechanism to continually position the image for the purpose of scanning the oversized image.
It should be appreciated that the material feed can be omnidirectional, for example, to allow movement of the oversized material in both a horizontal direction (e.g., x-axis) and a vertical direction (e.g., y-axis). The over-sized material may require movement in more than one direction for purposes of maintaining alignment. In some cases, a user can manually feed the material to be traced through the pass-through slot with and/or without the aid of passive rollers. The machine vision system may guide the user with respect to the repositioning of the material in order to capture each segment with fidelity. For example, prompts and/or other indicators may be provided at the level of the user interface, and/or by the hardware itself. In some implementations of the current subject matter, visual camera views and/or renderings of which part of the material has already been imaged may be shown to the user, for example, via a graphic user interface. In another example, visual lighting cues that indicate where the leading edge and/or trailing edge of the image segments are relative to the material being scanned may be used as a guide for user-based movement of materials. Alternatively and/or additionally, audible prompts and/or other such human factors-based indicators may be used to enable the user to determine when they are moving the material closer or further away from the next desired frame for repositioning during an oversized trace. For example, the audible prompt can include a continuous beeping noise that increases its periodicity when the material is moved more closely to the target area and decreases its beeping periodicity when material is moved further away from the target area. One or more additional sounds can also be played to indicate successful completion of all material movements into a desired location.
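The audible guidance above, beeping faster as the material approaches the target area, can be sketched as a mapping from measured distance to beep interval. The function name and the distance bounds are hypothetical parameters chosen for illustration.

```python
def beep_interval_s(distance_mm, near_mm=5.0, far_mm=500.0,
                    fastest_s=0.1, slowest_s=1.5):
    """Map the material's distance from the target area to a beep period:
    closer material beeps with higher periodicity (shorter interval),
    farther material with lower periodicity. Bounds are illustrative."""
    d = max(near_mm, min(far_mm, distance_mm))       # clamp to sensible range
    frac = (d - near_mm) / (far_mm - near_mm)        # 0 at target, 1 far away
    return fastest_s + frac * (slowest_s - fastest_s)
```

A UI loop would remeasure the distance each cycle and sleep for the returned interval between beeps, then play a distinct completion sound once the material reaches the target area.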
To further illustrate,FIG.22is a diagram illustrating a user interface for guiding the movement and/or repositioning of a material through the pass-through slot of the CNC machine100, consistent with some implementations of the current subject matter. For instance, as shown inFIG.22, user interface (A) may display an area that has already been traced differently than an area that has yet to be traced. This visual distinction can aid the user in moving and/or repositioning the oversized material, for example, to a next section of the material that has not been traced by the CNC machine100. Meanwhile, user interface (B) may include visual cues (e.g., white crosshairs that create a target area on the material bed150of the CNC machine100) that guide the user in moving and/or repositioning the oversized material through the CNC machine100. Once all components of an oversized image have been captured with a sufficient level of detail, the image segments may be stitched together using the techniques described earlier. FIG.23is a diagram illustrating how an oversized source image on a paper banner may be obtained by manually feeding the oversized material through a pass-through slot on the CNC machine100, consistent with some implementations of the current subject matter. The user edits the captured line art by reproducing a shrunken version of some aspect of the design with fidelity across the banner. Once all design elements are finalized, processing of the oversized material may occur in accordance with the user's intentions. In this example, the user-desired processing involves using the laser of the CNC machine100to cut directly on top of the original line art and faithfully reproducing shrunken versions of the line art at desired (previously blank) spaces on the material. The processing of the oversized project may again employ a variety of manual and/or automated material movement strategies to ensure continuity of processing.
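The stitching step can be illustrated with a minimal overlap search: try candidate overlap depths between the trailing rows of one segment and the leading rows of the next, keep the depth with the smallest pixel difference, and concatenate. Real stitching would use feature matching and sub-pixel alignment; this sketch assumes grayscale segments stored as row-major lists and is an assumption of mine, not the actual algorithm.

```python
def best_overlap(seg_a, seg_b, max_overlap):
    """Find how many trailing rows of seg_a best match the leading rows
    of seg_b (mean absolute pixel difference), returning that count."""
    best_n, best_err = 1, float("inf")
    for n in range(1, max_overlap + 1):
        tail, head = seg_a[-n:], seg_b[:n]
        err = sum(abs(p - q) for ra, rb in zip(tail, head)
                  for p, q in zip(ra, rb)) / n
        if err < best_err:
            best_n, best_err = n, err
    return best_n

def stitch(seg_a, seg_b, max_overlap=4):
    """Join two scanned segments, dropping seg_b's duplicated rows."""
    n = best_overlap(seg_a, seg_b, max_overlap)
    return seg_a + seg_b[n:]
```

With two segments whose last and first two rows coincide, the search detects the two-row overlap and the stitched result contains each row exactly once.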
The preceding concept can also extend to imaging features, text, materials, 3D objects, etc. associated with surfaces that are curved, non-uniform, warped, or that present other such challenges to image capture. In this case, knowing the geometry of the camera relative to the location of the design, in addition to taking repeated height measurements to model or scan the surface, or repeated still images from varying locations, can provide the necessary data to de-warp the image, thus preserving the real dimensions of the feature of interest. In one such application, the desired image to capture for processing may be a logotype on the curved surface of a cylinder, or one that wraps over the edge of a folded surface (as shown in the example inFIG.9); image capture and subsequent de-warping of the 3D mesh data could enable accurate reproduction of the logotype on a desired surface with a different geometry. Alternate methods for achieving image capture are also possible, such as use of cameras with adjustable focus to take multiple images at different focal depths and then combining the images to capture the image in its entirety. In other implementations, instead of capturing an optical image, a sensor (for example, a laser diode and camera, an ultrasonic distance sensor, stereoscopic images or the like) can be used to create a depth map. The depth map can be manipulated with software to rotate, scale, adjust, bend, or move the depth map into desired positions or applications. The depth map can be generated by scanning of objects, including electronic or mobile devices for the purpose of creating customizable skins or covers, or other objects based on the surface features of the scanned object. The height of an upper surface of the material relative to a bed or other supporting surface can be a proxy for the thickness of the material.
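As a toy example of the de-warping geometry, consider a logotype on a cylinder viewed straight down the cylinder axis: a surface point's projected x coordinate is the chord r·sin(θ), while its true position along the unrolled surface is the arc length r·θ. Recovering the arc length is a one-line inversion. The orthographic-view setup is my own simplifying assumption, not a method stated in the text.

```python
import math

def dewarp_x(x_projected, radius):
    """Recover the true arc-length coordinate of a point on a cylinder
    of the given radius from its projected (camera-plane) x coordinate,
    assuming an orthographic view down the cylinder axis."""
    if abs(x_projected) > radius:
        raise ValueError("point lies beyond the cylinder silhouette")
    return radius * math.asin(x_projected / radius)
```

The recovered arc length is always at least the projected distance (a chord is never longer than its arc), which is exactly why a logotype photographed on a can appears horizontally compressed near its edges.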
Alternatively, or in addition, a thickness of a material can be estimated using a view from the one or more cameras that images a side of the material. Various image processing operations can be used to estimate or calculate a material thickness from such measurements. Image Capture Feedback and On-Board Correction FIG.14is a diagram illustrating modifying a traced image and overlaying the modified image on a capture in the working area in the CNC machine100, consistent with some implementations of the current subject matter. User feedback and on-board correction can be included in preparing imagery for the purpose of cut-file generation and subsequent material processing. The use of interactive tools that permit users to see how material is being received by the capture software in real-time can allow users to determine whether the desired element has been imaged to their level of satisfaction. As shown inFIG.14a), an image traced or captured from another image file can be displayed, with software tools that can allow a user to manipulate or edit the image. Software tools can be provided to retouch the captured imagery directly, rather than requiring modifications and re-tracing of the original image file. As shown by the examples inFIG.14b), such examples may include paintbrush tools that can add or embellish upon existing parts of the captured design, eraser tools for manual removal of image capture artifacts or components that are no longer desirable, and a magic wand tool for selecting and removing contiguous or noncontiguous regions of a given color. In a further example, the use of an intent brush feature may produce better machining outputs downstream by identifying and correcting for features in the original image that may be imperfect (for example, this tool may perform a path simplification on a trace of a hand-drawn straight line, in order to achieve an exact straight line; i.e.
the software renders a better representation of user intent than what was provided by the original trace image). The modified image can be overlaid onto an image of the working area, another different image, or the like, to provide a preview of what the CNC machine100will do. For example, inFIG.14c), the thicker lines at the perimeter of the traced sun can represent instructions to cut, whereas the thinner lines interior to the traced sun can represent instructions to engrave. Image capture feedback can detect and flag potential problems associated with captured imagery, with respect to its suitability for processing by the CNC machine100. This approach may leverage single components or combinations of data components relating to camera imagery and sensor data, particularly height sensing data. For example, users can receive notifications upon detection of curved or bent surfaces which cannot be adequately imaged with current camera settings. In some implementations, the previewing software can generate prompts or notifications suggesting imaging improvements or can make those improvements automatically. In other implementations, lighting artifacts and anomalies (e.g., shadow lines or spotlighting effects) can be detected via analysis of pixel brightness and density. Such issues can be flagged by the analysis software even before the user has identified them, thus enhancing design efficiency and the user experience. Preparing Cut-Files from Machine-Derived Imagery In some implementations, details of a design that has been drawn directly onto the desired material can be captured. The CNC machine100can then directly process over the top of said images to cause the desired changes to the material. In some implementations, this can be performed by imaging the material, removing distortions from the image (e.g.
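The brightness-based anomaly detection mentioned above can be sketched as simple pixel statistics: if a large fraction of the image is very dark or very bright, flag a likely shadow or spotlight. The thresholds below are illustrative assumptions, not values from the actual analysis software.

```python
def lighting_anomalies(gray, low=40, high=220, fraction=0.2):
    """Flag likely shadow or spotlight regions in a grayscale image
    (rows of 0-255 values): return warnings when more than `fraction`
    of the pixels is very dark or very bright.

    Threshold values are assumptions chosen for illustration.
    """
    pixels = [p for row in gray for p in row]
    n = len(pixels)
    warnings = []
    if sum(p < low for p in pixels) / n > fraction:
        warnings.append("possible shadow region")
    if sum(p > high for p in pixels) / n > fraction:
        warnings.append("possible spotlight region")
    return warnings
```

A well-lit capture yields no warnings, while a mostly dark frame is flagged before the user ever inspects it, matching the proactive-flagging behavior described above.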
distortions introduced by the scanning camera), transforming the resulting image into a set of machine instructions necessary to cause the desired changes, not just on the material, but also over the source image, and executing the instructions. FIG.15is a diagram illustrating the creation of a machine file from a traced image and user-selected input, consistent with some implementations of the current subject matter. A user can designate portions of an image to be machined according to both analysis of the image (e.g. outlines, colors, etc.) and user-supplied input selecting a mode of operation of the CNC machine100(e.g. cutting, engraving, etc.).FIG.15a) shows a top view of a piece of paper (white rectangle), with three circles imaged on the top of the desired piece of material (shown by the diagonal striped rectangle) positioned within the material bed150of the CNC machine100(grey rectangle).FIG.15b) shows that based on user input, the user can define regions of commonality that can be associated with specific instructions (dotted lines). Here, the lower circle can be selected as a path for engraving. The middle circle can be selected as a path for cutting. For the upper circle, both interior and exterior borders can be identified as cut paths. As shown inFIG.15c), this can result in the manipulation of the material140as it lies directly under the image. In this example, the machine file generated inFIG.15b) would produce an engraved lower circle, a cutout middle circle and a cutout donut shape. The cut-out material is shown as removed to the right ofFIG.15c). In some implementations, the CNC machine100and various computational methods can create a cut-file from a traced image. One approach of achieving this can include analyzing an image to find distinct edges of pixels of one color (e.g. black) relative to pixels of another color (e.g. white) and the subsequent generation of a vector path at their intersection.
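Finding the black/white intersection where a vector path would be placed can be sketched as a scan for black pixels that touch at least one white neighbor. This is a minimal boundary extraction on a binary pixel grid of my own devising, not the actual path-generation code; converting the resulting pixels into an ordered vector path would be a further step.

```python
def boundary_pixels(img):
    """Return (x, y) coordinates of black pixels (value 0) that touch a
    white pixel (value 1): the edge where a vector path would be placed."""
    h, w = len(img), len(img[0])
    edge = []
    for y in range(h):
        for x in range(w):
            if img[y][x] != 0:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                # Pixels outside the image count as white background.
                if not (0 <= ny < h and 0 <= nx < w) or img[ny][nx] == 1:
                    edge.append((x, y))
                    break
    return edge
```

For a 2x2 black square surrounded by white, every black pixel lies on the boundary, so all four are returned.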
A high-pass filter applied to the image can remove gradients of light resulting, for example, from environmental effects, so that user-generated markings are left intact. Some implementations can involve the use of a contour trace algorithm to seek regions of continuity and determine the placement of a path around the perimeter of that region, for example to send instructions to the CNC machine100that a region that shares a similar appearance should be cut out from the rest. Algorithmic interpretation of an image file can be used to automatically predict at least one user intention for a given cut-file. For example, regions with detected common qualities may permit users to globally apply specific formatting or behaviors. In one example, a circle with a thick outline may be treated as a separate entity from a circle with a thin outline, and the user may select cutting properties accordingly. Similarly, color can also indicate processing intentions. For example, features drawn in red ink are to be cut while blue-inked features are meant for engraving. A variety of processing intentions and markings may be possible including cutting, scoring, and engraving. FIG.16is a diagram illustrating isolating of elements in an image and designating CNC machine instructions to portions of the isolated elements, consistent with some implementations of the current subject matter. In creating machine file instructions, users can crop only the design regions of interest, rather than tracing all elements captured by a given camera. This can include selecting regions to retain (e.g. components of finished designs) versus components that may be ignored (e.g. scrap material, parts of the CNC machine100like the rails or the material bed150, etc.). Such feature assignments can occur automatically or be manually inputted by the user.
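The color-to-intention mapping in the red-cut/blue-engrave example can be sketched as a simple classifier over a stroke's dominant RGB color. The threshold values are illustrative assumptions, not part of any described implementation.

```python
def intent_for_color(r, g, b):
    """Classify a stroke's dominant ink color into a machine operation,
    following the red-ink-cut / blue-ink-engrave convention above.

    The 150/100 channel thresholds are assumptions for this sketch.
    """
    if r > 150 and g < 100 and b < 100:
        return "cut"
    if b > 150 and r < 100 and g < 100:
        return "engrave"
    return "unknown"
```

Strokes whose color falls outside both bands return "unknown", which a real system might surface to the user for manual assignment rather than guessing.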
Materials designated into certain categories can be further tracked and stored as part of machine or use analytics for potential future use (for example, scrap material may be chronicled for inventory management). In the example shown byFIG.16a), an image shows the material bed with a piece of cardboard and a QR code1610placed next to the piece of cardboard. Through a user interface, the user can crop out the portions of the image that are not of interest. In this example, shown atFIG.16b), only the QR code has been selected to remain in the image. Then, atFIG.16c), the user (or the software) can select certain features for machining on a piece of material. Elements sharing a common design or particular features can be extracted through image analysis or user selection. In this example, the three squares1620in the QR code have been selected. In some implementations, the perimeter of a material can be identified and located in the CNC machine100when that perimeter is not yet known. This can be used for scanning (for example to identify a region to scan) or other purposes such as positioning a design on material for processing. This can be accomplished through a variety of means, such as recognizing the bed of the machine and identifying regions that look different; identifying material by its characteristic appearance, for example white pixels representing a white sheet of paper; or identifying specially placed features, such as a barcode tiled across the surface or fiducial markers placed at a fixed distance relative to the material's perimeter, from which the perimeter can be determined. In other implementations, identifying that a given image is of a particular general category can allow application of a set of generalizable rules or default settings that govern how these types of images are to be processed. For example, a photograph can be considered a distinct image type.
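Locating a material's perimeter by identifying regions that look different from the bed can be sketched, in its simplest form, as a bounding box over pixels that differ from the bed's characteristic value. A real implementation would be far more robust (color models, fiducials, connected components); this is only a minimal illustration under my own assumptions.

```python
def material_bounds(img, bed_value):
    """Return (x_min, y_min, x_max, y_max) of pixels that differ from
    the bed's characteristic value, approximating the material
    perimeter; return None if no material is detected."""
    xs = [x for row in img for x, p in enumerate(row) if p != bed_value]
    ys = [y for y, row in enumerate(img)
          for p in row if p != bed_value]
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))
```

A white sheet of paper on a dark bed, for example, reduces to a rectangle of non-bed pixels whose extremes give the region to scan or to position a design within.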
Then, machine file generation defaults can be set in accordance with the typical use requirements for processing photos (e.g. use of dithering, rather than cut-line generation). This approach can also be used with multi-item identification, for example, the incorporation of mixed media elements into the article design process. This can be useful when the user wishes to treat elements from a photograph placed on one side of the bed differently from a cylindrical object placed on the other side of the bed in creation of an article that combines elements of the two. End-to-End Creation with a Single Machine Some implementations can perform end-to-end image capture and recreation using a single CNC machine such as, for example, the CNC machine100. Here, the CNC machine100, and any cameras (e.g., the lid camera110and/or the head camera120) or other sensors connected thereto, can capture some or all of the images that contribute to a design. The same CNC machine100can compute analyses, corrections or other manipulations to that imagery and then go through the process of creating a machine file. The machine file can contain all elements necessary for creation of a desired article. Consequently, the same CNC machine100can then execute its own generated machine file by processing material into the desired article or component articles. Further, the CNC machine100can be used to execute that machine file in the same location, on the same material, as its source image. For example, a line drawn on a board may be used to cut that board in half. Some applications may exploit the end-to-end capabilities of the machine, either in whole or in part. For example, an image captured by a user can be subjected to a series of visual corrections that lead to generation of a machine file. The visual corrections can be processed immediately or saved and stored for further manipulation or processing at a later time point.
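The per-image-type defaults (dithering for photographs rather than cut-line generation) can be sketched as a lookup table of machine-file settings. The keys and setting names are assumptions for illustration, not the software's actual configuration schema.

```python
# Illustrative per-image-type machine-file defaults; all keys and
# values here are assumptions, not the real configuration format.
DEFAULTS = {
    "photograph": {"raster": "dither", "generate_cut_lines": False},
    "line_art": {"raster": None, "generate_cut_lines": True},
}

def defaults_for(image_type):
    """Return machine-file defaults for a recognized image category,
    falling back to conservative settings for unknown types."""
    return DEFAULTS.get(
        image_type, {"raster": None, "generate_cut_lines": False})
```

Classifying a capture as a "photograph" then automatically selects dithered rastering, while "line_art" opts into cut-line generation, mirroring the category-driven behavior described above.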
The concept of end-to-end capture and recreation can also include the use of multiple CNC machines of a similar type to perform imaging, correction, or processing steps. CNC Machine Instructions Based on Objects in a CNC Machine Some implementations can include, for example, capturing an image, where the image includes a design. The capturing can be performed by a camera positioned within a housing that encloses the working area of the CNC machine100. As used herein, the term “design” can include any visible feature of the object or image in question. For example, a design can be an outline, a color, a shading, a gradient, a texture, a pattern, a physical property such as a length, width, depth of a recess, or the like. As one specific, non-limiting example, a user can place a round wooden disk with a letter A in the CNC machine100. An image of the disk can be generated and the user can select, or trace, the letter “A” in the image. Here, “A” is one design that includes three lines. Later, the circular outline of the disk can be selected or traced. Now here, the circular outline is the design. For each design, the user can specify a particular operation of the CNC machine100, for example, engraving the “A” onto a square piece of material and cutting out a circular portion based on the shape of the disk. While in some implementations, the image or design can be generated by cameras (e.g., the lid camera110and/or the head camera120) in the CNC machine100, or by user input, images or designs used by the CNC machine100can also be received from another computing system. For example, a jpeg file of the disk with the “A” can be received and interpreted as the image having the designs. Images and designs can be in any electronic format, for example, jpegs, pdfs, bitmaps, gifs, or the like. As described above, features of the design can be identified from the image. A feature can be a subset of the design, or can be the entire design. 
For example, the design described as the letter “A” can have three identified features, two angled lines and one horizontal line. The identification can be performed by, for example, interpreting user-input that specifies a feature of the image, receiving the output of an image analysis or photo-manipulation program that extracts features or designs from the image, or the like. In other implementations, the features can include a first pattern on a surface of the material. The first pattern can include a grain, a drawing, a carving, a color, or the like, that is visible on the surface of the material. The features can also be a second pattern on a surface of an item other than the material. As used herein, the term “design” generally refers to an overall design that can include one or more features. The features can each have one or more patterns that describe properties of the features. Accordingly, any of the design, features, or patterns can include any visible or detectable property that can be converted to an instruction of the CNC machine. For example, an item, such as a drawing, can be placed in the CNC machine. A second pattern can be an outline of a person on the drawing, as distinct from a first pattern, such as a wood grain on the material also in the CNC machine. The item can be any sort of object, for example, a drawing on paper, an article of clothing, a keychain, a model piece, or the like. The features can also include any physical features of the material or item, similar to that of a pattern. Software can be used to correct, improve, or modify, any of the images that can be presented to a user, or referenced by other programs operating with the CNC machine, to improve the image quality. In some implementations, the preview can be generated by at least executing one or more operations applied to the image to correct one or more dimensions of the design as generated in the preview based on the image. 
The operations can include, for example, de-distortion or de-warping, such as described elsewhere within. CNC Machine Instructions Based on Images Overlaid on Objects in a CNC Machine Other implementations can also include overlaying, on a graphical representation of the material140in the CNC machine100, an external image. However, here the image can be, for example, supplied by a user, generated by a drawing program, traced through a user interface, or the like, as opposed to generated from an object inside the CNC machine100. As described above, to serve as a backdrop, the generating of the preview can include capturing a view of the material while the material is positioned within the work area, the capturing comprising use of the one or more cameras. As used herein, the term “view” (generally referring to the background material) is provided only to distinguish from the “image” (generally referring to a representation of a change to the background material). Both the view and the image can be graphical images or image files configured to be stored by a computer or displayed at a user-interface. The capturing can include any of the image capturing techniques described herein. Also, similar to that described above, the capturing can include correcting one or more dimensions of the image as generated in the preview. The correcting can include, for example, de-distortion, de-warping, correction for lighting effects, adjustment of lighting within the casing, and thresholding. In some implementations, a preview can be generated as a precursor to, or an integral part of, creating the machine file. For example, generating the preview can include displaying the view of the material to a user via a user interface. Creating the machine file can also include receiving user input via a user interface. The user input can include generating a design and positioning a design in a desired location relative to the material as shown in the view.
There can be other sources of the design, for example, importing a graphical file that includes the design, automatic extraction of a design from another image, or the like. Furthermore, the creating of the machine file can also include receiving, at the user interface, a selection of a feature in the design, and an indication of whether the selected feature is to be cut or engraved. For example, if a user wanted to engrave an “A” onto a wood block in the CNC machine100, the preview can first display a view of the wood block as it appears in the CNC machine100. Then, the user, through the user-interface, can select or designate the image of the design including the “A” and position the design as desired on the view. The features of the design, for example, the lines that make up the “A” can be selected by the user to be cut or engraved, what machine settings to apply, or the like. At any time, the user or other program in control of the CNC machine100can create a machine file to cause the CNC machine100to create the design on the material as shown in the preview. According to various implementations of the current subject matter, a method for fabrication with image tracing can include generating, by a camera having a view of an interior portion of a computer-numerically-controlled machine, an image comprising a pattern. The image can be transformed into a set of machine instructions for controlling the computer-numerically-controlled machine to effect a change in a material. The change can correspond to at least a portion of the pattern. At least one machine instruction from the set of machine instructions can be executed to control the computer-numerically-controlled machine to effect at least a portion of the change. The execution can include operating, in accordance with the at least one machine instruction, a tool coupled with the computer-numerically-controlled machine. The tool can be configured to effect the change on the material.
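The cut-or-engrave selections received at the user interface can be sketched as a small machine-file builder that orders operations so engraves run before cuts (a common practice so parts are not displaced before engraving; the ordering policy and the instruction format here are my own assumptions).

```python
def build_machine_file(selections):
    """Turn (feature, operation) selections from a UI into an ordered
    instruction list. Engraves and scores run before cuts so that
    cut-free parts stay in place while being engraved.

    The dict-based instruction format is an illustrative assumption.
    """
    order = {"engrave": 0, "score": 1, "cut": 2}
    ops = sorted(selections, key=lambda s: order[s[1]])
    return [{"feature": f, "op": op} for f, op in ops]
```

For the wood-block example above, selecting the “A” for engraving and the outline for cutting yields a plan that engraves the letter first and cuts the perimeter last.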
The transformation can include: analyzing the image to identify one or more features of the material; analyzing the image to identify the pattern on the surface of the material, the pattern being identified independently from the one or more features; and generating the set of machine instructions to cause the computer-numerically-controlled machine to effect the change on the material based at least on the pattern on the surface of the material. The pattern can be applied directly onto the material. The tool can trace the pattern on the material based at least on the image comprising the pattern. An operation can be executed with respect to the image to correct one or more features of the pattern comprising the image. The operation can include one or more of de-distortion, de-warping, correction for lighting effects, adjustment of lighting within a housing of the computer-numerically-controlled machine, and thresholding. The camera can be positioned to capture all of a material bed comprising the computer-numerically-controlled machine. The material can be at least partially disposed on the material bed. One or more additional images can be generated based at least on an analysis of the image. The analysis can include determining that the image fails to capture one or more features of the material. The image can be interpreted to predict a user intention. The transformation of the image into the set of machine instructions can be based at least on the user intention. The interpretation can include: identifying a feature in at least a portion of the image; and executing the at least one machine instruction from the set of machine instructions to effect the change having one or more properties corresponding to the feature identified in at least the portion of the image. The user intention can be predicted automatically by the computer-numerically-controlled machine without requiring user-generated input.
The user intention can be applied globally to specify the one or more properties of the change based at least on the detected feature being common throughout the image. According to some implementations of the current subject matter, a method for fabrication with image tracing can include generating, by a camera having a view of an interior portion of a computer-numerically-controlled machine, an image comprising a pattern. The image can be transformed into a first set of machine instructions for controlling the computer-numerically-controlled machine to effect a first change in a material. The change can correspond to at least a portion of the pattern. The first set of machine instructions can be combined with a second set of machine instructions to generate a third set of machine instructions. The second set of machine instructions can control the computer-numerically-controlled machine to effect a second change in the material. At least one machine instruction from the third set of machine instructions can be executed to control the computer-numerically-controlled machine to effect the first change and/or the second change. The execution can include operating, in accordance with the at least one machine instruction, a tool coupled with the computer-numerically-controlled machine. The tool can be configured to effect the first change and/or the second change directly onto the material. According to some implementations of the current subject matter, a method for fabrication with image tracing can include generating, by a camera having a view of an interior portion of a computer-numerically-controlled machine, a first image. A second image including a pattern can be transformed into a set of machine instructions for controlling the computer-numerically-controlled machine to effect a change in a material. The change can correspond to at least a portion of the pattern. The change can be at a position determined based at least on the first image.
At least one machine instruction from the set of machine instructions can be executed to control the computer-numerically-controlled machine to effect the change. The execution can include operating, in accordance with the at least one machine instruction, a tool coupled with the computer-numerically-controlled machine. The tool can be configured to effect the change directly onto the material. One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language.
As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores. To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including, but not limited to, acoustic, speech, or tactile input. 
Other possible input devices include, but are not limited to, touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive trackpads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like. In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” Use of the term “based on,” above and in the claims is intended to mean, “based at least in part on,” such that an unrecited feature or element is also permissible. The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter.
Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims. | 185,384 |
11860607 | DETAILED DESCRIPTION The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of elements and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “over,” “upper,” “on,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. As used herein, terms such as “first,” “second” and “third” describe various elements, components, regions, layers and/or sections; however, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be used only to distinguish one element, component, region, layer or section from another.
Terms such as “first,” “second” and “third,” when used herein, do not imply a sequence or order unless clearly indicated by the context. As used herein, the terms “approximately,” “substantially,” “substantial” and “about” are used to describe and account for small variations. When used in conjunction with an event or circumstance, the terms can refer to instances in which the event or circumstance occurs precisely as well as instances in which the event or circumstance occurs to a close approximation. Refer to FIG. 1. FIG. 1 is a schematic diagram illustrating a multi-chamber type cluster semiconductor manufacturing apparatus in accordance with some embodiments of the present disclosure. As shown in FIG. 1, the multi-chamber type cluster semiconductor manufacturing apparatus 1 may include a series of heterogeneous manufacturing units configured to perform different operations or implement different functions. In some embodiments, the manufacturing units may include load ports (e.g., front opening unified pods (FOUPs)) 10, atmosphere transfer module (ATM) robots F1, F2, an aligner X, load locks L1, L2, vacuum transfer module (VTM) robots V1, V2, first processing chambers such as chambers A1, A2, and second processing chambers such as chambers B1, B2. The load ports 10 are configured to hold cassettes for storing wafers. The ATM robots F1, F2 are configured to transfer wafers between the load ports 10 and the load locks L1, L2 in an atmospheric environment. The VTM robots V1, V2 are configured to transfer the wafer in a vacuum environment. The wafer is transferred to the aligner X for calibrating the orientation of the wafer before being transferred to the load lock L1 or L2. The load locks L1, L2 are spaces configured to load-in and load-out the wafer. During load-in, a front door of the load lock L1 or L2 opens to receive the wafer from the ATM robot F1 or F2. The front door is then closed, and the load lock L1 or L2 is pumped down to a vacuum environment.
A back door of the load lock L1 or L2 then opens, and the wafer is transferred to the chamber A1 or A2 by the VTM robot V1 or V2. During load-out, the back door of the load lock L1 or L2 opens to receive the wafer from the VTM robot V1 or V2. The back door of the load lock L1 or L2 is then closed, and the load lock L1 or L2 is ventilated to an atmospheric environment. The front door of the load lock L1 or L2 then opens, and the wafer is transferred to the load port 10 by the ATM robot F1 or F2. In some embodiments, the first processing chambers A1, A2 and the second processing chambers B1, B2 include different types of processing chambers for performing different operations, and may be in communication with a center chamber 12. In some embodiments, the multi-chamber type cluster semiconductor manufacturing apparatus 1 may include, but is not limited to, a deposition apparatus such as a physical vapor deposition (PVD) apparatus. The first processing chambers A1, A2 may be configured to pre-heat the wafers. The second processing chambers B1, B2 may be configured to perform a PVD operation on the wafers. In some other embodiments, the multi-chamber type cluster semiconductor manufacturing apparatus 1 may include a chemical vapor deposition (CVD) apparatus, an etching apparatus, a photolithography apparatus or the like. As shown in FIG. 1, the multi-chamber type cluster semiconductor manufacturing apparatus 1 can perform manufacturing operations on multiple wafers simultaneously and/or successively. At a given time point, some wafers may be processed in the first processing chambers A1, A2 and the second processing chambers B1, B2, some wafers may be handled by the ATM robots F1, F2 and the VTM robots V1, V2, some wafers may be in the load locks L1 and L2, and some wafers may be waiting to be loaded in. The manufacturing operations of the multi-chamber type cluster semiconductor manufacturing apparatus 1 are complex and continuously performed.
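The load-in/load-out cycle described above behaves like a small state machine. The sketch below encodes that cycle in Python with hypothetical state names (the disclosure does not specify an implementation); it simply validates that a sequence of load lock states follows the allowed transitions.

```python
from enum import Enum, auto

class LoadLockState(Enum):
    ATMOSPHERIC = auto()
    FRONT_OPEN = auto()
    PUMPING = auto()
    VACUUM = auto()
    BACK_OPEN = auto()
    VENTING = auto()

# Allowed transitions for the load-in / load-out cycle described above.
TRANSITIONS = {
    LoadLockState.ATMOSPHERIC: {LoadLockState.FRONT_OPEN, LoadLockState.PUMPING},
    LoadLockState.FRONT_OPEN: {LoadLockState.ATMOSPHERIC},
    LoadLockState.PUMPING: {LoadLockState.VACUUM},
    LoadLockState.VACUUM: {LoadLockState.BACK_OPEN, LoadLockState.VENTING},
    LoadLockState.BACK_OPEN: {LoadLockState.VACUUM},
    LoadLockState.VENTING: {LoadLockState.ATMOSPHERIC},
}

def validate_cycle(states):
    """Check that a sequence of load lock states follows allowed transitions."""
    for src, dst in zip(states, states[1:]):
        if dst not in TRANSITIONS[src]:
            raise ValueError(f"illegal transition {src.name} -> {dst.name}")
    return True

# Load-in: open front door at atmosphere, close it, pump down, open back door.
load_in = [LoadLockState.ATMOSPHERIC, LoadLockState.FRONT_OPEN,
           LoadLockState.ATMOSPHERIC, LoadLockState.PUMPING,
           LoadLockState.VACUUM, LoadLockState.BACK_OPEN]
validate_cycle(load_in)
```

A load-out sequence could be validated the same way; the point is only that the cycle is a fixed, checkable transition structure.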
Once one of the manufacturing operations for a wafer is idled, it may affect the manufacturing operations of other wafers, and it is very difficult to capture the root cause of the idle. In one or more embodiments of the present disclosure, a behavior recognition device cooperatively connected to one or more semiconductor manufacturing apparatuses is used to process log data of the one or more semiconductor manufacturing apparatuses, and to analyze behaviors related to manufacturing operations of manufacturing units of the one or more semiconductor manufacturing apparatuses. The behavior recognition device can receive the log data from the one or more semiconductor manufacturing apparatuses in real time, as the log data are generated while the manufacturing operations are performed. The behavior recognition device can process the log data in real time, and automatically build a machine learning model such as a transition state model. The machine learning model can automatically recognize the behaviors of all the manufacturing units, and generate behavior attributions for each wafer. Accordingly, the machine learning model can identify good/bad behaviors and capture the root cause of the bad behaviors with high accuracy; that is, behavior attributions can be learned. As used herein, the good/bad behaviors may be identified in terms of productivity. The good/bad behaviors may include good/bad sequences and good/bad activities. A good sequence refers to a wafer transfer sequence having a higher transition probability and/or following a standard loop of the wafer transfer sequences, while a bad sequence refers to a wafer transfer sequence having a lower transition probability and/or containing a redundant loop.
A good activity refers to an activity of a manufacturing unit having lower time consumption and/or being consistent with the activities of other manufacturing units, while a bad activity refers to an activity of a manufacturing unit having higher time consumption and/or being inconsistent with the activities of other manufacturing units. The behavior recognition device can further perform a simulation based on the transition state model. In some embodiments, the transition state model can be output to a simulator, all the behaviors, including the good behaviors and bad behaviors, can be reproduced in the simulator, and the manufacturing operations in the transition state model can be simulated by adjusting the control rule of the bad behavior. When the simulation result shows that adjusting the control rule can fix the bad behavior, the adjusted control rule can be adopted to perform the manufacturing operations on a new batch of wafers. FIG. 2 is a schematic diagram illustrating a semiconductor manufacturing system in accordance with some embodiments of the present disclosure. As shown in FIG. 2, the semiconductor manufacturing system 50 may include at least one semiconductor manufacturing apparatus 1 and a behavior recognition device 2. The semiconductor manufacturing apparatus 1 may include a multi-chamber type cluster semiconductor manufacturing apparatus 1 as illustrated in FIG. 1. The behavior recognition device 2 is cooperatively connected to the semiconductor manufacturing apparatus 1. As shown in FIG. 2, the behavior recognition device 2 is cooperatively connected to two semiconductor manufacturing apparatuses 1, for example. The number of semiconductor manufacturing apparatuses 1 is not limited. In addition to the manufacturing units as illustrated in FIG. 1, the semiconductor manufacturing system 50 may include a first control unit 34 and a first storage device 32.
The first control unit 34 is configured to control the manufacturing operations of the series of manufacturing units based on the control rules. In some embodiments, the control rules may include the job handling sequences of the manufacturing units, the wafer transfer sequence, operation parameters, etc. The first control unit 34 is also configured to generate log data recording the manufacturing operations of the series of manufacturing units. In some embodiments, the control rules of the semiconductor manufacturing system 50 are set by vendors, and may not all be known to the manufacturers. The log data, however, may record every manufacturing operation, such as what each manufacturing unit does and how long each manufacturing operation lasts in terms of time. The first storage device 32 is cooperatively connected to the first control unit 34 and configured to store the log data transferred from the first control unit 34. The behavior recognition device 2 includes a second storage device 42 and a second control unit 44. The second storage device 42 is cooperatively connected to the first control unit 34 of the semiconductor manufacturing apparatus 1, and configured to store the log data transferred from the first control unit 34. In some embodiments, the first control unit 34 not only transfers the log data to the first storage device 32 of the semiconductor manufacturing apparatus 1, but also duplicates the log data and transfers it to the second storage device 42 of the behavior recognition device 2. The first control unit 34 can transfer the log data to the second storage device 42 of the behavior recognition device 2 and to the first storage device 32 of the semiconductor manufacturing apparatus 1 in a real-time manner as the log data is generated, or in a postponed manner. The second control unit 44 is cooperatively connected to the second storage device 42, and configured to receive the log data from the second storage device 42.
The second control unit 44 can also build a transition state model to analyze behaviors related to the manufacturing operations of the series of manufacturing units based on the log data. In some embodiments, the first control unit 34 and the second control unit 44 are two micro control units (MCUs), and each may include a processor such as a central processing unit (CPU). The first control unit 34 and the second control unit 44 each may further include embedded memory for storing instructions. In some embodiments, the first storage device 32 and the second storage device 42 are two storage devices such as hard disks or the like. In some embodiments, the second storage device 42 of the behavior recognition device 2 may include a network attached storage (NAS). The second storage device 42 may be connected to the semiconductor manufacturing apparatus 1 in a wired manner or a wireless manner. In some embodiments, instead of processing the log data in the first storage device 32, the second control unit 44 processes the log data in the second storage device 42. This minimizes the workload of the first control unit 34 and the first storage device 32, and thus reduces the risk of overload occurring in the semiconductor manufacturing apparatus 1. In some other embodiments, the first control unit 34 and the second control unit 44 may be implemented as a single control unit, and/or the first storage device 32 and the second storage device 42 may be implemented as a single storage device, as long as the computing and data accessing abilities are high enough to perform both the manufacturing operations and the analysis of the log data at the same time. In some embodiments, the behavior recognition device 2 may further include a data transmission unit 46 cooperatively connected to the second storage device 42 and the second control unit 44.
The data transmission unit 46 may include an interface configured to transfer the log data of the semiconductor manufacturing apparatus 1 from the second storage device 42 to the second control unit 44 in a real-time manner or in a postponed manner. In some embodiments, the log data of different manufacturing units and/or different semiconductor manufacturing apparatuses 1 may be recorded in different formats. The second control unit 44 of the behavior recognition device 2 may further include a parser 48 configured to convert the log data recording the manufacturing operations of the manufacturing units in different formats into a uniform format as an input of the second control unit 44. In some embodiments, the parser 48 can be implemented by a hardware device. Additionally or alternatively, the parser 48 can be implemented by software or firmware. Refer to FIG. 3A. FIG. 3A is a schematic diagram of a machine learning model in accordance with some embodiments of the present disclosure. As shown in FIG. 3A, the machine learning model includes multiple tiers configured to execute different learning procedures. The machine learning model may adopt a neural-network-based algorithm. In some embodiments, the machine learning model includes a first tier (Tier 1), a second tier (Tier 2) and a third tier (Tier 3). Tier 1 may be configured to perform unit correlation learning from the perspective of each wafer (Wafer view). In Tier 1, wafer transfer sequences and time consumption for each of the wafers in the manufacturing units are learned based on the log data to construct a transition state model of wafer transfer sequences. In Tier 1, normal sequences and abnormal sequences can be identified based on the transition probabilities and/or loops of the wafer transfer sequences. Tier 2 may be configured to learn the activity of each manufacturing unit from the perspective of each manufacturing unit (Unit view).
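Because the parser 48 described above must normalize vendor-specific log formats into a uniform record, a minimal sketch of such a parser is shown below. The two raw log formats and the field names are invented for illustration; actual formats are vendor-specific and not given in the disclosure.

```python
import re
from datetime import datetime

# Hypothetical raw log formats for two kinds of manufacturing units.
ATM_PATTERN = re.compile(
    r"(?P<ts>\S+ \S+) ATM (?P<unit>F\d) (?P<action>\w+) wafer=(?P<wafer>\w+)")
VTM_PATTERN = re.compile(
    r"\[(?P<ts>[^\]]+)\] unit=(?P<unit>V\d);act=(?P<action>\w+);w=(?P<wafer>\w+)")

def parse_line(line):
    """Convert a log line in either format into one uniform record:
    {ts, unit, action, wafer}."""
    for pat in (ATM_PATTERN, VTM_PATTERN):
        m = pat.match(line)
        if m:
            return {
                "ts": datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S"),
                "unit": m["unit"],
                "action": m["action"],
                "wafer": m["wafer"],
            }
    raise ValueError(f"unrecognized log format: {line!r}")

records = [
    parse_line("2024-01-05 08:00:01 ATM F1 pick wafer=W1"),
    parse_line("[2024-01-05 08:00:04] unit=V1;act=place;w=W1"),
]
```

Once every line is in the uniform record form, the downstream tiers can treat logs from heterogeneous units and apparatuses identically.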
In Tier 2, the manufacturing operations of each manufacturing unit before and after arrivals of the wafers are learned to construct a transition state model of manufacturing unit operations. For example, the manufacturing operations of each of the manufacturing units, such as the ATM robot F1 or F2, the VTM robot V1 or V2, the first processing chamber A1 or A2, the second processing chamber B1 or B2, and the load lock L1 or L2, are learned. In Tier 2, the mapping between the wafer transfer sequence and the manufacturing operations of the manufacturing units may also be learned. In Tier 2, the manufacturing cycles of the manufacturing operations of the manufacturing units and a time consumption of each activity in the manufacturing cycles can be learned, and normal activities and abnormal activities can be identified based on the time consumption and the manufacturing cycles. Tier 3 may be configured to analyze behaviors related to the manufacturing operations of the series of manufacturing units. In Tier 3, good/bad sequences can be identified from the normal/abnormal sequences, and good/bad activities can be identified from the normal/abnormal activities in terms of productivity. For example, a bad sequence or bad activity can be confirmed when the sequence or activity results in a productivity loss compared to other sequences or activities. Furthermore, Tier 3 may learn behavior attributions for the bad behaviors, and capture a root cause of a bad behavior based on the behavior attributions. Tier 3 may be configured to learn control rules of the good behaviors and the bad behaviors to construct a transition state model by cross learning between Tier 1 and Tier 2 after Tier 1 and Tier 2 have been trained on a plurality of wafers. In some embodiments, the transition state models constructed in Tier 1, Tier 2 and/or Tier 3 may include, but are not limited to, a Markov chain model.
As shown in FIG. 3A, the machine learning model is constructed from multiple learning planes (e.g., Tier 1 and Tier 2) and one cross learning plane (e.g., Tier 3), so that the machine learning model may also be referred to as a 3D self-expansive cascade machine learning model. Refer to FIG. 3B. FIG. 3B is a schematic diagram illustrating transition states of the machine learning model of FIG. 3A in accordance with some embodiments of the present disclosure. In Tier 1, wafer transfer sequences and time consumption for each of the wafers in the manufacturing units are learned based on the log data to construct a transition state model of wafer transfer sequences. For example, the wafer transfer sequences among the ATM robot F1, the load locks L1 and L2 and the VTM robot V1, and the time consumption of the wafer among the above manufacturing units, are learned from the log data to construct a transition state model such as a Markov chain model. In the Markov chain model, each location of the wafer represents a state, and the probability of the Markov process changing from one state to another state (indicated by an arrow) can be learned from the log data. In Tier 2, the activities of each manufacturing operation for each manufacturing unit are learned to construct another transition state model. For example, the activities of the load lock L2 are learned from the log data. The load lock L2 may be in state a, in which the load lock L2 is in an atmospheric state, and several activities may be executed when the load lock L2 is in state a. The load lock L2 may be transitioned from state a to state a1, in which the front door of the load lock L2 is open to send a wafer. The load lock L2 may be transitioned from state a to state a2, in which the front door of the load lock L2 is open to receive a wafer. The load lock L2 may be transitioned from state a to state p, in which the load lock L2 is pumped. The load lock L2 may be transitioned from state p to state v, in which the load lock L2 is vacuumed.
The load lock L2 may be transitioned from state v to state v1, in which the back door of the load lock L2 is open to send a wafer. The load lock L2 may be transitioned from state v to state v2, in which the back door of the load lock L2 is open to receive a wafer. The load lock L2 may be transitioned from state v to state d, in which the load lock L2 is ventilated. The load lock L2 may be transitioned from state d to state a, in which the load lock L2 is in an atmospheric state. In some embodiments, the load lock L2 may be transitioned from state p to state s, in which the pressure in the load lock L2 is unstable, based on the log data. The state s may be identified as an abnormal behavior based on the occurrence probability and/or time consumption from the log data. Similar to the activities of the load lock L2, the activities of other manufacturing units can be learned to construct a transition state model. In Tier 3, the transition state models of Tier 1 and Tier 2 are cross-referenced to construct a transition model to identify the good/bad sequences from the normal/abnormal sequences learned in Tier 1, and to identify the good/bad activities from the normal/abnormal activities learned in Tier 2. Tier 3 can also learn the control rules including job handling sequences of the manufacturing units, operation parameters, time consumption, etc. FIG. 4 is a schematic diagram illustrating the relation between good/bad sequences and normal/abnormal sequences, and the relation between good/bad activities and normal/abnormal activities. As shown in FIG. 4, a bad sequence identified in Tier 3 may be identified as either a normal sequence or an abnormal sequence in Tier 1, and a bad activity identified in Tier 3 may be identified as either a normal activity or an abnormal activity in Tier 2.
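The transition probabilities of such a Markov chain can be estimated by counting transitions in the logged state sequences, and a rare transition such as the one into the unstable-pressure state s can then be flagged as a candidate abnormal behavior. The sketch below illustrates this; the 0.05 threshold and the toy log data are assumptions, not values from the disclosure.

```python
from collections import defaultdict

def learn_transition_model(state_sequences):
    """Estimate Markov chain transition probabilities from observed
    state sequences (e.g., load lock states parsed from the log data)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in state_sequences:
        for src, dst in zip(seq, seq[1:]):
            counts[src][dst] += 1
    return {src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
            for src, dsts in counts.items()}

def flag_rare_transitions(model, threshold=0.05):
    """Transitions below the threshold are candidate abnormal behaviors,
    like the unstable-pressure state s reached from the pumping state p."""
    return [(src, dst, p)
            for src, dsts in model.items()
            for dst, p in dsts.items() if p < threshold]

# Toy logs: 19 normal cycles a -> p -> v -> d -> a, plus one cycle that
# passes through the abnormal unstable-pressure state s.
logs = [["a", "p", "v", "d", "a"]] * 19 + [["a", "p", "s", "p", "v", "d", "a"]]
model = learn_transition_model(logs)
rare = flag_rare_transitions(model)
```

Here the transition p -> s occurs once among 21 departures from state p, so it falls below the threshold and is flagged, matching the idea that state s is identified as abnormal from its occurrence probability.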
The machine learning model may output the control rules of the manufacturing units of the semiconductor manufacturing apparatus based on the log data, and behaviors of the manufacturing units can be evaluated to find the root cause of an abnormal behavior. Refer to FIG. 5. FIG. 5 is a schematic diagram illustrating the job execution order of an ATM robot between the load ports (e.g., FOUPs) and load locks in accordance with some embodiments of the present disclosure. As shown in FIG. 5, Job 1 is an activity that transfers a wafer W1 from load port 10A to load lock L1 by the ATM robot F1, and Job 2 is an activity that transfers a wafer W2 from load lock L2 to load port 10B by the ATM robot F1. In some embodiments, the wafer W1 may be aligned by an aligner X in Job 1, and the wafer W2 may be cooled in a cool station Y in Job 2. In some embodiments, the default location of the ATM robot F1 is set to be closer to the load ports 10A, 10B. Table 1 lists the control rules of Job 1 and Job 2 and the waste time for different job orders, based on the output of the machine learning model learned from the log data.

TABLE 1

Job queue  Job order  Control rule                                            Waste time
1          Job 1      Job 1 arrives; Job 2 does not arrive                    0 seconds
1          Job 1      Job 2 arrives; Job 1 does not arrive                    0.3 seconds
2          Job 2      Job 2 arrival is ahead of Job 1 by 6 seconds or less    6.7 seconds
2          Job 2      Job 2 arrival is ahead of Job 1 by more than 6 seconds  0.4 seconds

As shown in Table 1, the control rule giving higher priority to Job 1 causes the 6.7 seconds of waste time. Accordingly, this control rule may be reconsidered to reduce the waste time. Refer to FIG. 6. FIG. 6 is a schematic diagram illustrating the job execution order of a load lock in accordance with some embodiments of the present disclosure. The load lock L may include a stack load lock which can accommodate two wafers W3 and W4.
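For illustration only, the job-ordering behavior and the observed waste times of Table 1 can be encoded as a small lookup; the function signature and the arrival-time representation below are assumptions introduced for this sketch.

```python
def waste_time(job1_arrival, job2_arrival):
    """Waste time in seconds for the ATM robot's job ordering, encoding
    the rules and measurements of Table 1. Arrival times are in seconds;
    None means the job does not arrive while the other is queued."""
    if job2_arrival is None:
        return 0.0                           # only Job 1 queued: no waste
    if job1_arrival is None:
        return 0.3                           # only Job 2 queued
    lead = job1_arrival - job2_arrival       # how far Job 2 arrived ahead of Job 1
    if lead > 6:
        return 0.4
    if lead > 0:
        return 6.7                           # Job 1's higher priority makes Job 2 wait
    return 0.0

waste_time(5.0, 0.0)   # Job 2 arrived 5 s ahead of Job 1
```

Encoding the learned rule this way makes the costly branch explicit: whenever Job 2 arrives at most 6 seconds ahead of Job 1, the rule's preference for Job 1 produces the 6.7-second loss that the model flags.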
As shown in FIG. 6, wafer W3 is an un-processed wafer to be transferred to the main frame, such as the center chamber 12 of the semiconductor manufacturing apparatus, and wafer W4 is a processed wafer to be transferred from the center chamber 12 to the load port 10. The load lock L is in a vacuum state, and wafers W3 and W4 are in the load lock L. Job 1 is an activity that keeps the load lock L vacuumed and transfers wafer W3 to an available manufacturing unit when one becomes available. Job 2 is an activity that ventilates the load lock L to an atmospheric state and transfers wafer W4 to the load port 10. Table 2 lists the control rules of Job 1 and Job 2 and the waste time for different job orders, based on the output of the machine learning model learned from the log data.

TABLE 2

Job queue  Job order            Control rule              Waste time
1          Wafer W3 > Wafer W4  Wafer W4 does not arrive  0 seconds
2          Wafer W4 > Wafer W3  Wafer W4 arrives          17.4 seconds

As shown in Table 2, the control rule shows that the control unit will switch the load lock L from the atmospheric state to the vacuumed state when any processed wafer arrives, which causes 17.4 seconds of waste time. Accordingly, this control rule may be reconsidered to reduce the waste time. In some embodiments of the present disclosure, the behavior recognition device cooperatively connected to the semiconductor manufacturing apparatus can construct a transition state model based on the log data to automatically recognize the behaviors, including the sequences of all the wafers and the activities of all the manufacturing units, and generate behavior attributions for each wafer. Accordingly, the machine learning model can identify good/bad sequences of the wafers and good/bad activities of the manufacturing units, and capture the root cause(s) of the bad behaviors with high accuracy. The machine learning model can further learn the control rules related to the bad behaviors, thereby capturing the root cause.
Refer to FIG. 7. FIG. 7 is a flow chart illustrating a semiconductor manufacturing method in accordance with various aspects of one or more embodiments of the present disclosure. The method 200 may proceed with operation 210, following method 100, in which a transition state model of all manufacturing units is output to a simulator. The method 200 proceeds with operation 212, in which the bad behavior is reproduced based on the transition state model in the simulator. The method 200 proceeds with operation 214, in which the solution to the bad behavior is found by changing the transition state model. The method 200 proceeds with operation 216, in which the manufacturing operations of the manufacturing units are performed on a plurality of wafers with the adjusted control rule based on a result of the simulation. The method 200 is merely an embodiment, and is not intended to limit the present disclosure beyond what is explicitly recited in the claims. Additional operations can be provided before, during, and after the method 200, and some operations described can be replaced, eliminated, or moved around for additional embodiments of the method. The behavior recognition device can further perform a simulation based on the transition state model. In some embodiments, the transition state model can be output to a simulator, all the behaviors including the abnormal behavior can be reproduced in the simulator, and the manufacturing operations in the transition state model can be simulated by adjusting the control rule of the abnormal behavior. When the simulation result shows that adjusting the control rule can fix the abnormal behavior, the adjusted control rule can be adopted to perform the manufacturing operations on a new batch of wafers. FIG. 8 is a schematic diagram illustrating a scheme of a simulator in accordance with some embodiments of the present disclosure.
As shown in FIG. 8, the transition state model constructed based on the log data can be output to a simulator 60. In some embodiments, the manufacturing operations, optional parameters and control rules of the manufacturing units of the semiconductor manufacturing apparatus can be stored using a universal tool command language (TCL) application or interface, and configured as role scripts. The simulator 60 may include a control unit to perform the simulation, and a storage device to store the transition state model. In some embodiments, the control unit and/or the storage device of the simulator 60 may be different from those of the behavior recognition device. In some other embodiments, the simulator 60 and the behavior recognition device may share a common control unit and/or storage device. A simulation can be executed by an executor of the simulator 60 to reproduce the behavior of the manufacturing units of the semiconductor manufacturing apparatus. The reproduction of the behavior of the manufacturing units of the semiconductor manufacturing apparatus can be used to calibrate the simulator 60. When the simulation result is similar to the manufacturing operation of the manufacturing units, the manufacturing operations can be simulated in the transition state model by adjusting the control rule of the manufacturing units. When the adjusted control rule improves the performance (e.g., reduction of waste time) in the simulation result, the adjusted control rule can be adopted to perform the manufacturing operations of the series of manufacturing units on a plurality of wafers in real manufacturing.
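A toy version of such a simulation, replaying a transition state model for a batch of wafers and comparing total time under two control rules, might look like the following. The model structure (state -> list of (next state, probability, dwell seconds)), the rule-waste mapping, and the dwell times are illustrative assumptions; the 6.7 s and 0.3 s waste figures echo Table 1.

```python
import random

def simulate(transition_model, rule_waste, n_wafers=100, seed=0):
    """Replay the transition state model for a batch of wafers, summing
    per-transition dwell times plus waste caused by the control rule."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_wafers):
        state = "load_in"
        while state != "load_out":
            choices = transition_model[state]
            nxt, _, dwell = rng.choices(
                choices, weights=[p for _, p, _ in choices])[0]
            total += dwell + rule_waste.get((state, nxt), 0.0)
            state = nxt
    return total

# Deliberately tiny model: load-in -> process -> load-out.
model = {
    "load_in": [("process", 1.0, 12.0)],
    "process": [("load_out", 1.0, 30.0)],
}
baseline = simulate(model, {("load_in", "process"): 6.7})  # original rule
adjusted = simulate(model, {("load_in", "process"): 0.3})  # adjusted rule
saving = baseline - adjusted   # productivity gained by the adjusted rule
```

When the simulated saving is positive and the replay matches the observed behavior, the adjusted rule would be the candidate to adopt for the next batch of wafers, mirroring the calibrate-then-adjust flow described above.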
In some embodiments of the present disclosure, a behavior recognition device cooperatively connected to one or more semiconductor manufacturing apparatuses is used to process log data of the one or more semiconductor manufacturing apparatuses, and to analyze behaviors related to wafer transfer sequences and manufacturing operations of manufacturing units of the one or more semiconductor manufacturing apparatuses. The behavior recognition device can automatically build a machine learning model such as a transition state model. The machine learning model can automatically recognize the behaviors of all the wafers and the manufacturing units, and generate behavior attributions for each wafer and each activity of each manufacturing unit. Accordingly, the machine learning model can identify good behaviors and bad behaviors of the manufacturing units in terms of productivity loss, capture the root cause(s) of the bad behaviors, and learn the control rules of the manufacturing units. The behavior recognition device can further perform a simulation based on the transition state model. In some embodiments, a semiconductor manufacturing system includes at least one semiconductor manufacturing apparatus and a behavior recognition device. The at least one semiconductor manufacturing apparatus includes a series of manufacturing units, a first control unit and a first storage device. The series of manufacturing units are configured to perform manufacturing operations on wafers. The first control unit is configured to control the manufacturing operations of the series of manufacturing units, and generate log data recording the manufacturing operations of the series of manufacturing units. The first storage device is cooperatively connected to the first control unit and configured to store the log data transferred from the first control unit. The behavior recognition device is cooperatively connected to the semiconductor manufacturing apparatus.
The behavior recognition device includes a second storage device and a second control unit. The second storage device is cooperatively connected to the first control unit of the semiconductor manufacturing apparatus, and configured to store the log data transferred from the first control unit. The second control unit is cooperatively connected to the second storage device, and configured to receive the log data from the second storage device and build a transition state model to analyze behaviors related to the manufacturing operations of the series of manufacturing units based on the log data. In some embodiments, a semiconductor manufacturing method includes the following operations. Inputs of log data recording manufacturing operations of a series of manufacturing units of at least one semiconductor manufacturing apparatus performed on a plurality of wafers are received by a storage device. A transition state model is built by a control unit based on the log data. The transition state model performs analyzing behaviors related to wafer transfer sequences and activities of the manufacturing operations of the series of manufacturing units based on the log data, generating behavior attributions for the behaviors, and capturing a root cause of a bad behavior based on the behavior attributions. In some embodiments, a behavior recognition device for recognizing behaviors of a semiconductor manufacturing apparatus includes a storage device and a control unit. The storage device is configured to store log data of the semiconductor manufacturing apparatus. The control unit is cooperatively connected to the storage device, and configured to build a transition state model based on the log data to analyze behaviors related to wafer transfer sequences and manufacturing operations of a plurality of manufacturing units of the semiconductor manufacturing apparatus.
The foregoing outlines structures of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
11860608 | DETAILED DESCRIPTION It should be noted that, the following detailed descriptions are all exemplary, and are intended to provide further descriptions of the present invention. Unless otherwise specified, all technical and scientific terms used herein have the same meanings as those usually understood by a person of ordinary skill in the art to which the present invention belongs. It should be noted that the terms used herein are merely used for describing specific implementations, and are not intended to limit exemplary implementations of the present invention. For example, unless otherwise specified in the context, singular forms are also intended to include plural forms. In addition, it should be further understood that, when the terms “comprise” and/or “include” are used in this specification, it indicates that there is a feature, a step, an operation, a device, a component, and/or a combination thereof. The embodiments in the present invention and features in the embodiments may be mutually combined in case that no conflict occurs. Interpretation of Terms: A complex network is a network presenting high complexity and the complexity is mainly represented in the following aspects: (1) a complex structure, represented by a large quantity of nodes, and that the network presents a plurality of different features; (2) network evolution, represented by generation and disappearance of nodes or connections; (3) connection diversity: there are differences in connection weights between nodes, and the connection may be directional; (4) dynamic complexity: a node set may belong to the nonlinear dynamic system, for example, a status of the node changes complexly with time; (5) node diversity: nodes in a complex network may represent anything; and (6) multi-complexity integration, that is, the foregoing multi-complexities affect each other. 
Wavelet analysis is a multi-resolution analysis method in which an adaptive operation can be simultaneously performed in the time domain and the frequency domain. During wavelet analysis, a signal (function) is gradually divided by using a scaling and translation operation, to finally achieve time division at a high frequency and frequency division at a low frequency, and automatically adapt to a requirement of time-frequency signal analysis, to focus on any detail of the signal. The Spearman correlation coefficient is also referred to as the Spearman rank correlation coefficient. "Rank" may be understood as a sequence or sorting order, which indicates that the computation is carried out on the sorting positions (ranks) of the raw data rather than on the raw values themselves. This rank-based form is free of the distributional restrictions that apply when a Pearson correlation coefficient is calculated. Mathematically, the Spearman correlation coefficient measures the rank correlation between two columns of variables; it is independent of the specific values of the variables and depends only on the relative relationship (size ordering) between them. The Spearman correlation coefficient is used herein as a weight of a node edge in a network model. A support vector machine is a binary classification model of which a purpose is to find a hyperplane to separate the samples. The principle of separation is margin maximization, which is finally transformed into a convex quadratic programming problem to solve. When a training sample is linearly separable, a linearly separable support vector machine is learned through hard margin maximization; when the training sample is approximately linearly separable, a linear support vector machine is learned through soft margin maximization; when the training sample is linearly inseparable, a nonlinear support vector machine is learned through the kernel trick and soft margin maximization.
Embodiment 1 This embodiment discloses an industrial equipment operation, maintenance and optimization method based on a complex network model. As shown inFIG.1, the method includes the following steps: Step 1: Obtain data of all sensors of industrial equipment, and calculate a Spearman correlation coefficient between data of every two of the sensors. Different sensors on the same equipment are used as nodes of a network, and a Spearman correlation coefficient between data of the sensors within the same time period is used as a weight of a network edge, to construct a fully connected weighted complex network oriented to data. Descriptions are provided by using a boiler in a thermal power generation scenario as an example. In the thermal power generation production scenario, fuel heats water to generate steam when the fuel is burned. The steam pressure drives a turbine to rotate, and then the turbine drives a generator to rotate to generate electricity. In this series of energy conversion, the core that affects the efficiency of power generation is the combustion efficiency of the boiler, that is, fuel is burned to heat water, to generate high temperature and high pressure steam. There are many factors affecting the combustion efficiency of the boiler, including adjustable parameters of the boiler, such as a combustion feed rate, primary and secondary air, induced air, return air, and water supply; and working conditions of the boiler, such as boiler temperature and pressure, furnace temperature and pressure, and a superheater temperature. Relevant data of the foregoing influencing factors may be acquired by using corresponding sensors. Step 1 specifically includes: Step 1.1: Select a network node, specifically, select all sensors of the boiler as the network nodes. 
Step 1.2: Select the same time period t, and summarize the data collected by the sensors, the first column of data being data collected by a first sensor V0, the second column of data being data collected by a second sensor V1, and so on. A data set V = [V0, V1, . . . , Vn] of working states of all the sensors of the boiler may be obtained, where Vi represents a sensor name on the boiler. Step 1.3: Process missing data, specifically, process the data sequence returned from each sensor as a time sequence, and in these sequences, if the value of a sequence at a moment is NULL (that is, the sensor is abnormal at that moment and does not capture data), delete the data of all the sequences at that moment regardless of whether other sequences have acquired data at the moment. In this way, the missing data is invalidated to facilitate subsequent mining of association rules. Step 1.4: Process the noise, specifically, denoise each signal by using one layer of db4 wavelet, so that most of the signal noise can be filtered out after wavelet transform. The wavelet transform is performed by using the following formula, where α is a scale, and τ is a translation amount: $$WT(\alpha,\tau)=\frac{1}{\sqrt{\alpha}}\int_{-\infty}^{\infty}f(t)\,\varphi^{*}\!\left(\frac{t-\tau}{\alpha}\right)dt$$ Step 1.5: Process a data distribution difference, where the data distribution difference is processed by using the Spearman correlation coefficient this time, and analyze the correlation between the data. Correlation coefficients between the sensors are calculated by using the Spearman correlation coefficient according to the data set V constructed in step 1.2, and the calculation formula is as follows: $$\rho=1-\frac{6\sum_{i=1}^{N}d_{i}^{2}}{N(N^{2}-1)}$$ A correlation coefficient matrix A may be obtained through calculation. A specific calculation process is provided below by using the correlation between the signal V0 and the signal V1 as an example: (1) Data of the column V0 and the column V1 is sorted according to data size, to obtain data sets V0* and V1*, where V0* = [v00, v10, . . . , vn0], and then a new column xi = [1, 2, 3, . . .
, n] is created to assign each datum of V0* a rank value. Similarly, V1* = [v01, v11, . . . , vn1] and a rank value sequence yi = [1, 2, 3, . . . , n] of V1* may be obtained. (2) Further, di² may be obtained through calculation: $$d_{1}^{2}=(x_{1}-y_{1})^{2},\quad d_{2}^{2}=(x_{2}-y_{2})^{2},\ \ldots,\ d_{n}^{2}=(x_{n}-y_{n})^{2}$$ (3) Finally, the correlation coefficient $$\rho_{0\text{-}1}=1-\frac{6(d_{1}^{2}+d_{2}^{2}+\cdots+d_{n}^{2})}{n(n^{2}-1)}$$ between the signal V0 and the signal V1 may be obtained through calculation. By analogy, the correlation coefficient ρ between each other pair of sensors may be separately calculated. Step 1.6: For each piece of sensor data, remove sensor data of which the correlation with that sensor data is less than a specified threshold. For example, a feature (sensor) of which the correlation coefficient is less than 0.1 is removed. Step 2: Use each sensor as a node, and use the Spearman correlation coefficient between data of the sensors within the same time period as a weight of a network edge, to construct a fully connected weighted network, as shown inFIG.2. Step 3: Perform an appropriateness check on the correlation coefficients in the fully connected weighted complex network. In this embodiment, the appropriateness check is performed only on one or more features, specifically including the following steps: receiving a selection of a user for a production target, obtaining a correlation coefficient matrix between data of a sensor corresponding to the production target and data of another sensor, and checking the appropriateness of the correlation coefficients based on a support vector regression model. Step 3 specifically includes: Step 3.1: Receive a selection of a user for a production target. Specifically, data of a sensor is selected as a main production target of the equipment according to actual service experience. Using the boiler as an example, a steam amount may be selected as the main production target. Then a correlation coefficient matrix A between data of the sensor corresponding to the production target and data of the other sensors is obtained.
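The rank-based calculation in steps (1) to (3) can be sketched in plain Python (a minimal illustration that assumes no tied values; handling ties would require averaging ranks):

```python
def spearman(v0, v1):
    """Spearman rank correlation following steps (1)-(3):
    rank each series, take squared rank differences d_i^2, then
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(v0)

    def ranks(values):
        # Position of each datum after sorting, 1-based (the "rank value")
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    x, y = ranks(v0), ranks(v1)
    d2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone series -> rho = 1; reversed ordering -> rho = -1
print(spearman([1.0, 2.5, 3.1, 4.8], [10, 20, 30, 40]))   # 1.0
print(spearman([1.0, 2.5, 3.1, 4.8], [40, 30, 20, 10]))   # -1.0
```

Because only the ordering of the values enters the formula, the result is unaffected by the differing scales and distributions of the sensor signals.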
Step 3.2: Perform absolute value processing on the correlation coefficient matrix A to obtain a matrix B, and remove a feature (sensor) of which the correlation coefficient with the target is less than 0.1 by using the matrix B. Step 3.3: Construct a prediction model by using a support vector regression algorithm according to the remaining features in step 3.2, and predict the target value selected in step 3.1, where the prediction step is as follows: After the correlation coefficients are obtained through calculation, the correlation between the sensors is checked by using a support vector machine model. Main steps of the check are as follows: (1) Divide the selected data into two parts: a training set and a test set. (2) Perform cross validation by using data of the training set, to train the support vector machine model. (3) Predict the target value on the test set by using the trained model. (4) Compare the predicted result with the actual result, and determine the quality of the predicted result by using a root mean square error. A support vector regression (SVR) model in the support vector machine family is selected to perform prediction, and a derivation thereof is as follows: For a general regression problem, a training sample D = {(x1, y1), (x2, y2), . . . , (xn, yn)}, yi ∈ R is given, and a linear model f(x) = ωᵀx + b that approximates y to the greatest extent is expected to be learned, where ω and b are to-be-determined parameters. In such a model, the loss is zero only when f(x) and y are exactly equal. In the support vector regression model, however, it is assumed that a maximum deviation of ϵ between f(x) and y can be tolerated, and a loss is counted only when the absolute value of the difference between f(x) and y is greater than ϵ. In this case, it is equivalent to constructing an interval band with a width of 2ϵ centered on f(x); if a training sample falls within the interval band, it is considered to be predicted correctly.
Therefore, the SVR problem may be formalized as: $$\min_{\omega,b}\ \frac{1}{2}\lVert\omega\rVert^{2}+C\sum_{i=1}^{m}\ell_{\epsilon}\bigl(f(x_i)-y_i\bigr)\qquad(3)$$ C is a regularization constant, and ℓϵ is the ϵ-insensitive loss function, which satisfies the following condition: $$\ell_{\epsilon}(z)=\begin{cases}0, & \text{if } \lvert z\rvert\le\epsilon\\ \lvert z\rvert-\epsilon, & \text{otherwise}\end{cases}\qquad(4)$$ Further, slack variables εi and ε̂i may be introduced, and (3) is rewritten into the following form: $$\min_{\omega,b,\varepsilon_i,\hat{\varepsilon}_i}\ \frac{1}{2}\lVert\omega\rVert^{2}+C\sum_{i=1}^{m}(\varepsilon_i+\hat{\varepsilon}_i)\qquad(5)$$ $$\text{s.t.}\quad f(x_i)-y_i\le\epsilon+\varepsilon_i,\quad y_i-f(x_i)\le\epsilon+\hat{\varepsilon}_i,\quad \varepsilon_i\ge 0,\ \hat{\varepsilon}_i\ge 0,\ i=1,2,\ldots,m.$$ Then, Lagrange multipliers are introduced, and a Lagrange function can be obtained by using the Lagrange multiplier method: $$L(\omega,b,\alpha,\hat{\alpha},\varepsilon,\hat{\varepsilon},\mu,\hat{\mu})=\frac{1}{2}\lVert\omega\rVert^{2}+C\sum_{i=1}^{m}(\varepsilon_i+\hat{\varepsilon}_i)-\sum_{i=1}^{m}\mu_i\varepsilon_i-\sum_{i=1}^{m}\hat{\mu}_i\hat{\varepsilon}_i+\sum_{i=1}^{m}\alpha_i\bigl(f(x_i)-y_i-\epsilon-\varepsilon_i\bigr)+\sum_{i=1}^{m}\hat{\alpha}_i\bigl(y_i-f(x_i)-\epsilon-\hat{\varepsilon}_i\bigr)\qquad(6)$$ Further, the dual problem of SVR may be obtained: $$\max_{\alpha,\hat{\alpha}}\ \sum_{i=1}^{m}\Bigl[y_i(\hat{\alpha}_i-\alpha_i)-\epsilon(\hat{\alpha}_i+\alpha_i)\Bigr]-\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}(\hat{\alpha}_i-\alpha_i)(\hat{\alpha}_j-\alpha_j)x_i^{\mathsf T}x_j\qquad(7)$$ $$\text{s.t.}\quad \sum_{i=1}^{m}(\hat{\alpha}_i-\alpha_i)=0,\quad 0\le\alpha_i,\hat{\alpha}_i\le C.$$ When the foregoing conditions meet the KKT conditions, it can be learned that αi can take a non-zero value when and only when f(xi) − yi − ϵ − εi = 0. Similarly, α̂i can take a non-zero value when and only when yi − f(xi) − ϵ − ε̂i = 0. In other words, only when the sample (xi, yi) does not fall within the ϵ interval band can the corresponding αi and α̂i take non-zero values. In addition, the foregoing two constraints cannot hold at the same time; therefore, at least one of αi and α̂i is zero. Based on this, the solution of SVR may be obtained as follows: $$f(x)=\sum_{i=1}^{m}(\hat{\alpha}_i-\alpha_i)x_i^{\mathsf T}x+b\qquad(8)$$ $$b=y_i+\epsilon-\sum_{i=1}^{m}(\hat{\alpha}_i-\alpha_i)x_i^{\mathsf T}x\qquad(9)$$

TABLE 1. Model prediction result

Model     | Training set MSE | Test set MSE
LinearSVR | 0.0961           | 0.1158

When the prediction result meets the expectation, it is considered that the Spearman correlation coefficient can well represent the correlation between the sensors of the boiler.
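The check in steps (1) to (4) can be sketched with scikit-learn's LinearSVR; the synthetic data, feature count, and parameter choices here are illustrative assumptions, not the boiler data that produced Table 1:

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import LinearSVR
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the retained sensor features and the target
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([0.5, -1.0, 0.3, 0.0, 0.8]) + 0.05 * rng.normal(size=200)

# (1) divide the data into a training set and a test set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# (2) cross-validate on the training set to train/tune the model
model = LinearSVR(C=1.0, epsilon=0.0, max_iter=10000)
cv_scores = cross_val_score(model, X_tr, y_tr, cv=5,
                            scoring="neg_mean_squared_error")

# (3) predict the target value on the test set with the trained model
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

# (4) judge the quality of the prediction with the (root) mean squared error
print("cv mse:", -cv_scores.mean())
print("test mse:", mean_squared_error(y_te, pred))
```

A small gap between the training-set and test-set errors, as in Table 1, suggests the correlation-selected features generalize rather than overfit.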
After the correlation check succeeds, sensors on the boiler equipment are selected as nodes of a complex network, and the Spearman correlation coefficient obtained through calculation is used as the weight of the network node edge, to construct the fully connected weighted complex network oriented to industrial big data. Step 4: Obtain, when a parameter adjustment instruction for a target is received, a currently optimal parameter adjustment path of the target based on the fully connected weighted network. The optimal parameter adjustment path includes features directly related to the production target and features indirectly related to the production target. This step is to perform association rule mining on the monitoring factors of the equipment. When the user needs to adjust the production target, a plurality of directly related features of which correlations with the production target are greater than a specified threshold are searched for based on the fully connected weighted network, and then a plurality of indirectly related features of which correlations with the plurality of directly related features are greater than a specified threshold are searched for separately. The directly related features, the indirectly related features, and the correlation coefficient are visualized for the user's reference. The visualization may be implemented in any existing visualization methods such as a tree form and an undirected graph, and this is not limited herein. The user may select a feature according to a visualization result, and adjust a corresponding parameter. Embodiment 2 An objective of this embodiment is to provide an industrial equipment parameter adjustment path generation system based on a sensor network model. 
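The two-hop search over the weighted network described above can be sketched with networkx; the sensor names, weights, and the `adjustment_paths` helper are hypothetical illustrations, not values taken from the boiler example:

```python
import networkx as nx

# Hypothetical sensor correlations (edge weights = |Spearman coefficient|)
G = nx.Graph()
G.add_weighted_edges_from([
    ("steam_amount", "fuel_feed_rate", 0.82),
    ("steam_amount", "furnace_temp", 0.64),
    ("steam_amount", "return_air", 0.05),
    ("fuel_feed_rate", "primary_air", 0.71),
    ("furnace_temp", "water_supply", 0.58),
])

def adjustment_paths(graph, target, threshold=0.1):
    """Directly related features of the target, and for each of them the
    indirectly related features, keeping only edges above the threshold."""
    direct = [n for n in graph[target]
              if graph[target][n]["weight"] >= threshold]
    indirect = {d: [n for n in graph[d]
                    if n != target and graph[d][n]["weight"] >= threshold]
                for d in direct}
    return direct, indirect

direct, indirect = adjustment_paths(G, "steam_amount")
print(direct)                        # ['fuel_feed_rate', 'furnace_temp']
print(indirect["fuel_feed_rate"])    # ['primary_air']
```

The returned direct/indirect lists could then be visualized as a tree or undirected graph for the user's reference, as the method describes.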
To achieve the foregoing objective, the present invention uses the following technical solution: This embodiment provides an industrial equipment parameter adjustment path generation system based on a sensor network model, including: a data obtaining module, configured to obtain data of all sensors of industrial equipment; a network construction module, configured to calculate a Spearman correlation coefficient between data of every two of the sensors within the same time period; use each sensor as a node, and use the Spearman correlation coefficient as a weight of a network edge, to construct a fully connected weighted network; and a parameter adjustment path generation module, configured to obtain, when an adjustment instruction for a target feature is received, a currently optimal parameter adjustment path of the target feature based on the fully connected weighted network. Embodiment 3 An objective of this embodiment is to provide an electronic device. An electronic device includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, when executing the program, the processor implementing the following steps, including: obtaining data of all sensors of industrial equipment, and calculating a Spearman correlation coefficient between data of every two of the sensors within the same time period; using each sensor as a node, and using the Spearman correlation coefficient as a weight of a network edge, to construct a fully connected weighted network; and obtaining, when an adjustment instruction for a target feature is received, a currently optimal parameter adjustment path of the target feature based on the fully connected weighted network. Embodiment 4 An objective of this embodiment is to provide a computer-readable storage medium. 
A computer-readable storage medium stores a computer program, when executed by a processor, the program implementing the following steps: obtaining data of all sensors of industrial equipment, and calculating a Spearman correlation coefficient between data of every two of the sensors within the same time period; using each sensor as a node, and using the Spearman correlation coefficient as a weight of a network edge, to construct a fully connected weighted network; and obtaining, when an adjustment instruction for a target feature is received, a currently optimal parameter adjustment path of the target feature based on the fully connected weighted network. The steps involved in the foregoing Embodiment 2, Embodiment 3, and Embodiment 4 correspond to Embodiment 1. For a specific implementation, refer to related descriptions of Embodiment 1. The term "computer-readable storage medium" should be understood as a single medium or a plurality of media including one or more instruction sets, and should also be understood as including any medium. Any such medium can store, encode, or carry an instruction set for execution by a processor, and cause the processor to perform any method in the present invention. The foregoing one or more embodiments have the following technical effects: In the present invention, production equipment in reality is digitized by using industrial big data, to construct a complex network oriented to the industrial big data, and various factors of the equipment are connected. By using the network, an optimal path for equipment parameter tuning may be found by traversing paths on the network, thereby reducing dependence of an enterprise on a domain expert. In the model constructed in the present invention, the correlation between data is calculated by using the Spearman correlation coefficient. In this way, the overall distribution of the data and the sample size can be safely ignored, resolving problems of industrial big data in these aspects.
In addition, the model is completely data-driven, so that an impact of service knowledge on the data model is greatly eliminated. A person skilled in the art should understand that the modules or steps in the present invention may be implemented by using a general-purpose computer apparatus. Optionally, they may be implemented by using program code executable by a computing apparatus, so that they may be stored in a storage apparatus and executed by the computing apparatus. Alternatively, the modules or steps are respectively manufactured into various integrated circuit modules, or a plurality of modules or steps thereof are manufactured into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software. The foregoing descriptions are merely preferred embodiments of the present invention, but are not intended to limit the present invention. A person skilled in the art may make various alterations and variations to the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention. Although the foregoing describes specific implementations of the present invention with reference to the accompanying drawings, the protection scope of the present invention is not limited. A person skilled in the art should understand that, based on the technical solutions of the present invention, various modifications or variations made by a person skilled in the art without creative efforts shall still fall within the protection scope of the present invention.
11860609 | DETAILED DESCRIPTION One or more specific embodiments of the present disclosure will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. When introducing elements of various embodiments of the present disclosure, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of these elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. Industrial automation systems may be used in various contexts, such as a manufacturing plant, a resource extraction system, a hydrocarbon extraction site, a chemical refinery facility, an industrial plant, a power generation system, a mining system, a brewery, or the like. For example, in a resource extraction system context, a drive associated with a control system may control load and position of a rod pump to perform an oil extraction process. Although examples are provided with regard to specific contexts, one of ordinary skill in the art will recognize that these examples are not intended to be limiting and that the techniques described herein can be used with any suitable context.
To improve operation of industrial automation systems, components of the industrial automation system (e.g., supervisory control system) may monitor performance of one or more devices (e.g., operator control station) with respect to the industrial automation process as a whole. Statuses and/or information from the one or more components (e.g., supervisory control system) may be transmitted to respective control systems of drives associated with the one or more devices via an Ethernet network. Respective control systems of the drives may use the statuses and/or information to make control decisions related to the one or more devices controlled or coupled to the drive. As mentioned above, each drive may be housed in a control cabinet associated with the industrial automation system, and each drive may include a control system to control operations of respective components (e.g., load devices, motor). To enable the supervisory control system to receive statuses and/or information from the one or more devices (e.g., operator control station), each device may be connected to an Ethernet network. In some cases, each device may be connected to the Ethernet network via a separate Ethernet cable. However, wiring each device to the Ethernet network via an Ethernet cable may be cumbersome, cost inefficient, and result in a bundle of Ethernet cables, which may be difficult to maintain. As such, it may be desirable to connect operator control stations to the Ethernet network while reducing the number of wires and overall installation cost, increasing transmission speed, and the like. Accordingly, the present disclosure provides techniques for connecting operator control stations using a single pair Ethernet (SPE) cable. As used herein, single pair Ethernet (SPE) conductors may include a single pair of twisted wire for transmitting and receiving data. Non-limiting examples of SPE conductors include SPE cables, SPE wires, SPE traces, and SPE bars. 
As used herein, a gateway communication device may be a communication device that is directly (e.g., no intervening components) connected to the Ethernet network. The gateway communication device may serve to connect two or more networks and provide a routing function. That is, the gateway communication device may receive data or status information from components (e.g., supervisory control systems) of the industrial automation system from the Ethernet network and facilitate routing of the data or status information to respective destination drives. The gateway communication device may also receive data (e.g., control signal) from other drives and facilitate routing of the data to respective destination components via the Ethernet network. Based on the data or status information received via the gateway communication device (e.g., performance of the component), a respective control system of a respective drive may make a control decision. A drive may control torque, power, speed, direction, or any suitable operation of a respective component. For example, a variable frequency drive (VFD) may control a speed of a motor based on a command received from the gateway communication device via the Ethernet network. The SPE conductors may be used to couple the operator control stations to the Ethernet network, such as, via the gateway device. By way of introduction,FIG.1illustrates an example industrial automation system10employed by a food manufacturer in which the present embodiments described herein may be implemented. It should be noted that although the example industrial automation system10ofFIG.1is directed at a food manufacturer, the present embodiments described herein may be employed within any suitable industry, such as automotive, mining, hydrocarbon production, manufacturing, and the like. 
That is, the following brief description of the example industrial automation system10employed by the food manufacturer is provided herein to help facilitate a more comprehensive understanding of how the embodiments described herein may be applied to industrial devices to significantly improve the operations of the respective industrial automation system based on the current configuration of the equipment in the industrial automation system. As such, the embodiments described herein should not be limited to be applied to the example depicted inFIG.1. Referring now toFIG.1, the example industrial automation system10for a food manufacturer may include silos12and tanks14. The silos12and the tanks14may store different types of raw material, such as grains, salt, yeast, sweeteners, flavoring agents, coloring agents, vitamins, minerals and preservatives. In some embodiments, sensors16may be positioned within or around the silos12, the tanks14, or other suitable locations within the example industrial automation system10to measure certain properties, such as temperature, mass, volume, pressure, humidity, and the like. The raw materials may be provided to a mixer18, which may mix the raw materials together according to a specified ratio. The mixer18and other machines in the example industrial automation system10may employ certain industrial automation devices20to control the operations of the mixer18and other machines. The industrial automation devices20may include controllers, input/output (I/O) modules, motor control centers (MCCs), motors, human machine interfaces (HMIs), operator control stations, contactors, starters, sensors16, actuators, conveyors, drives, relays, protection devices, switchgear, compressors, sensor, actuator, firewall, network switches (e.g., Ethernet switches, modular-managed, fixed-managed, service-router, industrial, unmanaged, etc.) and the like.
The mixer18may provide a mixed compound to a depositor22, which may deposit a certain amount of the mixed compound onto conveyor24. The depositor22may deposit the mixed compound on the conveyor24according to a shape and amount that may be specified to a control system for the depositor22. The conveyor24may be any suitable conveyor system that transports items to various types of machinery across the example industrial automation system10. For example, the conveyor24may transport deposited material from the depositor22to an oven26, which may bake the deposited material. The baked material may be transported to a cooling tunnel28to cool the baked material, such that the cooled material may be transported to a tray loader30via the conveyor24. The tray loader30may include machinery that receives a certain amount of the cooled material for packaging. By way of example, the tray loader30may receive 25 ounces of the cooled material, which may correspond to an amount of cereal provided in a cereal box. A tray wrapper32may receive a collected amount of cooled material from the tray loader30into a bag, which may be sealed. The tray wrapper32may receive the collected amount of cooled material in a bag and seal the bag using appropriate machinery. The conveyor24may transport the bagged material to case packer34, which may package the bagged material into a box. The boxes may be transported to a palletizer36, which may stack a certain number of boxes on a pallet that may be lifted using a forklift or the like. The stacked boxes may then be transported to a shrink wrapper38, which may wrap the stacked boxes with shrink-wrap to keep the stacked boxes together while on the pallet. The shrink-wrapped boxes may then be transported to storage or the like via a forklift or other suitable transport vehicle.
To perform the operations of each of the devices in the example industrial automation system10, the industrial automation devices20may be used to provide power to the machinery used to perform certain tasks, provide protection to the machinery from electrical surges, prevent injuries from occurring with human operators in the example industrial automation system10, monitor the operations of the respective device, communicate data regarding the respective device to a supervisory control system40, and the like. In some embodiments, each industrial automation device20or a group of industrial automation devices20may be controlled using an operator control station42. The operator control station42may generate and/or receive data regarding the operation of the respective industrial automation device20, other industrial automation devices20, user inputs, and other suitable inputs to control the operations of the respective industrial automation device(s)20. The operator control station42may have access to configuration data associated with the connected industrial automation devices20. That is, the operator control station42may include memory or a storage component that stores information concerning the configuration of each industrial automation device20connected to it. In some embodiments, the information or configuration data may be populated or input by an operator at the time the respective industrial automation device20is installed. Additionally, the operator control station42may query the connected industrial automation device20to retrieve configuration data, such as model number, serial number, firmware revision, assembly profile, and the like. In some embodiments, the supervisory control system40may collect configuration data from multiple operator control stations42and store the information in a suitable memory or storage component. 
In some embodiments, the industrial automation devices20(e.g., operator control stations) may include a communication feature that enables the industrial automation devices20to communicate data between each other and other devices. The communication feature may include a network interface that may enable the industrial automation devices20to communicate via various protocols such as Ethernet/IP, ControlNet, DeviceNet, ProfiNet, ModBus TCP, BacNet/IP, or any other industrial communication network protocol. Alternatively, the communication feature may enable the industrial automation devices (e.g., operator control stations) to communicate via various wired communication protocols, such as Ethernet (e.g., single pair Ethernet (SPE), drive serial interface (DSI), and the like), or wireless communication protocols, such as Wi-Fi, mobile telecommunications technology (e.g., 2G, 3G, 4G, LTE, 5G), Bluetooth, near-field communications technology, and the like. As mentioned above, the industrial automation devices20may be controlled using a local control system. In certain embodiments, the local control system may be disposed within a respective drive54. One or more drives54may be disposed in a control cabinet (e.g., a low voltage motor control center50) of the industrial automation system10. Along with the one or more drives54, the control cabinet may include one or more gateway communication devices52of the industrial automation system10. In some embodiments, the one or more gateway communication devices52may be enclosed in a different housing than the one or more drives54. For example, each gateway communication device52may be enclosed in a separate housing than each drive54. In other embodiments, at least one gateway communication device52and at least one drive54may be integrated together in a common housing.
The gateway communication device52may receive data (e.g., status information) from components (e.g., supervisory control system, operator control station) of the industrial automation system10via a communication network (e.g., Ethernet network) and may facilitate routing of the data to a respective destination drive via Ethernet connection56. In some embodiments, the gateway communication device52may be a drive with the ability to interface with the communication network. Based on receiving data from components of the industrial automation system10via the gateway communication device52, a respective drive54may make a control decision. In some embodiments, the components, such as the supervisory control system, the operator control station, and the like, may make the control decision, and the gateway communication device52may transmit the data related to the control decision to a respective drive54. For example, the drive54may control torque, power, speed, direction, or any suitable operation of a respective industrial automation device20(e.g., load device). That is, the drive54may include drive circuitry, such as switches (e.g., diodes, IGBTs, thyristors), that convert single-phase or multi-phase alternating current (AC) voltage into a controllable AC voltage that may be used to perform control operations for a load device, such as a motor. In addition, the gateway communication device52may receive data from the drives54via the Ethernet connection56and route the data to components via the communication network. With the foregoing in mind,FIG.2illustrates an embodiment of the low voltage motor control center50including the operator control station42, in accordance with an embodiment of the present disclosure. The low voltage motor control center50may include one or more drives54to control one or more motors.
Each drive54may include a control system for controlling the one or more motors and may also include a communication component, a processor, a memory, a storage unit, input/output ports, an image sensor (e.g., a camera), a location sensor, a display, additional sensors (e.g., vibration sensors, temperature sensors), and the like. The communication component may be a wireless or wired communication component that may facilitate communication between the drive54and other devices (e.g., the operator control station42). The operator control station42may be communicatively coupled to the gateway52, the one or more drives54, and/or any other devices of the industrial automation system via the Ethernet connection56. The operator control station42may generate and may transmit data and/or signals via the Ethernet connection56to the gateway52, the one or more drives54, and/or any other suitable devices and may transmit the data and/or signals according to a SPE Ethernet protocol. For example, the operator control station42may receive a user input via one or more user input interfaces, as described herein, and may generate and may transmit signals and/or data based on the received user input. In some embodiments, the gateway52may receive control signals from other control systems (e.g., the operator control station) via the Ethernet connection56and may provide these signals to each drive54. In some embodiments, the Ethernet connection56may be implemented by a ribbon cable and may include multiple (e.g., seven) wires. For example, the ribbon cable may transmit a select signal, a network power positive signal, a network power negative signal, a control power positive signal, a control power negative signal, and a Single Pair Ethernet (SPE) signal pair via seven conductors. The SPE may include one pair of conductors to facilitate Ethernet transmission of data.
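One possible way to tabulate the seven ribbon-cable conductors described above is sketched below; the conductor ordering is an illustrative assumption rather than a disclosed pinout:

```python
# Hypothetical assignment of the seven ribbon-cable conductors: five
# single-ended signals plus the two conductors of the SPE pair.
RIBBON_CONDUCTORS = (
    "select",
    "network_power_positive",
    "network_power_negative",
    "control_power_positive",
    "control_power_negative",
    "spe_positive",            # SPE pair: one twisted pair carrying
    "spe_negative",            # differential Ethernet data
)

def spe_pair(conductors):
    # Return the two conductors forming the Single Pair Ethernet link.
    return tuple(name for name in conductors if name.startswith("spe_"))

print(len(RIBBON_CONDUCTORS))       # -> 7
print(spe_pair(RIBBON_CONDUCTORS))  # -> ('spe_positive', 'spe_negative')
```

The sketch makes the arithmetic in the paragraph explicit: five single-conductor signals plus one two-conductor SPE pair accounts for the seven wires.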
The SPE conductors may carry a SPE positive signal and a SPE negative signal, which may provide network communication functionality across the Ethernet network and to each device connected to the Ethernet network. The SPE positive signal and SPE negative signal may be a part of a bus and/or multi-drop topology (e.g., a topology where multiple data endpoints couple to a same communication bus). Communication transmitted via the SPE conductors may follow the SPE Ethernet protocol. By communicatively coupling the operator control station42and other components of the industrial automation system via the gateway52and the SPE conductors, the present embodiment reduces the size of the communication cables used to interconnect different components via an Ethernet network as compared to other communication conductors including standard Ethernet cables. Moreover, the SPE conductors provide for up to a 10 megabit-per-second transmission rate across 1,000 meters, thereby enabling multiple components to be connected to the gateway52. With the foregoing in mind,FIG.3illustrates an embodiment of the low voltage motor control center50including the operator control station42, in accordance with an embodiment of the present disclosure. The low voltage motor control center50may include one or more starters58and each starter58may include one or more safety relays and/or one or more safety contactors. The one or more starters58may include Low Voltage Soft Starters, Medium Voltage Soft Starters, Low Voltage Starters, or any combination thereof. Each starter58may start and monitor motors and drives of the industrial automation system. The operator control station42may be communicatively coupled to the gateway52, the one or more starters58, and/or any other devices of the industrial automation system via the Ethernet connection56.
The operator control station42may generate and may transmit data and/or signals via the Ethernet connection56to the gateway52, the one or more starters58, and/or any other suitable devices and may transmit the data and/or signals according to a SPE Ethernet protocol. For example, the operator control station42may receive a user input via one or more user input interfaces, as described herein, and may generate and may transmit signals and/or data based on the received user input. With the foregoing in mind,FIG.4illustrates a schematic diagram of the operator control station42, in accordance with an embodiment of the present disclosure. The operator control station42may include a controller60that may control operation of the control station42and may process data acquired by the operator control station42. The controller60may be provided in the form of a computing device, such as a programmable logic controller (PLC). The controller60may include at least one processor, such as processor62, and at least one memory, such as memory64. In the illustrated embodiment, the operator control station42may also include user input interface(s)66, data acquisition circuitry68, a display70, and an SPE communication interface72. The processor62may process acquired data and/or may translate acquired data to provide communication via the SPE communication interface72coupled to an Ethernet network76. For example, the processor62may transmit one or more data signals in the Ethernet communication protocol from the SPE communication interface72and communication port74to one or more devices communicatively coupled via the Ethernet connection56to the Ethernet network76. Likewise, the controller60may receive data signals in the Ethernet communication protocol via the SPE communication interface72and communication port74. 
In certain embodiments, the operator control station42may include additional elements not shown inFIG.4, such as additional data acquisition and processing controls, additional display panels, multiple user interfaces, and so forth. The user input interface66may be capable of receiving an input from a user to adjust operation of one or more devices of an industrial automation system. In some embodiments, the user input interface66may include any number of push buttons66A, switches66B (e.g., toggle switches, selector switches, and so forth), pendant stations, lights66C (e.g., LEDs, pilot lights, and so forth), any other suitable operating interface, or any combination thereof. For example, the user input interface66may include a start push button to initiate an operation for a device of an industrial automation system, a stop push button to end operation/shut off operation of the device, a selector switch to select an operating mode of the device, and a pilot light to indicate faults with the operator control station and/or the device. In certain embodiments, the user input interface66may be a portion of the display70. For example, the user input interface66may be a touch screen. In another embodiment, the input interface66may be a push button with an LED indicator that acts as the display70. The display70may provide an indication of a current operating mode of the operator control station42and/or one or more devices of the industrial automation system. The display70may include one or more lights and/or an indication on a touch screen display to display an operating mode of the operator control station42. The data acquisition circuitry68may be communicatively coupled to the processor62and may include receiving and conversion circuitry. The data acquisition circuitry68may receive data from one or more devices of the industrial automation system and may transmit the data to the processor62. 
In certain embodiments, the data acquisition circuitry68may be communicatively coupled to the SPE communication interface72and may receive data from one or more devices of the industrial automation system via the SPE communication interface72. For example, the SPE communication interface72may be an Ethernet communication interface and may enable communication between the Ethernet network76and one or more devices of the industrial automation system. In some embodiments, the memory64may include one or more tangible, non-transitory, computer-readable media that store instructions executable by the processor62and/or data to be processed by the processor62. For example, the memory64may include random access memory (RAM), read only memory (ROM), rewritable non-volatile memory, such as flash memory, hard drives, optical discs, and/or the like. Additionally, the processor62may include one or more general purpose microprocessors, one or more application specific processors (ASICs), one or more field programmable gate arrays (FPGAs), or any combination thereof. Further, the memory64may store data obtained via one or more devices of the industrial automation system and/or algorithms utilized by the processor62. The SPE communication interface72may enable communication between the operator control station42and components (e.g., gateway(s), motor(s), drive(s), starter(s), and so forth) of an industrial automation system via the Ethernet network76. The Ethernet network76may be a logical partition of a network and each device connected to the logical partition may have a portion (e.g., identifier) of an associated Internet Protocol (IP) address that corresponds to the logical partition. 
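The logical partition described above, in which each connected device carries an address identifier corresponding to the partition, can be modeled as an IP subnet: every member device shares the network portion of its address. A minimal sketch using Python's standard `ipaddress` module, with an assumed subnet and device addresses:

```python
import ipaddress

# Illustrative partition: devices on this /24 share the "192.168.10" portion
# of their IP address, which identifies the logical partition.
partition = ipaddress.ip_network("192.168.10.0/24")

devices = {
    "operator_control_station": ipaddress.ip_address("192.168.10.5"),
    "gateway": ipaddress.ip_address("192.168.10.1"),
    "drive": ipaddress.ip_address("192.168.11.7"),  # outside the partition
}

# Membership in the logical partition reduces to a subnet-containment test.
members = {name for name, addr in devices.items() if addr in partition}
print(sorted(members))  # -> ['gateway', 'operator_control_station']
```

The containment test (`addr in partition`) is exactly the check a component would apply to decide whether a peer's address identifier corresponds to its own partition.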
In certain embodiments, the processor62may receive and/or may translate data signals between the Ethernet communication protocol and any other suitable communication protocol to facilitate generation and transmission of signals from the operator control station42to one or more components of the industrial automation system. For example, the processor62may receive and/or may transmit data signals via the SPE communication interface72and communication port74. In some embodiments, the operator control station42may be communicatively coupled to the Ethernet network76via the Ethernet connection56(e.g., SPE conductors). With the foregoing in mind,FIG.5illustrates an example embodiment of the operator control station42, in accordance with an embodiment of the present disclosure. The operator control station42may include any number of user input interfaces, such as user input interfaces66A,66B,66C,66D. In certain embodiments, a first user input interface66A may be a selector switch and may adjust an operating mode of one or more devices of the industrial automation system. For example, the selector switch may be moved to select a desired operating mode for one or more devices. In response to movement of the selector switch and selecting the desired operating mode, the processor of the operator control station42may generate and may transmit a signal to one or more devices and/or to a gateway to adjust operation of the one or more devices. For example, the signal may instruct the one or more devices to operate according to the selected operating mode and/or the signal may instruct the gateway to generate and transmit a second signal to adjust the operating mode of the one or more devices according to the selected operating mode. In some embodiments, a second user input interface66B may be a push button and may power down and/or shut off operation of one or more devices of the industrial automation system. 
For example, the processor of the operator control station42may generate and may transmit a signal in response to the second user input interface66B receiving an input from a user. Additionally or alternatively, the second user input interface66B may include an indicator to provide a visual indication of a power status (e.g., power off) of the one or more devices. For example, the second user input interface66B may include a red pilot light that may activate to indicate no power is currently supplied to the one or more devices. In certain embodiments, a third user input interface66C may be a second push button and may power on and/or start operation of one or more devices of the industrial automation system. For example, the processor of the operator control station42may generate and may transmit a signal in response to the third user input interface66C receiving an input from a user. Additionally or alternatively, the third user input interface66C may include an indicator to provide a visual indication of a power status (e.g., power on) of the one or more devices. For example, the third user input interface66C may include a green pilot light that may activate to indicate power is currently supplied to the one or more devices. A fourth user input interface66D may be a pilot light to provide a visual indication of an error and/or fault associated with one or more devices and/or the operator control station42. For example, the fourth user input interface66D may be a yellow pilot light and may activate to indicate a fault and/or an error associated with the one or more devices. The operator control station42may also include a housing78and the housing may contain at least one of the components of the operator control station42ofFIG.4. In certain embodiments, the operator control station42may incorporate any number of user input interfaces and each user input interface may be a separate node of the In-cabinet Bus communication network. 
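The pilot-light conventions described above (red when no power is supplied, green when power is supplied, yellow on a fault or error) can be summarized in a short sketch. The function name and return shape are illustrative assumptions, not part of the disclosure:

```python
def pilot_lights(power_on: bool, fault: bool) -> dict:
    # Map a device's power and fault state to the three indicator lights:
    # interface 66B (red), interface 66C (green), interface 66D (yellow).
    return {
        "red": not power_on,   # no power currently supplied to the device
        "green": power_on,     # power currently supplied to the device
        "yellow": fault,       # fault and/or error associated with the device
    }

print(pilot_lights(power_on=True, fault=False))
# -> {'red': False, 'green': True, 'yellow': False}
```

Note that red and green are mutually exclusive in this model, while the yellow fault indicator can coincide with either power state.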
For example, each user input interface of the operator control station42may be connected in series and may pass generated messages to adjacent user input interfaces for transmission to a destination component associated with the generated message. As such, the wiring and installation of the operator control station42may be greatly simplified due to the series connections. For example, a message transmitted via an In-cabinet Bus infrastructure disposed between a user input interface of the operator control station42and the destination component may not be interrupted by data or control signals transmitted from other user input interfaces or other components of the operator control station42. That is, adjacent user input interfaces and/or components or intermediary user input interfaces and/or components between the user input interface of the operator control station42and the destination component may be bypassed during transmission of the single pair Ethernet (SPE) data packet. In some embodiments, the operator control station42may include one or more communication components (e.g., ports, modems, network switches) that couple to the single pair Ethernet (SPE) conductors56to transmit single pair Ethernet (SPE) data packets to the destination component. WhileFIG.5illustrates four user input interfaces, in other embodiments, any number of user input interfaces may be included and many other embodiments are envisaged. For example, more or fewer user input interfaces may be included in other embodiments of the operator control station. Additionally or alternatively, the operator control station may include any number of push buttons, any number of switches, any number of LEDs, any other suitable indicators, any other suitable interfaces, and so forth.
With the foregoing in mind,FIG.6illustrates a flowchart of a process80for routing data (e.g., control instructions, status information and performance of load devices) from the operator control station to the components (e.g., gateway52, control system of drives54, starters58, relays, and so forth), in accordance with an embodiment of the present disclosure. Although the following description of the process80will be discussed as being performed by the processor62of the operator control station42, it should be noted that any suitable computing component may perform the process80. In addition, although the process80is described in a particular order, it should be noted that the process80may be performed in any suitable order. At block82, the operator control station42may receive a user input, for example, at user input interface66ofFIG.4. The processor62may receive data associated with the user input and/or may parse and/or analyze the user input and generate data associated with the user input. Additionally or alternatively, the processor62may generate a control signal based on the user input. For example, the processor62may determine the user input corresponds to pressing a shut-off input interface and the processor62may generate a control signal to shut off and/or power down one or more devices of the industrial automation system. At block84, based on parsing and/or analyzing the user input received from the user input interface66ofFIG.4, the processor62may identify a component (e.g., drive54, starter58) that corresponds to the received user input. In some embodiments, the data associated with the user input includes a destination internet protocol (IP) address that helps the operator control station42determine the destination component. In some cases, the data associated with the user input received and/or generated by the operator control station42may not be in a format or a state that is suitable for being routed using the single pair Ethernet (SPE) conductors56.
As such, if the data is not in a suitable format, at block86, the processor62may convert the data into a single pair Ethernet (SPE) data packet that is suitable for transmission via the single pair Ethernet (SPE) conductors56. The single pair Ethernet (SPE) data packet includes an internet protocol (IP) address, control information, load data, and so forth associated with the single pair Ethernet (SPE) protocol. Based on determining the destination component and converting the data to the single pair Ethernet (SPE) data packet, at block88, the processor62may transmit the single pair Ethernet (SPE) data packet to the destination component via the single pair Ethernet (SPE) conductors56. In some embodiments, the processor62may transmit the single pair Ethernet (SPE) data packet to the gateway52ofFIG.1, the gateway52may then forward the single pair Ethernet (SPE) data packet to a control system of the drive54, and the drive54may forward the SPE data packet to an adjacent component, and so forth until the destination component receives the single pair Ethernet (SPE) data packet. That is, each control system of the respective component, upon receiving the single pair Ethernet (SPE) data packet, determines whether it is specified to process the single pair Ethernet (SPE) data packet. If the control system determines that it is indeed specified to process the single pair Ethernet (SPE) data packet, the control system processes the single pair Ethernet (SPE) data packet. Otherwise, the control system may forward the single pair Ethernet (SPE) data packet to an adjacent component without processing it. In other embodiments, the operator control station42may transmit the single pair Ethernet (SPE) data packet via an In-cabinet Bus directly to the destination control system without transmitting the packet to an intermediary control system.
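The routing flow of process80and the node-to-node forwarding described above can be sketched as follows. The packet fields, addresses, and node API are illustrative assumptions, not the disclosed implementation:

```python
def make_spe_packet(destination_ip, control_data):
    # Block 86: wrap the data in a packet suitable for the SPE conductors.
    return {"dest": destination_ip, "payload": control_data}

class Node:
    # A control system on the daisy chain; forwards packets it is not
    # specified to process to the adjacent downstream component.
    def __init__(self, ip, downstream=None):
        self.ip = ip
        self.downstream = downstream
        self.processed = None

    def receive(self, packet):
        if packet["dest"] == self.ip:
            self.processed = packet["payload"]   # destination: process locally
        elif self.downstream is not None:
            self.downstream.receive(packet)      # forward without processing

# Gateway -> drive 1 -> drive 2 daisy chain (assumed addresses).
drive2 = Node("10.0.0.3")
drive1 = Node("10.0.0.2", downstream=drive2)
gateway = Node("10.0.0.1", downstream=drive1)

packet = make_spe_packet("10.0.0.3", "shut_off")  # block 88: transmit
gateway.receive(packet)
print(drive2.processed, drive1.processed)  # -> shut_off None
```

Only the destination node stores the payload; intermediary nodes relay the packet untouched, matching the "process or forward" decision each control system makes.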
For example, a message transmitted via an In-cabinet Bus infrastructure disposed between the operator control station42and the destination component may not be interrupted by data or control signals transmitted from intermediary components. That is, adjacent components or intermediary components between the operator control station42and the destination component may be bypassed during transmission of the single pair Ethernet (SPE) data packet. In some embodiments, the operator control station42may include one or more communication components (e.g., ports, modems, network switches) that couple to the single pair Ethernet (SPE) conductors56to transmit single pair Ethernet (SPE) data packets to the destination component. With the foregoing in mind,FIG.7illustrates a schematic diagram of low voltage motor control center units50A,50B, in accordance with embodiments of the present disclosure. The low voltage motor control center unit50A may include any number of starters, such as starter58A. In some embodiments, the low voltage motor control center units50A,50B may be a single low voltage motor control center having any number of starters and/or any number of corresponding operator control stations. Additionally or alternatively, the low voltage motor control center units50A,50B may be separate low voltage motor control centers. In certain embodiments, the low voltage motor control center unit50A may include only the starter58A. The low voltage motor control center unit50A may include any type of starter, such as a full voltage non-reversible (FVNR) starter, a direct on-line (DOL) starter, a full voltage reversible (FVR) starter, or a direct on-line reversing (DOLR) starter. The operator control station42A may be communicatively coupled to the starter58A via the SPE conductors56and may control one or more operations of the starter58A. The operator control station42A may be configured based on the type of starter58A.
For example, if the starter is a FVNR or DOL starter, the operator control station42A may include at least four components, such as a first component (e.g., selector switch) to switch the starter58A between automatic and manual operation, a second component (e.g., illuminating push button) to provide an indication of an operational status (e.g., running, stopped, fault, and so forth) of the starter58A and to stop operation of the starter58A, a third component (e.g., second illuminating push button) to provide a different indication of an operational status (e.g., running, stopped, fault, and so forth) of the starter58A and to start operation of the starter58A, and a fourth component (e.g., third illuminating push button) to provide another different indication of an operational status (e.g., running, stopped, fault, and so forth) of the starter58A and to reset operation of the starter58A. The second operator control station42B may be configured based on the type of starter58B. For example, if the starter is a FVR or DOLR starter, the second operator control station42B may include at least four components, such as a first component (e.g., a selector switch) to switch the second operator control station42B between local mode and remote mode operation, a second component (e.g., a second selector switch) to switch operation (e.g., forward, reverse, off) of the starter58B, a third component (e.g., LED indicator) to provide an indication of an operational status (e.g., running, stopped, fault, and so forth) of the starter58B, and a fourth component (e.g., illuminating push button) to provide a different indication of an operational status (e.g., running, stopped, fault, and so forth) of the starter58B and to reset operation of the starter58B. In local mode operation, the second operator station42B and starter58B may communicate directly via the SPE conductors56and the second operator station42B may directly control operation of the starter58B.
For example, the second operator station42B and starter58B may communicate without intermediate components, such as gateway52or any other suitable controller (e.g., a programmable logic device). In remote mode operation, the second operator station42B and/or the starter58B may be controlled by an intermediate component, such as gateway52or any other suitable controller (e.g., a programmable logic device). A connector interface92may couple the low voltage motor control center units50A,50B to the gateway52via the SPE conductors56. The gateway52may supply DC power to the low voltage motor control center units50A,50B and a controller90, the gateway52, and/or the operator stations42A,42B may identify components on the In-cabinet Bus network. For example, the controller90, the gateway52, and/or the operator stations42A,42B may determine a type of starter and may program components (e.g., operator control station, starters, contactors) based on the type of starter. The controller90may be any suitable automation controller, such as a programmable logic device, a programmable logic controller, and the like. The controller90may be provided in the form of a computing device, such as a programmable logic controller (PLC). The controller90may include at least one processor and at least one memory. In some embodiments, the memory may include one or more tangible, non-transitory, computer-readable media that store instructions executable by the processor and/or data to be processed by the processor. For example, the memory may include random access memory (RAM), read only memory (ROM), rewritable non-volatile memory, such as flash memory, hard drives, optical discs, and/or the like. Additionally, the processor may include one or more general purpose microprocessors, one or more application specific processors (ASICs), one or more field programmable gate arrays (FPGAs), or any combination thereof.
Further, the memory may store data obtained via one or more devices of the industrial automation system and/or algorithms utilized by the processor. In some embodiments, the operator control station may include any number of modular components (e.g., user input interfaces) and the modular components may be selected based on a desired operation of the low voltage motor control center. For example, the number and/or the type of user input interfaces (e.g., selector switch, push button, LED indicator, and so forth) may be selected and configured according to a desired operation. In certain embodiments, the user input interfaces may be initially configured and/or reconfigured by any number of software operations. For example, the operator control station may be communicatively coupled to the Ethernet network via the SPE conductors and the operator control station may receive configuration instructions to configure any number of user input interfaces. As mentioned above, in some complex industrial automation systems10, one or more controllers and/or other industrial automation components (e.g., variable frequency drives (VFDs), PLCs, programmable automation controllers (PACs), contactors, starters, overload protection components, fuses, circuit breakers, disconnects, short circuit protectors, etc.) may be combined into an enclosure or cabinet and referred to as an MCC.FIG.8is a front view of an embodiment of an MCC100. As shown, the MCC100includes an enclosure102that is divided into vertical sections104,106,108,110,112,114,116. Each section may be further divided into one or more buckets118,120,122,124, which may be configured to receive units. The units may include industrial automation components configured to perform industrial automation functions. The units may thus include, for example, motor controllers, VFDs, PLCs, PACs, contactors, starters, overload protection components, fuses, circuit breakers, disconnects, short circuit protectors, and so forth.
In some embodiments, the size of each bucket118,120,122,124may be customized to the type of unit the bucket118,120,122,124is configured to receive. In other embodiments, different MCCs100may be available preconfigured with differently sized buckets. As shown, the cabinet doors126of some buckets may include disconnect switches128for disconnecting the respective unit from the MCC100. Accordingly, to remove a unit, a user may actuate the disconnect switch128(e.g., from “on” to “off”) to electrically disconnect the unit from the MCC100. The user may then open the cabinet door126, and physically remove the unit from the enclosure102. If the unit is being replaced with a different unit, the new unit may be physically installed in the bucket124, the cabinet door126closed, and the disconnect switch128actuated (e.g., from “off” to “on”). The units within an MCC100may join a wired In-cabinet Bus network by coupling to a multidrop cable that extends through the MCC enclosure102.FIG.9depicts a portion of the multidrop cable200for use within the MCC100ofFIG.8. The illustrated portion of the multidrop cable200may include one or more terminals202positioned along transmission lines204. The terminal202may include a slot206to facilitate electrical connection of an industrial automation device via a tap circuitry (not shown) to the transmission lines204. A node may include the terminal202and a respective connected tap circuitry. In some embodiments, the terminals202may be referred to as “drops”, while the portions of transmission lines204extending between terminals may be referred to as “trunks”210. Accordingly, the term “multidrop” in multidrop cable200refers to the cable200having multiple terminals202to which components may be connected. The transmission lines204may include electrical conductors208A-208G. It should be noted that a different number of terminals202may be used in different embodiments with the multidrop cable200in the MCC100.
The multidrop cable 200 may facilitate communication between the nodes using various communication protocols. Hence, the number of conductors of the transmission lines 204 and the arrangement of the conductors may vary based on the communication protocol being used by the MCC 100. For example, the multidrop cable 200 may use an industrial Ethernet network protocol (EtherNet/IP). The terminals 202 may each include respective tap circuitry that may facilitate connection of various industrial automation components to the transmission lines 204 of the multidrop cable 200. The connectors may facilitate power transmission and/or communication between the input/output signals of the respective node and the transmission lines 204 of the multidrop cable 200. The MCC 100 may facilitate data communication between different numbers of nodes in different configurations and different directions using the multidrop cable 200. For example, the MCC 100 may communicatively connect motor controllers, VFDs, PLCs, PACs, contactors, starters, overload protection components, fuses, circuit breakers, disconnects, short circuit protectors, etc. within the MCC 100 using one or multiple multidrop cables 200. Also, a node may take any shape or form as long as the connection adheres to the communication protocol of the multidrop cable 200. For example, a sensor may be positioned on tap circuitry, and the tap circuitry may connect to a slot 206 of the terminal 202 to communicate with one or multiple other nodes connected on the multidrop cable 200 through the transmission lines 204. FIG. 10 depicts a cross-sectional side view of an embodiment of the transmission lines 204 of the multidrop cable 200 using the EtherNet/IP protocol. It should be noted that the multidrop cable 200 is not intended to be limited to the EtherNet/IP protocol or the depicted conductors 208A-208G shown in FIG. 10. The multidrop cable 200 may employ other communication protocols and/or other combinations of conductors in different embodiments.
Also, the transmission lines 204 may include cables with different wire gauges or conductive materials for different applications. The transmission lines 204 may include single pair Ethernet (SPE) conductors 302, a switched power (SP) pair 304, a pair of network power (NP) conductors 306A and 306B, and a select line conductor 308. The SPE 302 may include first and second conductors to enable transmission of a differential signal. In certain embodiments, the SPE 302 may be a single pair Ethernet cable, and the SP 304 and the NP 306A and 306B may carry direct current (DC) power. The SPE 302 conductors may transmit communication signals, and the SP 304 conductors may transmit signals in the form of switched electrical power between different nodes. In some embodiments, the SPE 302 and/or the SP 304 may deliver electrical power to one or multiple nodes to power actuators, contactors, and sounders, among other things. The NP 306A and NP 306B conductors may provide electrical power to one or multiple nodes. In some embodiments, the NP 306A and NP 306B conductors may power the communication circuits and/or microcontrollers of the respective one or multiple nodes. Furthermore, the select line conductor 308 may communicate a select line signal to facilitate identification and configuration of nodes. The select line conductor 308 may transmit communication signals and/or facilitate communication or transmission of power signals by the SPE 302 conductors and/or the SP 304 conductors. For example, the select line conductor 308 may carry identification numbers associated with selection of a node on the multidrop cable 200. It should be noted that, in different examples, a node selected by the select line conductor 308 may perform different functions associated with that node.
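The conductor roles just described can be summarized in a small sketch. The dictionary labels and field names below are illustrative assumptions, not identifiers from this disclosure; only the conductor counts and roles are taken from the text above.

```python
# Hypothetical summary of the transmission lines 204 described above.
# Labels are illustrative; counts follow the text (SPE pair, SP pair,
# NP pair 306A/306B, and one select line conductor 308).
TRANSMISSION_LINE_CONDUCTORS = {
    "SPE": {"conductors": 2, "role": "single pair Ethernet differential signaling"},
    "SP": {"conductors": 2, "role": "switched DC power between nodes"},
    "NP": {"conductors": 2, "role": "DC power for communication circuits and microcontrollers"},
    "SELECT": {"conductors": 1, "role": "select line for node identification and configuration"},
}

# Seven conductors in total, consistent with conductors 208A-208G of FIG. 10.
total_conductors = sum(c["conductors"] for c in TRANSMISSION_LINE_CONDUCTORS.values())
print(total_conductors)  # → 7
```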
To improve operation of industrial automation systems, components of the industrial automation system (e.g., supervisory control system) may monitor performance of one or more devices (e.g., operator control station) with respect to the industrial automation process as a whole. Statuses and/or information from the one or more components (e.g., supervisory control system) may be transmitted to respective control systems of drives associated with the one or more devices via an Ethernet network. Respective control systems of the drives may use the statuses and/or information to make control decisions related to the one or more devices controlled or coupled to the drive. As mentioned above, each drive may be housed in a control cabinet associated with the industrial automation system, and each drive may include a control system to control operations of respective components (e.g., load devices, motor). To enable the supervisory control system to receive statuses and/or information from the one or more devices (e.g., operator control station), each device may be connected to an Ethernet network. In some cases, each device may be connected to the Ethernet network via a separate Ethernet cable. However, wiring each device to the Ethernet network via an Ethernet cable may be cumbersome, cost inefficient, and result in a bundle of Ethernet cables, which may be difficult to maintain. As such, it may be desirable to connect operator control stations to the Ethernet network while reducing the number of wires and overall installation cost, increasing transmission speed, and the like. Accordingly, the present disclosure provides techniques for connecting operator control stations using a single pair Ethernet (SPE) cable. 
By employing the techniques described in the present disclosure, the systems described herein may allow for connecting operator control stations to the Ethernet network while reducing the number of wires and overall installation cost, increasing transmission speed, and the like. While only certain features of the present disclosure have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the embodiments described herein. The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ,” it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
11860610 | DETAILED DESCRIPTION In order to make the purpose, technical scheme, and advantages of the present disclosure more clear, the present disclosure is further described in detail below in combination with the embodiments and drawings. The schematic embodiments and descriptions of the present disclosure are only used to explain the present disclosure and are not used as a limitation of the present disclosure. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise; and the plural forms may be intended to include the singular forms as well, unless the context clearly indicates otherwise. As shown in FIG. 1, some embodiments of the present disclosure aim to provide an industrial internet of things with dual front sub platform. The industrial internet of things includes a user platform, a service platform, a management platform, a sensor network platform, and an object platform that interact in turn. The service platform adopts a centralized layout, and the management platform and the sensor network platform adopt a front sub platform layout. The centralized layout refers to the service platform uniformly receiving data, uniformly processing the data, and uniformly sending the data. The front sub platform layout refers to each of the management platform and the sensor network platform being provided with a general platform and a plurality of sub platforms; the plurality of sub platforms respectively store and process data of different types or different receiving objects sent by a lower platform, and the general platform stores, processes, and transmits the data to an upper platform after summarizing the data of the plurality of sub platforms. A user sends a modification instruction of product manufacturing parameters according to production needs.
The user platform receives the modification instruction to modify the product manufacturing parameters of a production line, generates a first instruction, and sends the first instruction to the service platform. The product manufacturing parameters include a product manufacturing capacity. The service platform receives and processes the first instruction to generate a second instruction recognized by the management platform and sends the second instruction to the general platform of the management platform. The general platform of the management platform receives the second instruction and sends the second instruction to a plurality of sub platforms of the management platform at the same time. The plurality of sub platforms of the management platform perform data processing on the second instruction to generate a third instruction recognized by the sensor network platform, and the third instruction is transmitted to the general platform of the sensor network platform through the sub platforms of the management platform, respectively. The general platform of the sensor network platform receives the third instruction and sends the third instruction to a plurality of sub platforms of the sensor network platform at the same time. The plurality of sub platforms of the sensor network platform integrate the third instruction with data of real-time product manufacturing capacity to form different types of configuration files and send the configuration files to the corresponding object platforms. The sub platforms of the sensor network platform are provided with independent sub platform databases, and the data of real-time product manufacturing capacity is real-time data, obtained by a product meter, that is stored in the sub platform database corresponding to the object platform.
The object platform receives the configuration files sent by the sub platforms of the corresponding sensor network platform and performs manufacturing, or sends purchase reminders, according to the configuration files. It should be noted that the physical architecture of the industrial internet of things with dual front sub platform is specifically as follows: the user platform is configured as a terminal device, which interacts with the user. The service platform is configured as a first server, which receives an instruction from the user platform and sends it to the management platform, and extracts information required for processing by the user platform from the management platform and sends it to the user platform. The management platform is configured as a second server, which controls the operation of the object platform and receives the feedback data of the object platform. The sensor network platform is configured as a communication network and a gateway for interaction between the object platform and the management platform. The object platform is configured as production line equipment, which performs manufacturing and/or sends purchase reminders, and a product meter. Since this part belongs to a more common architecture in the prior art, descriptions of the embodiment are not repeated. In the prior art, in the field of intelligent manufacturing technology, the production process of products and their accessories involves a large amount of production line equipment. Each production line equipment has a maximum product manufacturing capacity per unit time, and each production line equipment may operate according to an actual product manufacturing capacity, in which the actual product manufacturing capacity is less than or equal to the maximum product manufacturing capacity.
When it is necessary to adjust the manufacturing capacity of one production line equipment or all production line equipment, it is necessary to consider that each production line equipment may not exceed its maximum product manufacturing capacity after adjustment. After the adjustment of a single production line equipment, it is also necessary to consider its impact on other production line equipment on the production line. Due to the large number of production line equipment, the data integration and classification in the existing technology involve not only a large processing load but also irregular classification, which is very difficult to implement, resulting in the inability of the existing technology to realize unified regulation of all the production line equipment. It often requires a plurality of separately classified internet of things systems, which undoubtedly increases the cost and system complexity. The industrial internet of things with dual front sub platform in the embodiment first uses the independently arranged service platform to process all instructions and uploaded data, so as to facilitate data integration and data manipulation and to facilitate the coordinated and unified processing of the user platform, so that the service platform and/or user platform can better control the internet of things. While the management platform adopts the front sub platform layout, it may use its general platform for data interaction with the service platform and use its different sub platforms for data transmission and processing, so as to fully share the overall data processing capacity of the management platform and ensure that the data transmission of different sub platforms may correspond to different object platforms.
Similarly, the sensor network platform adopts the front sub platform layout: the general platform of the sensor network platform may be used to integrate and upload data, or to distribute and decompose the data to the corresponding sub platforms according to different objects, and the sub platforms may be used to process the collected data or received data, so as to unify the data format of different object platforms and keep it consistent with the data of the management platform, realize the integration of different data sources of different object platforms, simplify the data interaction format conversion of different object platforms, and reduce the data processing load. It should be noted that the user platform in the embodiment may be a desktop computer, a tablet computer, a notebook computer, a mobile phone, or other electronic devices that can realize data processing and data communication, which is not limited here. In specific applications, the first server and the second server may adopt a single server or a server cluster, which is not limited here. It should be understood that the data processing mentioned in the embodiment may be performed by the processor of the server, and the data stored in the server may be stored on a storage device of the server, such as a hard disk or other memory. In specific applications, the sensor network platform may adopt a plurality of groups of gateway servers or a plurality of groups of intelligent routers, which are not limited here. It should be understood that the data processing mentioned in the embodiment of the present disclosure may be performed by the processor of the gateway server, and the data stored in the gateway server may be stored in a storage device of the gateway server, such as a hard disk or other memory such as a solid state drive (SSD).
In some embodiments, the production line equipment includes the various kinds of equipment relied on in the assembly line of product manufacturing. Taking mechanical products as an example, the production line equipment may be part assembly equipment, general assembly equipment, testing equipment, etc. Further, taking the automobile engine assembly line as an example, the production line equipment may be cylinder block processing equipment, cylinder block positioning and turnover equipment, cam assembly installation equipment, bolt assembly installation equipment, machine filter assembly and oiling equipment, etc. Similarly, the product meter is used to measure the completion of workpieces within the unit time in the corresponding production line equipment, and may be any of various mechanical or electronic counters. In some embodiments, the modifiable data set of the product manufacturing capacity is the modifiable data set of the final product manufacturing capacity, that is, modification data of the manufacturing capacity obtained by comprehensively considering all production line equipment without affecting the normal manufacturing of all production line equipment. Specifically, the sub platforms of a plurality of the sensor network platforms correspond to different production line equipment, and each production line equipment is correspondingly configured with the product meter. The production line equipment stores and classifies the data of maximum product manufacturing capacity per unit time of the equipment and the data of real-time product manufacturing capacity obtained in real time by the product meter to the sub platforms of the corresponding sensor network platform.
The sub platforms of the sensor network platform obtain data of modifiable product manufacturing capacity of corresponding production line equipment based on the data of maximum product manufacturing capacity and the data of real-time product manufacturing capacity, and transmit the data of modifiable product manufacturing capacity to the general platform of the sensor network platform through the corresponding sub platforms. The data of modifiable product manufacturing capacity is a difference between the data of maximum product manufacturing capacity and the data of real-time product manufacturing capacity. The general platform of the sensor network platform compiles and packages all the data of modifiable product manufacturing capacity and sends it to the corresponding sub platforms of the management platform. The general platform of the management platform receives and analyzes the data of modifiable product manufacturing capacity of each sub platform of the management platform, compares the data of modifiable product manufacturing capacity of all production line equipment, obtains a minimum value of the data of modifiable product manufacturing capacity as a final value of modifiable product manufacturing capacity, and compiles and transmits the final value of modifiable product manufacturing capacity to the service platform. The service platform receives and analyzes the final value of modifiable product manufacturing capacity, decomposes the value of modifiable product manufacturing capacity obtained from the analysis according to operation rules to form different sub data sets or arrays, maps the sub data sets or the arrays to a data table of modifiable product manufacturing capacity to form a data set of modifiable product manufacturing capacity, and compiles and sends the data set to the user platform. 
The data table of modifiable product manufacturing capacity is a data table formulated in the service platform according to the operation rules for filling in the sub data sets or the array. In the embodiment, by respectively obtaining the data of maximum product manufacturing capacity and the data of real-time product manufacturing capacity of each production line equipment in unit time, the data of modifiable product manufacturing capacity of the corresponding production line equipment may be obtained; then the data of modifiable product manufacturing capacity of all production line equipment may be compared to determine the minimum data of modifiable product manufacturing capacity, and different sub data sets or arrays may be formed within the range of product manufacturing capacity of this data, so as to select reasonable values for adjustment. It is further illustrated that installing cams in the automobile engine assembly line specifically includes eight sub processes: loosening the tile cover 01, removing the tile cover 02, installing the upper and lower shaft tiles 03, installing the piston cooling nozzle 04, inserting the camshaft drive key 05, installing the camshaft thrust plate 06, lifting and placing the crankshaft 07, and driving the key 08.
It is assumed that one production line equipment is set for each process and numbered in the order of 01-08. The specific parameters of each production line equipment are shown in Table 1 below:

TABLE 1
Specific parameters of production line equipment

          Maximum product    Real-time product    Modifiable product
          manufacturing      manufacturing        manufacturing
Number    capacity           capacity             capacity
01        32                 24                    8
02        37                 26                   11
03        43                 35                    8
04        44                 37                    7
05        55                 42                   13
06        45                 37                    8
07        36                 29                    7
08        55                 44                   11

Minimum value of modifiable product manufacturing capacity (all equipment): 7
Sub data sets of modifiable product manufacturing capacity: 0; 2; 4; 6 (allowable modifiable unit capacity is 2)
Array of modifiable product manufacturing capacity: 0; 1; 2; 3; 4; 5; 6; 7

It can be seen from Table 1 that, after comparing the values among the eight production line equipment 01-08, the minimum modifiable product manufacturing capacity among all production line equipment is 7, so 7 is taken as the final modifiable product manufacturing capacity. When the manufacturing capacity of all production line equipment 01-08 is increased within this range, it will not exceed the maximum product manufacturing capacity of any production line equipment, so as to ensure safe regulation. This data is only used as the processing source of the subsequent data, which can also reduce the huge data processing load brought by a plurality of parameters of different production line equipment. In some embodiments, the value of modifiable product manufacturing capacity obtained from the analysis is decomposed according to the operation rules to form different sub data sets or arrays.
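As a minimal sketch of the comparison described above, the following computes each equipment's modifiable capacity (maximum minus real-time) and reduces all of them to the final value. The function names are assumptions; the numbers are taken from Table 1.

```python
def modifiable_capacity(max_capacity: int, real_time_capacity: int) -> int:
    # Modifiable capacity = maximum capacity minus real-time capacity.
    return max_capacity - real_time_capacity

def final_modifiable_capacity(equipment: dict) -> int:
    # Compare all production line equipment and keep the minimum value.
    return min(modifiable_capacity(mx, rt) for mx, rt in equipment.values())

# (maximum, real-time) capacities for equipment 01-08, from Table 1.
line = {
    "01": (32, 24), "02": (37, 26), "03": (43, 35), "04": (44, 37),
    "05": (55, 42), "06": (45, 37), "07": (36, 29), "08": (55, 44),
}
print(final_modifiable_capacity(line))  # → 7
```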
The operation rules are as follows: taking every natural number less than or equal to the value of modifiable product manufacturing capacity as a modifiable value, and forming a sequentially sorted array of all modifiable values; or presetting an allowable modifiable unit capacity by the service platform, multiplying the allowable modifiable unit capacity by each natural number starting from zero, and taking all the values whose calculation results are less than the value of modifiable product manufacturing capacity as a sub data set, the allowable modifiable unit capacity being the minimum modification of product manufacturing capacity that is allowed for each production line equipment. Taking Table 1 as an example, when the modifiable product manufacturing capacity is 7, the natural numbers less than or equal to 7 are 0, 1, 2, 3, 4, 5, 6, and 7. Thus, the above values are taken as modifiable values to form an array, and the user platform may select within the range of the array. Similarly, when the modifiable product manufacturing capacity is 7 and the allowable modifiable unit capacity is set to 2, a data set including 0, 2, 4, and 6 may be formed, and the user platform may select an increased product manufacturing capacity from the data set.
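The two operation rules can be sketched as follows. The function names are hypothetical, and Rule 2 assumes a positive allowable modifiable unit capacity.

```python
def modifiable_array(value: int) -> list:
    # Rule 1: every natural number less than or equal to the value,
    # sorted sequentially into an array.
    return list(range(value + 1))

def modifiable_sub_data_set(value: int, unit: int) -> list:
    # Rule 2: multiples of the allowable modifiable unit capacity
    # (starting from zero) whose results stay below the value.
    # Assumes unit > 0.
    result, n = [], 0
    while unit * n < value:
        result.append(unit * n)
        n += 1
    return result

print(modifiable_array(7))            # → [0, 1, 2, 3, 4, 5, 6, 7]
print(modifiable_sub_data_set(7, 2))  # → [0, 2, 4, 6]
```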
In some embodiments, after all production line equipment is modified to complete the current product manufacturing task, when it is necessary to restore the starting manufacturing capacity, it may be restored by the following method: the sub platforms of the sensor network platform take the data of real-time product manufacturing capacity, before manufacturing is performed by the production line equipment according to the configuration files, as the basic data; after the object platform performs manufacturing according to the configuration files and the user platform sends a data rollback instruction, the service platform performs the data processing on the data rollback instruction and sends it to the general platform of the management platform. The general platform of the management platform sends the data rollback instruction to a plurality of sub platforms of the management platform at the same time, and the plurality of sub platforms of the management platform perform the data processing on the data rollback instruction to generate data recognizable by the sensor network platform and send the recognizable data to the general platform of the sensor network platform. The general platform of the sensor network platform receives the data rollback instruction and, after performing the data processing on it, respectively sends the processed data rollback instruction to each sub platform of the sensor network platform. The sub platforms of the sensor network platform receive the data rollback instruction, perform the rollback operation with the basic data in each sub platform as the rollback data, send the basic data to the production line equipment, and update and cover the parameter value of the existing product manufacturing capacity.
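The rollback behaviour above can be sketched as a sub platform that records the pre-modification capacity as basic data and restores it on a rollback instruction. The class and method names are illustrative assumptions, not part of the disclosure.

```python
class SensorSubPlatform:
    """Sketch of a sensor network sub platform keeping basic data for rollback."""

    def __init__(self, real_time_capacity: int):
        # Basic data: real-time capacity recorded before manufacturing
        # is performed according to the configuration files.
        self.basic_data = real_time_capacity
        self.capacity_parameter = real_time_capacity

    def apply_configuration(self, modification: int) -> int:
        # Update the existing capacity parameter with the modification value.
        self.capacity_parameter += modification
        return self.capacity_parameter

    def rollback(self) -> int:
        # Data rollback: restore the basic data, covering the existing
        # parameter value of product manufacturing capacity.
        self.capacity_parameter = self.basic_data
        return self.capacity_parameter

sub = SensorSubPlatform(real_time_capacity=24)
sub.apply_configuration(7)  # parameter becomes 31
sub.rollback()              # parameter restored to 24
```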
In some embodiments, the plurality of sub platforms of the sensor network platform integrating the third instruction with the data of real-time product manufacturing capacity to form different types of configuration files and sending the configuration files to the corresponding object platform includes: extracting modification instruction data of the product manufacturing capacity from the third instruction, and obtaining parameter values of real-time product manufacturing capacity by adding the modification value of the product manufacturing capacity in the modification instruction data to the data of real-time product manufacturing capacity, by the plurality of sub platforms of the sensor network platform; and forming different types of configuration files from the parameter values of real-time product manufacturing capacity using the operation rules of different production line equipment, and sending the configuration files to the corresponding object platform. Through the above operations, the plurality of sub platforms of the sensor network platform may convert the third instruction into the parameter value of real-time product manufacturing capacity, so that the production line equipment may directly read and use the configuration file, further reducing the difficulty of data interaction of the production line equipment.
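The configuration-file formation step can be sketched as below: the modification value carried by the third instruction is added to the real-time data to obtain the new capacity parameter. The field names and the dict-shaped "configuration file" are illustrative assumptions.

```python
def build_configuration_file(third_instruction: dict,
                             real_time_capacity: int,
                             equipment_type: str) -> dict:
    # Extract the modification value from the third instruction and add it
    # to the real-time data to obtain the parameter value of real-time
    # product manufacturing capacity.
    modification = third_instruction["capacity_modification"]
    return {
        "equipment_type": equipment_type,
        "real_time_capacity_parameter": real_time_capacity + modification,
    }

config = build_configuration_file(
    {"capacity_modification": 5}, real_time_capacity=24,
    equipment_type="cam assembly installation equipment")
print(config["real_time_capacity_parameter"])  # → 29
```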
In some embodiments, the object platform receiving the configuration file sent by the sub platform of the corresponding sensor network platform and performing manufacturing according to the configuration file includes: the production line equipment of the object platform receiving the configuration files as update files sent by the sub platforms of the corresponding sensor network platform, and updating and iterating the parameter value of the existing product manufacturing capacity of the production line equipment using the parameter value of real-time product manufacturing capacity in the configuration file, and the production line equipment controlling the product manufacturing capacity in unit time. In some embodiments, the number of workpieces manufactured by different production line equipment per unit time is different; in order to minimize the impact of the modified manufacturing capacity on all production line equipment, it is best to modify the manufacturing capacity of the production line equipment with small manufacturing capacity first. Therefore, it is necessary for different production line equipment to modify the manufacturing capacity according to different execution times, which may be performed by the following method. When the first instruction corresponds to different execution times, the sub platforms of the management platform write the execution time into the corresponding third instruction. When the sub platform databases of the sensor network platform receive and store the third instruction, the sub platforms of the sensor network platform extract the execution time using the processors of the sub platforms. When the third instruction is integrated with the data of real-time product manufacturing capacity to form the configuration files, the execution time is written into the configuration files.
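The execution-time staggering described here, with smaller-capacity equipment modified first, can be sketched as follows. The fixed spacing interval and the function name are assumptions for illustration.

```python
def assign_execution_times(capacities: dict, start: int, interval: int) -> dict:
    # Order equipment by ascending manufacturing capacity so that equipment
    # with smaller capacity is modified first, spacing executions evenly.
    ordered = sorted(capacities, key=capacities.get)
    return {equipment: start + i * interval
            for i, equipment in enumerate(ordered)}

times = assign_execution_times({"01": 32, "02": 37, "03": 43},
                               start=0, interval=10)
print(times)  # → {'01': 0, '02': 10, '03': 20}
```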
After the object platform receives the configuration files sent by the sub platforms of the corresponding sensor network platform, the object platform extracts the execution time and performs manufacturing according to the configuration files at the execution time. The processors of the sub platforms are respectively arranged in the corresponding gateways of the sub platforms of the sensor network platform. As shown in FIG. 2, some embodiments of the present disclosure aim to provide a control method for the industrial internet of things with dual front sub platform. The industrial internet of things includes a user platform, a service platform, a management platform, a sensor network platform, and an object platform that interact in turn. The service platform adopts a centralized layout, and the management platform and the sensor network platform adopt a front sub platform layout. The centralized layout refers to the service platform uniformly receiving data, uniformly processing the data, and uniformly sending the data. The front sub platform layout refers to each of the management platform and the sensor network platform being provided with a general platform and a plurality of sub platforms; the plurality of sub platforms respectively store and process data of different types or different receiving objects sent by a lower platform, and the general platform stores, processes, and transmits the data to an upper platform after summarizing the data of the plurality of sub platforms. A user sends a modification instruction of product manufacturing parameters according to production needs. The user platform receives the modification instruction to modify the product manufacturing parameters of a production line, generates a first instruction, and sends the first instruction to the service platform. The product manufacturing parameters include a product manufacturing capacity.
The service platform receives and processes the first instruction to generate a second instruction recognized by the management platform and sends the second instruction to the general platform of the management platform. The general platform of the management platform receives the second instruction and sends the second instruction to a plurality of sub platforms of the management platform at the same time. The plurality of sub platforms of the management platform perform data processing on the second instruction to generate a third instruction recognized by the sensor network platform, and the third instruction is transmitted to the general platform of the sensor network platform through the sub platforms of the management platform, respectively. The general platform of the sensor network platform receives the third instruction and sends the third instruction to a plurality of sub platforms of the sensor network platform at the same time. The plurality of sub platforms of the sensor network platform integrate the third instruction with data of real-time product manufacturing capacity to form different types of configuration files and send the configuration files to the corresponding object platforms. The sub platforms of the sensor network platform are provided with independent sub platform databases, and the data of real-time product manufacturing capacity is real-time data, obtained by a product meter, that is stored in the sub platform database corresponding to the object platform. The object platform receives the configuration files sent by the sub platforms of the corresponding sensor network platform and performs manufacturing, or sends out purchase reminders, according to the configuration files.
The following describes the industrial internet of things with dual front sub platforms and its control method, taking as an example an automobile production process in which the product manufacturing parameters of the production line are determined through a material requirement planning system. The automobile production process may include a stamping process, a welding process, a coating process, and a general assembly process. Material requirement planning (MRP) is a material planning management mode in industrial manufacturing enterprises in which a backward plan is made according to the product structure, taking each item as the planning object and the completion period as the time benchmark, based on the subordination and capacity relationships of items at all levels, and the order of release times of the items is determined according to the length of their lead times. The material requirement planning system is a management information system based on logistics demand planning. A production plan and a purchase plan may be determined by inputting basic data into the MRP system. A production plan is the plan by which the enterprise makes an overall arrangement of production tasks and specifically formulates the variety, quantity, quality, and progress of the products to be produced. A purchase plan is the predictive arrangement and deployment of material purchase management activities during the planning period. In some embodiments, the user may send a modification instruction of product manufacturing parameters according to production needs; the user platform may receive the modification instruction, modify the product manufacturing parameters of a production line, generate a first instruction, and send the first instruction to the service platform. Production needs may be determined according to the master production plan. The master production plan covers all kinds of products and spare parts to be produced within a planned period of time.
The service platform may receive and process the first instruction, generate a second instruction recognized by the management platform, and send it to the general platform of the management platform. The general platform of the management platform may receive the second instruction and send the second instruction to a plurality of sub platforms of the management platform at the same time. The plurality of sub platforms of the management platform may perform data processing on the second instruction to generate a third instruction that may be recognized by the sensor network platform, and the third instruction is transmitted to the general platform of the sensor network platform through the sub platforms of the management platform. The plurality of sub platforms of the management platform may be configured as a stamping management platform, a welding management platform, a coating management platform, and a general assembly management platform. The material loss of a single vehicle, the production capacity per unit time, and the safety stock of each material in each automobile production process may be predicted respectively by the plurality of sub platforms of the management platform. The plurality of sub platforms of the management platform may take the production capacity per unit time, the material loss of a single vehicle, the safety stock of each material, the master production plan, and the actual stock in each automobile production process as the basic data of the material requirement planning system to determine the production plan and the purchase plan. The plurality of sub platforms of the management platform may generate the third instruction recognized by the sensor network platform based on the production plan and the purchase plan. The plurality of sub platforms of the management platform may send material purchase reminders according to the relationship between the actual stock at a future time and the safety stock.
The plurality of sub platforms of the management platform may generate the third instruction recognized by the sensor network platform based on the material purchase reminders. The general platform of the sensor network platform may receive the third instruction and send the third instruction to a plurality of sub platforms of the sensor network platform at the same time. The plurality of sub platforms of the sensor network platform may integrate the third instruction with the data of the real-time product manufacturing capacity to form different types of configuration files, and send the configuration files to the corresponding object platforms. The sub platforms of the sensor network platform are provided with independent sub platform databases, and the data of the real-time product manufacturing capacity is real-time data, obtained by a product meter, stored in the sub platform database corresponding to the object platform. The object platform receives the configuration file sent by the corresponding sub platform of the sensor network platform and performs manufacturing and/or sends a purchase reminder according to the configuration file. The object platform may be configured as stamping machine tools and robots, automatic welding equipment and robots, heavy industry spraying robots, general assembly equipment, terminal equipment, and acquisition equipment to provide the required data for the plurality of sub platforms of the management platform. FIG. 3 is an exemplary flowchart of a plurality of sub platforms of the management platform performing data processing on the second instruction to generate a third instruction recognized by the sensor network platform according to some embodiments of the present disclosure. As shown in FIG. 3, the process 300 includes the following steps. In some embodiments, the process 300 may be executed by a processor.
In step 310, the plurality of sub platforms of the management platform are configured as a stamping management platform, a welding management platform, a coating management platform, and a general assembly management platform based on the stamping process, welding process, coating process, and general assembly process in an automobile production process. In some embodiments, taking the automobile production process as an example, the automobile production process may include a stamping process, a welding process, a coating process, and a general assembly process. Accordingly, the plurality of sub platforms of the management platform may be configured as the stamping management platform, the welding management platform, the coating management platform, and the general assembly management platform. The object platform may be configured as stamping machine tools and robots, automatic welding equipment and robots, heavy industry spraying robots, general assembly equipment, terminal equipment, and acquisition equipment. In step 320, respectively predicting the material loss of a single vehicle, the production capacity per unit time, and the safety stock of each material in each automobile production process by the plurality of sub platforms of the management platform. The material loss of a single vehicle refers to the materials lost in each automobile production process in the course of producing a single vehicle. For example, the material loss in the stamping process may include metal material loss; the material loss in the welding process may include welding material loss; the material loss in the coating process may include coating loss; and the material loss in the general assembly process may include the loss of parts and connectors. The production capacity per unit time refers to the count of pieces completed in each automobile production process per unit time. For example, the production capacity per unit time of the coating process may be 100 vehicles per day.
The safety stock of a material refers to the material stock that ensures the normal and orderly operation of the whole production line. For example, the safety stock of paint in the coating process may be 2000 L. In some embodiments, taking the coating process as an example, when the coating process is fully automated by the heavy industry spraying robot without manual participation, the coating management platform may determine the coating loss of a single vehicle through the coating loss prediction model based on the product specification, the coating loss per unit area of the coating process, and the situation of the coating equipment. When the coating process is not fully automated by the heavy industry spraying robot and requires manual participation, the input of the coating loss prediction model may also include a degree of manual proficiency. The product specification refers to the volume and size of the product; for example, it may be the product of the length, width, and height of the product. In some embodiments, the product specification may be obtained based on a terminal device in the object platform. The coating loss per unit area in the coating process refers to the amount of coating lost per unit area of the vehicle. In some embodiments, the coating loss per unit area in the coating process may be obtained based on the heavy industry spraying robot in the object platform. For example, the historical average amount of paint lost per unit area of the vehicles sprayed by the heavy industry spraying robot may be taken as the coating loss per unit area. The situation of the coating equipment refers to the basic situation of the equipment used in the coating process.
In some embodiments, the situation of the coating equipment may include equipment attribute information (e.g., equipment model, etc.), equipment working parameters (e.g., shaping air volume, spraying distance, rotating cup speed, spraying flow, etc.), and equipment maintenance and replacement information (e.g., equipment service time, whether the equipment has been repaired). In some embodiments, the situation of the coating equipment may be obtained based on the heavy industry spraying robot in the object platform. The coating loss prediction model may be a multi-classification model. For more information about the coating loss prediction model, please refer to other parts of the present disclosure, e.g., FIG. 4 and its related description. In some embodiments, when the coating process is fully automated by the heavy industry spraying robot without manual participation, the coating loss of a single vehicle may also be obtained based on collected historical coating loss data of a single vehicle. For example, the average value of the historical coating loss data of a single vehicle may be taken as the coating loss of a single vehicle. The prediction method of the material loss of a single vehicle in the other automobile production processes is similar to that of the coating loss of a single vehicle in the coating process, which will not be repeated here. The working parameters of the equipment in the stamping process may include the closing height of the press, the height of the stretching pad, the pressure, the angle of the air source, the number of sensors, the stroke, etc. The working parameters of the equipment in the welding process may include preloading time, welding time, welding pressure, welding current, preheating current, preheating time, cooling holding time, rest time, etc. The working parameters of the equipment in the general assembly process may include the working parameters of the general assembly equipment.
In some embodiments, a plurality of sub platforms of the management platform may determine the adjustment value of the production capacity per unit time of each automobile production process through the capacity prediction model based on the labor situation of each automobile production process and the preset production capacity per unit time of each automobile production process. Then, the production capacity per unit time of each automobile production process is determined based on the preset production capacity per unit time of each automobile production process and the adjustment value of the production capacity per unit time of each automobile production process. In some embodiments, the processor may input the labor situation of each automobile production process and the preset production capacity per unit time of each automobile production process into the capacity prediction model, and the capacity prediction model outputs the adjustment value of the production capacity per unit time of each automobile production process. The capacity prediction model may be a deep learning model, such as a deep neural network (DNN), a recurrent neural network (RNN), or a convolutional neural network (CNN). For more information about the capacity prediction model, please refer to other parts of the present disclosure, e.g., FIG. 5 and its related descriptions. In some embodiments, a plurality of sub platforms of the management platform may determine whether the adjustment value of the production capacity per unit time of each automobile production process determined by the capacity prediction model is greater than the preset production capacity per unit time of each automobile production process.
In response to a determination that the adjustment value of the production capacity per unit time of an automobile production process is greater than the preset production capacity per unit time of that process, the preset production capacity per unit time is taken as the final production capacity per unit time of that process. In response to a determination that the adjustment value of the production capacity per unit time is less than or equal to the preset production capacity per unit time, the adjustment value determined by the capacity prediction model is taken as the final production capacity per unit time of that process. For example, the preset production capacity per unit time of the vehicle coating process is 50 vehicles/hour; if the adjustment value of the production capacity per unit time of the coating process is greater than 50 vehicles/hour, 50 vehicles/hour may be determined as the production capacity per unit time of the coating process. If the adjustment value of the production capacity per unit time of the coating process is 40 vehicles/hour, which is less than the preset 50 vehicles/hour, 40 vehicles/hour may be taken as the production capacity per unit time of the coating process. In some embodiments, the safety stock of a material may be determined based on the material loss of a single vehicle. In some embodiments, taking the coating process as an example, the coating management platform may determine the material demand in the production time period from the current time to the future time based on the material loss of a single vehicle, so as to determine the safety stock of the paint.
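The clamping rule described above (the preset capacity acts as a ceiling on the model-predicted adjustment value) can be sketched as a small helper. The function name is illustrative and not taken from the disclosure.

```python
def final_capacity_per_unit_time(preset, adjustment):
    """Clamp the model-predicted adjustment value by the preset capacity.

    If the adjustment value exceeds the preset production capacity per
    unit time, fall back to the preset value; otherwise the adjustment
    value itself becomes the final production capacity per unit time.
    """
    return preset if adjustment > preset else adjustment

# Coating-process example from the text: preset is 50 vehicles/hour.
print(final_capacity_per_unit_time(50, 60))  # adjustment too high -> 50
print(final_capacity_per_unit_time(50, 40))  # adjustment accepted -> 40
```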
For example, if the coating loss of a single vehicle is 2 L, the production period from the current time to the future time is 1 day, and the production capacity of the coating process is 1000 vehicles per day, the material demand of the production time period from the current time to the future time is 2×1×1000=2000 L, which may be used directly as the safety stock of paint, or may be used as the safety stock of paint after an appropriate increase (for example, to 2100 L). In some embodiments, the safety stock of paint may be adjusted based on the confidence determined by the coating loss prediction model. For example, if the confidence is high, the accuracy of the determined coating loss of a single vehicle is high, so the safety stock of paint may be set lower; if the confidence is low, the accuracy of the determined coating loss of a single vehicle is relatively low, so the safety stock of paint may be set higher. For more information about the confidence determined by the coating loss prediction model, please refer to other parts of the present disclosure, e.g., FIG. 4 and its related description. In step 330, determining a production plan and a purchase plan by taking the material loss of a single vehicle, the production capacity per unit time, and the safety stock of each material in each automobile production process as basic data of a material requirement planning system. Basic data refers to the data on certain basic materials. In some embodiments, the basic data may include the production capacity per unit time, the material loss of a single vehicle, and the safety stock of each material in each automobile production process. In some embodiments, the basic data also includes a master production plan and a current stock. The master production plan covers all kinds of products and spare parts to be produced within a planned period of time, and may be obtained by the terminal device in the object platform.
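The safety-stock calculation above (period demand derived from per-vehicle loss, optionally padded when model confidence is low) can be written as a short sketch. The function name and the confidence-to-buffer mapping are assumptions for illustration only.

```python
def safety_stock(loss_per_vehicle, period_days, capacity_per_day, buffer_ratio=0.0):
    """Safety stock = material demand over the period, optionally padded.

    A low prediction-model confidence can be mapped to a larger
    buffer_ratio so the safety stock is set higher, as described
    in the text; buffer_ratio = 0 uses the demand directly.
    """
    demand = loss_per_vehicle * period_days * capacity_per_day
    return demand * (1.0 + buffer_ratio)

# Worked example from the text: 2 L/vehicle, 1 day, 1000 vehicles/day.
print(safety_stock(2, 1, 1000))        # -> 2000.0 L
print(safety_stock(2, 1, 1000, 0.05))  # with a 5% pad -> ~2100 L
```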
The current stock refers to the stock at the current time, which may be obtained by the acquisition equipment in the object platform. In some embodiments, a plurality of sub platforms of the management platform may take the production capacity per unit time, the material loss of a single vehicle, the safety stock of each material, the master production plan, and the actual stock in each automobile production process as the basic data of the material requirement planning system to determine the production plan and the purchase plan. In step 340, generating the third instruction recognized by the sensor network platform based on the production plan and the purchase plan. In some embodiments, a plurality of sub platforms of the management platform may generate third instructions recognized by the sensor network platform based on the production plans and the purchase plans. In step 350, sending material purchase reminders by a plurality of sub platforms of the management platform according to the relationship between the actual stock at a future time and the safety stock. The actual stock at a future time refers to the predicted value of the stock at a certain time after the current time. For example, if the current time is Jan. 1, 2030, the actual stock at the future time may be the predicted stock on Mar. 1, 2030. In some embodiments, the actual stock at the future time is the actual stock at the current time minus the material demand in the production time period from the current time to the future time, plus the planned receipts in that time period. For example, taking the paint stock as an example, if the length of the production time period from the current time to the future time is 1 day, the actual stock of paint at the current time is 3000 L, the demand for paint in one day is 2000 L, and the planned receipts in one day are 1000 L, the actual stock of paint at the future time is 3000−2000+1000=2000 L.
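The projection above is a simple balance equation; a minimal sketch (function name is illustrative) is:

```python
def future_stock(current_stock, demand, planned_receipts):
    """Projected actual stock at the future time: the actual stock at the
    current time, minus the material demand over the period, plus the
    planned receipts in the same period."""
    return current_stock - demand + planned_receipts

# Paint example from the text: 3000 L - 2000 L + 1000 L = 2000 L.
print(future_stock(3000, 2000, 1000))  # -> 2000
```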
In some embodiments, the material demand in the production time period from the current time to the future time may be determined based on the length of the production time period, the production capacity of the whole vehicle per unit time, and the material loss of a single vehicle. In some embodiments, the material demand in the production time period from the current time to the future time is the length of the production time period multiplied by the production capacity per unit time of the whole vehicle, multiplied by the material loss of a single vehicle. For example, taking the paint stock as an example, if the length of the production time period from the current time to the future time is 1 day, the production capacity of the whole vehicle in one day is 1000 vehicles, and the coating loss of a single vehicle is 2 L, the coating demand in the corresponding production time period is 1×1000×2=2000 L. In some embodiments, the production capacity per unit time of the whole vehicle may be the production capacity per unit time of the last process. For example, if the last process is the general assembly process, the production capacity per unit time of the general assembly process is taken as the production capacity per unit time of the whole vehicle. In some embodiments, a material purchase reminder may be sent when the actual stock at the future time is less than the safety stock. For example, taking the coating process as an example, if the safety stock of paint is 120 L and the actual stock at the future time is less than 120 L, a material purchase reminder may be sent. In some embodiments, a material purchase reminder may be sent based on the daily material demand and the actual stock at the future time.
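The demand formula and the reminder condition above can be sketched together; the function names are assumptions introduced for illustration.

```python
def period_demand(period_days, capacity_per_day, loss_per_vehicle):
    """Material demand = period length x whole-vehicle production
    capacity per unit time x material loss of a single vehicle."""
    return period_days * capacity_per_day * loss_per_vehicle

def needs_purchase_reminder(future_actual_stock, safety_stock):
    """A material purchase reminder is sent when the actual stock at
    the future time falls below the safety stock."""
    return future_actual_stock < safety_stock

print(period_demand(1, 1000, 2))          # -> 2000 (L of paint)
print(needs_purchase_reminder(100, 120))  # below 120 L safety stock -> True
print(needs_purchase_reminder(150, 120))  # -> False
```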
For example, taking the coating process as an example, the actual stock of paint at the future time is 400 L, while the paint required for daily production is expected to be 100 L. In this case, the actual stock at the future time is greater than the safety stock, but a reminder may still be sent as follows: the stock will be consumed within 4 days, please replenish the stock in time. In step 360, generating a third instruction recognized by the sensor network platform based on the material purchase reminder. In some embodiments, a plurality of sub platforms of the management platform may generate a third instruction recognized by the sensor network platform based on the material purchase reminder. The production plan and the purchase plan are determined by taking the production capacity per unit time, the material loss of a single vehicle, and the safety stock of each material in each automobile production process, respectively predicted by the plurality of sub platforms, as the basic data of the MRP system. In this way, materials of appropriate quantity and variety may be purchased at an appropriate ordering time, the stock level may be kept as low as possible, and the various materials required for production may be obtained in time to ensure the timely supply of products required by users. FIG. 4 is a schematic diagram of a structure of a coating loss prediction model according to some embodiments of the present disclosure. In some embodiments, as shown in FIG. 4, the coating loss prediction model 420 is a multi-classification model, which may include a neural network model (e.g., CNN, RNN, DNN, etc.). In some embodiments, as shown in FIG. 4, when the coating process is fully automated by the heavy industry spraying robot without manual participation, the input of the coating loss prediction model 420 may include the product specification 410-1, the coating loss per unit area of the coating process 410-2, and the situation of the coating equipment 410-3, and the output is the coating loss of a single vehicle 430.
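The days-of-stock reminder described at the start of this passage (stock above the safety level but covering only a few days of demand) can be sketched as follows; the function name and the 4-day threshold are assumptions chosen to match the worked example.

```python
def days_of_stock_reminder(future_actual_stock, daily_demand, threshold_days=4):
    """Even when the projected stock exceeds the safety stock, warn when
    it covers no more than `threshold_days` of daily material demand."""
    days_left = future_actual_stock / daily_demand
    if days_left <= threshold_days:
        return (f"the stock will be consumed within {int(days_left)} days, "
                "please replenish the stock in time")
    return None  # no reminder needed

# Example from the text: 400 L projected stock, 100 L needed per day.
print(days_of_stock_reminder(400, 100))
```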
When the coating process is not fully automated by the heavy industry spraying robot and requires manual participation, the input of the coating loss prediction model may also include the degree of manual proficiency 410-4. The degree of manual proficiency refers to the proficiency of manual operation. In some embodiments, the degree of manual proficiency may be a numerical value or a letter that reflects the manual proficiency. For example, the degree of manual proficiency may be expressed by a value between 1 and 10, a letter from A to F, or a star rating; a larger value, a letter closer to A, or a higher star rating indicates a higher degree of manual proficiency. In some embodiments, the degree of manual proficiency may be determined through a proficiency prediction model based on the number of vehicles sprayed per unit time and the qualification rate of sprayed vehicles. In some embodiments, the type of the proficiency prediction model may include a neural network model (e.g., CNN, RNN, DNN, etc.). In some embodiments, the proficiency prediction model may be used to process the number of vehicles sprayed per unit time and the qualification rate of sprayed vehicles to determine the degree of manual proficiency. For example, the number of vehicles sprayed per unit time and the qualification rate of sprayed vehicles may be input into the proficiency prediction model, and the proficiency prediction model outputs the degree of manual proficiency. In some embodiments, the proficiency prediction model may be trained based on historical data. The historical data includes the number of vehicles sprayed per unit time by historical workers and the qualification rate of historically sprayed vehicles, which may be used as training samples. The labels of the training samples may be historical degrees of manual proficiency.
The historical degree of manual proficiency may be determined manually. The labeled training samples may be input into an initial proficiency prediction model, and the parameters of the initial proficiency prediction model may be updated through training. When the model meets the preset conditions, the training stops and the trained proficiency prediction model is obtained. In some embodiments, as shown in FIG. 4, the parameters of the coating loss prediction model 420 may be trained with a plurality of groups of labeled first training samples 440. In some embodiments, a plurality of groups of first training samples 440 may be obtained, and each group of first training samples 440 may include a plurality of training data and labels corresponding to the training data. The training data may include historical product specifications, the coating loss per unit area of the historical coating process, and the situation of the historical coating equipment. The label of the training data may be the actual value of the coating loss of a historical single vehicle. When the coating loss prediction model 420 is trained, the coating loss of a single vehicle may be divided into several sections (for example, 0 L˜20 L, 20 L˜40 L, 40 L˜60 L, 60 L˜80 L, 80 L˜100 L), and the label may be constructed based on the section in which the actual value falls. For example, if the actual value of the coating loss of a historical single vehicle falls in the 20 L˜40 L section, the label is [0, 1, 0, 0, 0]; that is, the label is 1 at the position corresponding to that section and 0 at the other positions. Correspondingly, the coating loss of a single vehicle 430 output by the coating loss prediction model 420 is a vector, and each value in the vector represents the possibility of the loss belonging to the corresponding section. The section with the largest value in the vector is taken as the prediction result of the model, and the output value of the corresponding section is the confidence.
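The one-hot label construction described above can be sketched as a small helper over the five loss sections; the names and the closed upper endpoint of the last section are assumptions for illustration.

```python
# Section edges for the coating loss of a single vehicle, in litres:
# [0, 20), [20, 40), [40, 60), [60, 80), [80, 100].
SECTION_EDGES = [0, 20, 40, 60, 80, 100]

def section_label(actual_loss):
    """One-hot label: 1 at the section containing the actual value,
    0 at every other position."""
    label = [0] * (len(SECTION_EDGES) - 1)
    for i in range(len(label)):
        in_section = SECTION_EDGES[i] <= actual_loss < SECTION_EDGES[i + 1]
        # Include the top edge in the final section.
        if in_section or (i == len(label) - 1 and actual_loss == SECTION_EDGES[-1]):
            label[i] = 1
            break
    return label

# A historical loss of 25 L falls in the 20 L-40 L section.
print(section_label(25))  # -> [0, 1, 0, 0, 0]
```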
For example, the coating loss prediction model 420 may output probability values over the five coating loss sections, which may be expressed as [0.1, 0.69, 0.05, 0.06, 0.1]; the section 20 L˜40 L corresponding to the highest probability value of 0.69 is then the predicted coating loss of a single vehicle, and the highest probability value of 0.69 is the confidence of the coating loss of a single vehicle. Through a plurality of groups of first training samples 440, the parameters of the initial coating loss prediction model 450 may be updated to obtain the trained coating loss prediction model 420. In some embodiments, the parameters of the initial coating loss prediction model 450 may be iteratively updated based on a plurality of first training samples so that the loss function of the model meets the preset conditions; for example, the loss function converges, or the loss function value is less than a preset value. When the loss function meets the preset conditions, the model training is completed, and the trained coating loss prediction model is obtained. The coating loss prediction model 420 and the initial coating loss prediction model 450 have the same model structure. In some embodiments, when the input of the coating loss prediction model 420 also includes the degree of manual proficiency 410-4, the training samples may also include historical degrees of manual proficiency. By taking the product specification, the coating loss per unit area of the coating process, and the situation of the coating equipment as the input of the coating loss prediction model, combined with the related prediction result of the degree of manual proficiency, the coating loss prediction model may predict the coating loss of a single vehicle more accurately.
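Turning the model's output vector into a predicted section plus a confidence, as in the example above, is a simple argmax; the section names are illustrative.

```python
SECTIONS = ["0-20 L", "20-40 L", "40-60 L", "60-80 L", "80-100 L"]

def predict_section(probabilities):
    """Take the section with the largest output value as the prediction;
    that largest value is the confidence."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return SECTIONS[best], probabilities[best]

# Output vector from the text.
section, confidence = predict_section([0.1, 0.69, 0.05, 0.06, 0.1])
print(section, confidence)  # -> 20-40 L 0.69
```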
FIG. 5 is a schematic diagram of a structure of a capacity prediction model according to some embodiments of the present disclosure. In some embodiments, as shown in FIG. 5, the input of the capacity prediction model 520 may include the labor situation 510-1 of each automobile production process and the preset production capacity per unit time 510-2 of each automobile production process, and the output is the adjustment value 530 of the production capacity per unit time of each automobile production process. In some embodiments, the labor situation may include the number of workers and the degree of manual proficiency. For more information about the degree of manual proficiency, please refer to other parts of the present disclosure, e.g., FIG. 4 and its related description. In some embodiments, as shown in FIG. 5, the parameters of the capacity prediction model 520 may be trained with a plurality of groups of labeled second training samples 540. In some embodiments, a plurality of groups of second training samples 540 may be obtained, and each group of second training samples 540 may include a plurality of training data and labels corresponding to the training data. The training data may include the historical labor situation of each automobile production process and the historical preset production capacity per unit time of each automobile production process, i.e., the labor situation and preset production capacity per unit time within a historical time period. The label of the training data may be the actual value of the historical production capacity per unit time of each automobile production process. In some embodiments, the parameters of the initial capacity prediction model 550 may be iteratively updated based on a plurality of second training samples so that the loss function of the model meets the preset conditions.
For example, the loss function converges, or the loss function value is less than a preset value. When the loss function meets the preset conditions, the model training is completed, and the trained capacity prediction model is obtained. The capacity prediction model 520 and the initial capacity prediction model 550 have the same model structure. By taking the labor situation of each automobile production process and the preset production capacity per unit time of each automobile production process as the input of the capacity prediction model, the capacity prediction model may predict the adjustment value of the production capacity per unit time of each automobile production process more accurately. The production capacity per unit time of each automobile production process is then determined more accurately based on the relationship between the preset production capacity per unit time of each automobile production process and the adjustment value of the production capacity per unit time of each automobile production process. In some embodiments, a computer-readable storage medium may be used to store computer instructions. When the computer instructions are executed by a processor, the control method for the industrial internet of things with dual front sub platforms can be realized. Those skilled in the art may realize that the units and algorithm steps of each example described in combination with the embodiments disclosed herein can be realized by electronic hardware, computer software, or a combination of the two. In order to clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been generally described according to function in the above description.
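The iterative training procedure described for both prediction models (update the parameters until the loss function converges or drops below a preset value) can be sketched generically. This is a minimal illustration, not the disclosure's training code; the function names, stopping thresholds, and the toy one-parameter model are all assumptions.

```python
def train(update_step, loss_fn, params, max_iters=10000,
          loss_threshold=1e-4, converge_eps=1e-9):
    """Iteratively update parameters until the loss function value is
    below a preset value, or stops decreasing (i.e., converges)."""
    prev_loss = float("inf")
    for _ in range(max_iters):
        params = update_step(params)
        loss = loss_fn(params)
        if loss < loss_threshold or abs(prev_loss - loss) < converge_eps:
            break
        prev_loss = loss
    return params

# Toy example: fit w to minimise (w - 3)^2 by gradient descent (lr = 0.1).
loss_fn = lambda w: (w - 3.0) ** 2
step = lambda w: w - 0.1 * 2.0 * (w - 3.0)
w = train(step, loss_fn, 0.0)
print(abs(w - 3.0) < 0.01)  # -> True: training stopped near the optimum
```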
Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical scheme. Those skilled in the art may use different methods to realize the described functions for each specific application, but such realization should not be considered to be beyond the scope of the present disclosure.

In the several embodiments provided in the present application, it should be understood that the disclosed devices and methods may be realized in other ways. For example, the device embodiments described above are only schematic. For example, the division of the units is only a logical function division, and there may be other division modes in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, or may be electrical, mechanical, or other forms of connection. The units described as separate parts may or may not be physically separated.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, each unit may exist separately, or two or more units may be integrated in one unit. The above integrated unit may be realized in the form of hardware or in the form of a software functional unit.

If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a grid device, etc.) to perform all or part of the steps of the method described in various embodiments of the present disclosure. The aforementioned storage media include: a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disc, an optical disc, and other media that may store program codes.

The specific embodiments described above further detail the purpose, technical scheme, and beneficial effects of the present disclosure. It should be understood that the above are only specific embodiments of the present disclosure and are not intended to limit the protection scope of the present disclosure. Any modification, equivalent replacement, improvement, etc., made within the spirit and principles of the present disclosure shall be included in the protection scope of the present disclosure.